This paper presents the Latent Adaptive Planner (LAP), a latent-variable policy for dynamic nonprehensile manipulation (e.g., box catching). LAP infers plans in a low-dimensional latent space and is trained effectively from human demonstration videos. During execution, LAP maintains a posterior distribution over the latent plan and performs variational replanning as new observations arrive, enabling real-time adaptation. To bridge the embodiment gap between humans and robots, we introduce a model-based proportional mapping that reproduces kinematic joint states and object positions from human demonstrations. In challenging box-catching experiments with objects of diverse physical properties, LAP learns human-like compliant motions and adaptive behaviors, achieving high success rates, smooth trajectories, and low energy consumption. Overall, LAP enables dynamic manipulation through real-time adaptation and transfers across heterogeneous robot platforms using the same human demonstration videos.
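The variational replanning loop described above can be pictured as maintaining a Gaussian posterior over the latent plan and refining it with a few gradient steps on an evidence lower bound (ELBO) each time a new observation arrives. The sketch below is illustrative only, not the paper's implementation: the `decoder` network, latent and observation dimensions, optimizer settings, and the dummy observation stream are all assumptions introduced for this example.

```python
import torch

# Hypothetical decoder mapping a latent plan z to a predicted observation;
# a stand-in for LAP's learned generative model (an assumption, not the paper's code).
decoder = torch.nn.Linear(8, 16)

# Variational posterior q(z) = N(mu, diag(softplus(rho)^2)), maintained online.
mu = torch.zeros(8, requires_grad=True)
rho = torch.zeros(8, requires_grad=True)
opt = torch.optim.Adam([mu, rho], lr=1e-2)

def elbo(obs, n_samples=8):
    std = torch.nn.functional.softplus(rho)
    eps = torch.randn(n_samples, 8)
    z = mu + std * eps                                    # reparameterized samples of the latent plan
    recon = decoder(z)
    log_lik = -((recon - obs) ** 2).sum(-1).mean()        # Gaussian log-likelihood up to a constant
    kl = 0.5 * (std**2 + mu**2 - 1 - 2 * std.log()).sum()  # KL(q(z) || N(0, I)) prior term
    return log_lik - kl

# Variational replanning: refine the posterior with a few gradient
# steps whenever a new observation arrives during execution.
for obs in [torch.randn(16) for _ in range(5)]:           # dummy observation stream
    for _ in range(3):
        opt.zero_grad()
        loss = -elbo(obs)
        loss.backward()
        opt.step()
```

Because only the posterior parameters are updated at execution time, each replanning step costs a handful of gradient updates in the low-dimensional latent space, which is what makes real-time adaptation plausible.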