This paper addresses the dynamic motion planning problem of computing collision-free trajectories that respect a robot's dynamic constraints. Existing sampling-based planners (SBPs) construct search trees via action propagation to explore the robot's high-dimensional state space, but their reliance on random sampling leads to slow exploration. Learning-based approaches offer faster execution times but fail to generalize to out-of-distribution (OOD) scenarios and lack important guarantees such as safety. In this paper, we present Diffusion Tree (DiTree), a verifiably generalizable framework that efficiently guides state-space exploration within SBPs by leveraging Diffusion Policies (DPs) as informed samplers. DiTree combines the ability of DPs to model complex distributions of expert trajectories conditioned on local observations with the completeness of SBPs, producing verifiably safe solutions for complex dynamical systems within only a few action-propagation iterations. We demonstrate DiTree's performance with an implementation that combines an RRT planner with a DP action sampler trained in a single environment. In a comprehensive evaluation on OOD scenarios, DiTree achieves, on average, a 30% higher success rate than standalone DP or SBP baselines on dynamic vehicle and MuJoCo Ant robot setups (in the latter case, the standalone SBP fails entirely). Beyond simulations, real-world vehicle experiments demonstrate excellent trajectory quality and robustness even under a severe simulation-to-reality gap, confirming DiTree's practical applicability.
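To make the guided-exploration idea concrete, the sketch below shows a minimal, hypothetical version of the loop the abstract describes: a tree search whose expansion step draws short action sequences from a learned sampler rather than purely at random. The kinematic car dynamics, the obstacle model, and the `sample_policy_actions` stub (standing in for a trained diffusion policy conditioned on local observations) are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of policy-guided tree expansion (not the paper's code).
import math
import random

DT, HORIZON = 0.1, 8            # integration step and actions per expansion
OBSTACLES = [(5.0, 5.0, 1.5)]   # circular obstacles: (x, y, radius)
GOAL, GOAL_TOL = (9.0, 9.0), 0.5

def propagate(state, action):
    """Kinematic car: state = (x, y, heading), action = (speed, turn rate)."""
    x, y, th = state
    v, w = action
    return (x + DT * v * math.cos(th), y + DT * v * math.sin(th), th + DT * w)

def collision_free(state):
    x, y, _ = state
    return all((x - ox) ** 2 + (y - oy) ** 2 > r ** 2 for ox, oy, r in OBSTACLES)

def sample_policy_actions(state, goal):
    """Stand-in for a diffusion-policy sampler: steer noisily toward the goal
    so the sketch runs without a trained model."""
    x, y, th = state
    bearing = math.atan2(goal[1] - y, goal[0] - x)
    return [(1.0, 2.0 * (bearing - th) + random.gauss(0, 0.3)) for _ in range(HORIZON)]

def sample_random_actions():
    return [(random.uniform(0.2, 1.0), random.uniform(-1.5, 1.5)) for _ in range(HORIZON)]

def guided_tree_search(start, goal, iters=2000, p_policy=0.8):
    tree = {start: None}                       # node -> parent, for path recovery
    for _ in range(iters):
        node = random.choice(list(tree))       # simplistic node selection
        actions = (sample_policy_actions(node, goal) if random.random() < p_policy
                   else sample_random_actions())
        state = node
        for a in actions:                      # propagate and collision-check each step
            nxt = propagate(state, a)
            if not collision_free(nxt):
                break
            tree[nxt] = state
            state = nxt
            if math.dist(state[:2], goal) < GOAL_TOL:
                path = [state]
                while tree[path[-1]] is not None:
                    path.append(tree[path[-1]])
                return path[::-1]
    return None

if __name__ == "__main__":
    path = guided_tree_search((0.0, 0.0, 0.0), GOAL)
    print("found path with", len(path) if path else 0, "states")
```

Because every candidate state is propagated through the dynamics and collision-checked before being added to the tree, the returned trajectory is feasible by construction, while the policy-guided samples concentrate expansion toward promising regions; the uniform random fallback preserves the exploration behavior of the underlying SBP.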