In this paper, we propose a method that improves the motion risk prediction performance of Vision-Language Models (VLMs) by synthesizing high-risk driving data, addressing the challenge of safety prediction in long-tail autonomous driving scenarios. Using Bird's-Eye View (BEV)-based motion simulation, we model risk from three perspectives: the ego vehicle, other vehicles, and the environment, and generate DriveMRP-10K, a high-risk driving dataset suited to VLM training. We further propose DriveMRP-Agent, a VLM-agnostic risk estimation framework that combines global context, the ego-vehicle viewpoint, and a novel trajectory-projection information injection strategy, enabling VLMs to reason effectively about spatial relationships. Experimental results show that fine-tuning with DriveMRP-10K substantially improves the motion risk prediction performance of multiple VLM backbones (accident recognition accuracy rises from 27.13% to 88.03%), and that DriveMRP-Agent generalizes well to a real-world high-risk driving dataset (accuracy rises from 29.42% to 68.50%).