This paper proposes REACT (Real-time Edge-based Autonomous Co-pilot Trajectory Planner), a real-time, lightweight vision-language model (VLM)-based trajectory planning framework that integrates vehicle-to-everything (V2X) communication to overcome the detection limitations of autonomous driving systems. REACT fine-tunes a lightweight VLM to integrate infrastructure-provided hazard alerts with in-vehicle sensor data, understand complex traffic dynamics and vehicle intent through visual embeddings, interpret precise numerical data from symbolic inputs, and generate safety-centric, optimized trajectories through context-sensitive inference. For real-time deployment, REACT utilizes a residual trajectory fusion (RTF) design and a specialized edge adaptation strategy to reduce model complexity and improve inference efficiency. Evaluation results on the DeepAccident benchmark demonstrate state-of-the-art performance, achieving a 77% reduction in collision rate, a 48.2% improvement in Video Panoptic Quality (VPQ), and an inference latency of 0.57 seconds.
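The residual fusion idea can be illustrated schematically. The following is a minimal sketch, not the paper's implementation: it assumes (hypothetically) that the fine-tuned VLM predicts only small per-waypoint offsets that are added to a conventional planner's base trajectory, so the expensive model refines rather than regenerates the plan. All names here (fuse_residual_trajectory, max_offset_m, etc.) are illustrative, not REACT's API.

```python
# Schematic sketch of a residual trajectory fusion step (assumed design,
# not the paper's code): a fast base planner provides the trajectory, and
# the VLM contributes bounded per-waypoint corrections.
import numpy as np

def fuse_residual_trajectory(base_traj: np.ndarray,
                             residual: np.ndarray,
                             max_offset_m: float = 1.5) -> np.ndarray:
    """Add a clipped VLM-predicted residual to the base trajectory.

    base_traj: (T, 2) array of (x, y) waypoints from a conventional planner.
    residual:  (T, 2) array of offsets predicted by the fine-tuned VLM head.
    max_offset_m: hypothetical safety clamp keeping corrections bounded.
    """
    residual = np.clip(residual, -max_offset_m, max_offset_m)
    return base_traj + residual

# Toy usage: a straight base path nudged sideways around a reported hazard.
T = 10
base = np.stack([np.linspace(0.0, 20.0, T), np.zeros(T)], axis=1)
vlm_residual = np.stack([np.zeros(T),
                         0.2 * np.sin(np.linspace(0.0, np.pi, T))], axis=1)
print(fuse_residual_trajectory(base, vlm_residual).round(2))
```

One plausible motivation for such a design, consistent with the abstract's latency claims, is that predicting residuals over a cheap base plan keeps the VLM's output space small and its inference fast.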