G-CUT3R is a novel feedforward method that integrates prior information into the existing CUT3R model to improve 3D scene reconstruction. Unlike existing feedforward methods that rely solely on input images, G-CUT3R leverages auxiliary data commonly available in real-world settings, such as depth, camera calibration, and camera poses. We propose a lightweight modification of CUT3R that extracts features from each modality with a dedicated encoder and fuses them with RGB image tokens via zero convolution. This flexible design allows seamless integration of any combination of prior information at inference time. Evaluations on multiple benchmarks, covering 3D reconstruction and other multi-view tasks, show significant performance gains, demonstrating G-CUT3R's ability to effectively exploit available priors while maintaining compatibility with diverse input modalities.
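The zero-convolution fusion mentioned above can be sketched as follows. This is a minimal illustration of the general ControlNet-style technique, not the paper's exact layer: prior-modality tokens pass through a projection whose weights start at zero, so at initialization the fusion is an identity on the RGB tokens and a missing modality simply leaves them untouched. The function and variable names here are hypothetical.

```python
import numpy as np

def zero_conv_fusion(rgb_tokens, prior_tokens, weight, bias):
    """Fuse prior-modality tokens into RGB tokens via a zero-initialized
    projection (ControlNet-style zero convolution, as a sketch).

    With weight and bias initialized to zero, the fusion initially returns
    the RGB tokens unchanged, so attaching a new prior encoder cannot
    degrade the pretrained backbone; the projection learns a nonzero
    contribution during fine-tuning. A prior of None (modality absent at
    inference) also leaves the RGB path untouched, which is what permits
    arbitrary combinations of priors."""
    if prior_tokens is None:
        return rgb_tokens
    return rgb_tokens + prior_tokens @ weight + bias

dim = 8
rng = np.random.default_rng(0)
rgb = rng.standard_normal((4, dim))    # 4 RGB image tokens of width `dim`
depth = rng.standard_normal((4, dim))  # tokens from a hypothetical depth encoder
w0 = np.zeros((dim, dim))              # zero-initialized projection weight
b0 = np.zeros(dim)                     # zero-initialized projection bias
fused = zero_conv_fusion(rgb, depth, w0, b0)
```

At initialization `fused` equals `rgb` exactly; after training, the learned weights let the depth tokens contribute to the fused representation.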