This paper addresses vision-guided quadruped robot control using reinforcement learning (RL), emphasizing that robust control hinges on tightly integrating proprioception and vision. We propose QuadKAN, a spline-parameterized cross-modal policy built on Kolmogorov-Arnold Networks (KANs). QuadKAN comprises a spline encoder for proprioception and a spline fusion head that combines proprioceptive and visual features. This structured function class aligns the state-to-action mapping with the piecewise-smooth nature of gait, improving sample efficiency, reducing action jitter and energy consumption, and yielding interpretable pose-to-action sensitivities. We employ Multimodal Delay Randomization (MMDR) and train the policy end-to-end with Proximal Policy Optimization (PPO). Evaluations across diverse terrains, including uniform and uneven surfaces and scenes with static and dynamic obstacles, show that QuadKAN consistently achieves higher returns, longer traveled distances, and fewer collisions than state-of-the-art (SOTA) baselines. These results indicate that spline-parameterized policies offer a simple, effective, and interpretable alternative for robust vision-guided locomotion.
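For concreteness, the sketch below illustrates the kind of spline parameterization a KAN-style policy layer uses: each input-output edge carries a learnable univariate spline, and each output unit sums its incoming edge responses. This is a hypothetical minimal example, not the paper's exact design: the layer name `SplineEdgeLayer`, the uniform knot grid, and the piecewise-linear (order-1 B-spline) basis are our simplifying assumptions.

```python
import torch
import torch.nn as nn

class SplineEdgeLayer(nn.Module):
    """KAN-style layer: every input-output edge applies a learnable
    univariate spline; each output unit sums its incoming edge responses.
    Minimal sketch using piecewise-linear (hat / order-1 B-spline) bases."""

    def __init__(self, in_dim: int, out_dim: int, n_knots: int = 8,
                 x_min: float = -1.0, x_max: float = 1.0):
        super().__init__()
        # Uniform knot grid shared by all edges (a simplifying assumption).
        self.register_buffer("knots", torch.linspace(x_min, x_max, n_knots))
        # One coefficient per (output, input, knot): the spline's value there.
        self.coef = nn.Parameter(0.1 * torch.randn(out_dim, in_dim, n_knots))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_dim). Hat basis: 1 at its knot, 0 at neighboring knots.
        step = self.knots[1] - self.knots[0]
        dist = (x.unsqueeze(-1) - self.knots).abs()      # (batch, in, knots)
        basis = torch.clamp(1.0 - dist / step, min=0.0)  # piecewise-linear
        # Evaluate every edge spline, then sum contributions over inputs.
        return torch.einsum("bik,oik->bo", basis, self.coef)

# Hypothetical usage: map fused proprioceptive-visual features to joint targets.
policy_head = SplineEdgeLayer(in_dim=64, out_dim=12)
actions = policy_head(torch.tanh(torch.randn(32, 64)))   # -> (32, 12)
```

In a full policy, a stack of such layers would replace a conventional MLP head; because each edge is a univariate spline over a fixed grid, its coefficients can be plotted directly as per-dimension sensitivity curves, which is one plausible route to the interpretability the abstract claims.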