While Kolmogorov-Arnold Networks (KANs) possess powerful theoretical expressiveness and interpretability, they suffer from parameter explosion and poor capture of high-frequency features in high-dimensional tasks. To address these challenges, this paper proposes the Kolmogorov-Arnold-Fourier network (KAF), which integrates learnable random Fourier features (RFFs) with a novel hybrid GELU-Fourier activation mechanism to balance parameter efficiency and spectral representation capability. Our key contributions are: (1) significantly reducing parameters by merging the dual-matrix structure of KANs into a single projection via matrix associativity; (2) introducing a learnable RFF initialization strategy that eliminates spectral distortion in high-dimensional approximation tasks; and (3) implementing an adaptive hybrid activation function that progressively strengthens frequency representation during training. Comprehensive experiments demonstrate the superiority of KAF across diverse domains, including vision, NLP, audio processing, and differential equation solving, combining theoretical interpretability with practical utility and computational efficiency.
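To make the described architecture concrete, below is a minimal PyTorch sketch of one KAF-style layer. It assumes (rather than reproduces from the paper's code) that the hybrid GELU-Fourier activation is a learnable blend of GELU(x) and learnable random Fourier features of x, with the Fourier branch weighted near zero at initialization so its contribution grows during training. Names such as `KAFLayer`, `num_features`, and the bandwidth `sigma` are illustrative placeholders, not the authors' API.

```python
import math
import torch
import torch.nn as nn

class KAFLayer(nn.Module):
    """Sketch of a KAF-style layer: learnable RFFs + hybrid GELU-Fourier activation."""
    def __init__(self, in_dim, out_dim, num_features=64, sigma=1.0):
        super().__init__()
        # Learnable RFF frequencies, initialized like classic random Fourier
        # features (Gaussian with bandwidth sigma) but updated during training.
        self.freqs = nn.Parameter(torch.randn(in_dim, num_features) * sigma)
        self.phase = nn.Parameter(torch.zeros(num_features))
        # Single output projection over [GELU part || Fourier part], replacing
        # KAN's per-edge spline coefficient matrices with one weight matrix.
        self.proj = nn.Linear(in_dim + 2 * num_features, out_dim)
        # Learnable mixing coefficients of the hybrid activation; the Fourier
        # weight starts small so the layer behaves like a GELU MLP early on.
        self.alpha = nn.Parameter(torch.ones(1))
        self.beta = nn.Parameter(torch.full((1,), 1e-2))

    def forward(self, x):
        z = x @ self.freqs + self.phase               # (batch, num_features)
        fourier = torch.cat([torch.cos(z), torch.sin(z)], dim=-1)
        fourier = fourier / math.sqrt(self.freqs.shape[1])
        hybrid = torch.cat([self.alpha * nn.functional.gelu(x),
                            self.beta * fourier], dim=-1)
        return self.proj(hybrid)

# Usage: stack KAF layers like ordinary Linear layers.
layer = KAFLayer(in_dim=128, out_dim=64)
y = layer(torch.randn(32, 128))                       # -> shape (32, 64)
```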