This paper proposes Corrective Adaptive Low-Rank Decomposition (CALR), a method that improves on SVD-based low-rank decomposition to address the challenges of deploying large language models (LLMs), namely their massive size and high computational demands. Existing SVD-based compression methods focus on minimizing weight reconstruction error, an objective that does not align with preserving the model's functional behavior and therefore degrades task performance. CALR addresses this issue by pairing SVD-compressed layers with parallel low-rank correction modules trained to recover the functional residual error. Experiments on SmolLM2-135M, Qwen3-0.6B, and Llama-3.2-1B show that CALR reduces parameter counts by 26.93% to 51.77% while retaining 59.45% to 90.42% of the original models' performance, outperforming existing methods such as LaCo, ShortGPT, and LoSparse. These results demonstrate that treating functional information loss as a learnable signal is an effective compression paradigm.
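To make the layer structure concrete, the following is a minimal PyTorch sketch of the idea as described above: a frozen truncated-SVD approximation of a linear layer combined with a parallel trainable low-rank correction branch. The class name, rank parameters, and initialization choices are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn


class CALRLinearSketch(nn.Module):
    """Illustrative CALR-style layer (assumed structure, not the official code):
    frozen truncated-SVD branch + trainable low-rank correction branch."""

    def __init__(self, weight: torch.Tensor, bias: torch.Tensor | None,
                 svd_rank: int, corr_rank: int):
        super().__init__()
        out_features, in_features = weight.shape

        # Truncated SVD of the frozen original weight: W ~= (U_r S_r) V_r^T.
        U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
        U_r = U[:, :svd_rank] * S[:svd_rank]   # (out_features, svd_rank)
        V_r = Vh[:svd_rank, :]                 # (svd_rank, in_features)

        self.svd_down = nn.Linear(in_features, svd_rank, bias=False)
        self.svd_up = nn.Linear(svd_rank, out_features, bias=bias is not None)
        self.svd_down.weight.data.copy_(V_r)
        self.svd_up.weight.data.copy_(U_r)
        if bias is not None:
            self.svd_up.bias.data.copy_(bias)
        # The SVD branch stays frozen; only the correction branch is trained.
        for p in (*self.svd_down.parameters(), *self.svd_up.parameters()):
            p.requires_grad_(False)

        # Parallel low-rank correction branch, initialized so the layer starts
        # as plain truncated SVD; it is trained to recover the functional
        # residual between the original and compressed layer outputs.
        self.corr_down = nn.Linear(in_features, corr_rank, bias=False)
        self.corr_up = nn.Linear(corr_rank, out_features, bias=False)
        nn.init.normal_(self.corr_down.weight, std=1e-3)
        nn.init.zeros_(self.corr_up.weight)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.svd_up(self.svd_down(x)) + self.corr_up(self.corr_down(x))
```

One plausible training objective for the correction branch, under the same assumptions, is to minimize the squared difference between the original layer's outputs and the compressed layer's outputs on calibration data, so that the correction modules learn the functional residual rather than the raw weight error.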