This paper introduces Calibrated LoRA Initialization for Quantized LLMs (CLoQ), a method for fine-tuning quantized large language models (LLMs) on downstream tasks in resource-limited environments. The approach addresses the challenges inherent in applying Low-Rank Adaptation (LoRA) to quantized LLMs. CLoQ minimizes the layer-wise output discrepancy between the original LLM and its quantized counterpart during the initialization phase: it leverages a small calibration dataset to quantize the pre-trained LLM and to determine optimal LoRA components for each layer, thereby establishing a strong foundation for subsequent fine-tuning. A key contribution of this work is a novel theoretical result that enables the exact, closed-form construction of these optimal LoRA components. We evaluate CLoQ on tasks spanning language generation, arithmetic reasoning, and commonsense reasoning, demonstrating that it consistently outperforms existing LoRA fine-tuning methods for quantized LLMs, particularly at ultra-low bit widths.
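To make the calibrated initialization objective concrete, the following NumPy sketch shows one way a closed-form rank-r correction to a quantized weight could be fit against calibration activations, by minimizing the layer-output discrepancy ||XW - X(Q + AB)||_F. The function name, shapes, and the SVD-plus-pseudoinverse construction are illustrative assumptions for exposition, not CLoQ's exact algorithm.

```python
import numpy as np

def calibrated_lora_init(W, Q, X, r):
    """Illustrative closed-form fit of rank-r LoRA factors A, B that minimize
    ||X @ W - X @ (Q + A @ B)||_F.

    W : (d_in, d_out) original weight matrix
    Q : (d_in, d_out) quantized weight matrix
    X : (n, d_in)     calibration activations for this layer
    r : target LoRA rank

    NOTE: a sketch of the general idea, not the paper's exact construction.
    """
    Y = X @ (W - Q)                              # calibrated quantization residual
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    # Best rank-r approximation of Y lies in col(X), so lifting it back through
    # the pseudoinverse of X yields an exact minimizer of the calibrated loss.
    A = np.linalg.pinv(X) @ (U[:, :r] * s[:r])   # (d_in, r)
    B = Vt[:r]                                   # (r, d_out)
    return A, B
```

In a LoRA-style workflow, Q would then stay frozen while only the factors A and B are updated during fine-tuning, starting from this initialization rather than the usual zero/random one.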