This paper focuses on the efficient use of Parameter-Efficient Fine-Tuning (PEFT) methods, in particular Low-Rank Adaptation (LoRA). Traditional LoRA suffers from slow convergence and knowledge forgetting, issues we address by improving the LoRA initialization. Unlike previous works that focus solely on either efficient fine-tuning or knowledge preservation of pre-trained LLMs, this paper proposes Subspace-Constrained LoRA (SC-LoRA) to achieve both goals simultaneously. SC-LoRA constrains the output of the trainable LoRA adapters to a low-dimensional subspace, chosen so that the contextual information of the fine-tuning data is preserved as much as possible while the contextual information of the existing knowledge is retained as little as possible. This allows the trainable weights to focus on the main features of the fine-tuning data without damaging the existing knowledge. Through theoretical analysis and experimental results on various downstream tasks, we demonstrate that SC-LoRA delivers superior fine-tuning performance and significantly reduced knowledge loss compared to traditional LoRA initialization methods.
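The core mechanism described above can be illustrated with a minimal sketch. The sketch below is an assumption-laden toy, not the paper's exact algorithm: the names `H_ft`, `H_kn`, `beta`, and the covariance-difference scoring are hypothetical illustrative choices. It shows the one property the abstract states: if the LoRA down-projection `B` is initialized with columns spanning a chosen r-dimensional subspace, the adapter output `B @ A @ x` lies in that subspace for every input, no matter how `A` is trained.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, beta = 64, 8, 0.9  # hidden size, LoRA rank, trade-off weight (hypothetical)

# Hypothetical feature matrices: rows are hidden states collected from the
# fine-tuning data and from data representing knowledge to preserve.
H_ft = rng.normal(size=(256, d))
H_kn = rng.normal(size=(256, d))

# Weighted difference of (uncentered) covariances: directions carrying much
# fine-tuning signal but little preserved-knowledge signal score highest.
C = beta * (H_ft.T @ H_ft) / len(H_ft) - (1 - beta) * (H_kn.T @ H_kn) / len(H_kn)

# Top-r eigenvectors of the symmetrized matrix span the chosen subspace.
eigvals, eigvecs = np.linalg.eigh((C + C.T) / 2)
U = eigvecs[:, np.argsort(eigvals)[::-1][:r]]  # d x r, orthonormal columns

# LoRA factors: fixing B's column space to span(U) confines the adapter
# output B @ A @ x to that r-dimensional subspace for every input x.
B = U                 # d x r, initialized inside the subspace
A = np.zeros((r, d))  # r x d, zero init keeps the update B @ A = 0 at start

# Even after A changes during training, the output stays in span(U):
A = rng.normal(size=(r, d)) * 0.01  # stand-in for a trained A
x = rng.normal(size=d)
y = B @ (A @ x)

# Check: projecting y onto span(U) leaves it unchanged.
assert np.allclose(U @ (U.T @ y), y)
```

The subspace containment follows from `U.T @ U` being the identity, so only the initialization of `B` (not the training procedure) enforces the constraint.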