This paper builds on Low-Rank Adaptation (LoRA), a parameter-efficient technique for fine-tuning large language models (LLMs) on specific tasks, and introduces the zkLoRA framework, which integrates LoRA fine-tuning with zero-knowledge proofs (ZKPs) to ensure security and verifiability. zkLoRA uses cryptographic techniques such as lookup arguments, sumcheck protocols, and polynomial commitments to verify both the arithmetic and non-arithmetic operations of a Transformer-based architecture. It scales to open-source LLMs with up to 13 billion parameters, such as LLaMA, and provides verifiability throughout forward propagation, backpropagation, and parameter updates while preserving the privacy of model parameters and training data. Ultimately, zkLoRA enables secure and trustworthy LLM deployment in untrusted environments.
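For context, the computation whose fine-tuning steps zkLoRA verifies is the standard LoRA reparameterization; the notation below is the usual LoRA convention and is illustrative rather than the paper's own. A frozen pretrained weight matrix \(W_0 \in \mathbb{R}^{d \times k}\) is augmented with a trainable low-rank correction:

\[
W' \;=\; W_0 + \Delta W \;=\; W_0 + BA, \qquad B \in \mathbb{R}^{d \times r},\; A \in \mathbb{R}^{r \times k},\; r \ll \min(d, k),
\]

so only the small matrices \(A\) and \(B\) are updated during fine-tuning, which is presumably what keeps the statements about backpropagation and parameter updates that must be proven tractable at the 13-billion-parameter scale mentioned above.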