This paper presents LoRA-XS, a novel parameter-efficient fine-tuning method that addresses a key limitation of LoRA: the storage and computational overhead of deploying separate adaptation modules for many tasks or users. LoRA-XS drastically reduces the number of trainable parameters by inserting a small trainable weight matrix between frozen low-rank matrices obtained from the singular value decomposition (SVD) of the pre-trained weights. For 7B-parameter models, this reduces per-module storage requirements by more than 100x compared to LoRA, and the number of trainable parameters can be scaled from a single parameter per module to arbitrarily large values. Evaluations on GLUE, GSM8K, MATH, and common-sense reasoning benchmarks show that LoRA-XS matches or exceeds the accuracy of LoRA and VeRA while offering substantially better parameter efficiency. Additional experiments on the importance of singular vectors further demonstrate the utility of LoRA-XS as a robust, storage-efficient solution for scaling and personalizing large language models.
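To make the parameterization concrete, the following is a minimal PyTorch-style sketch (not the authors' reference implementation) of a LoRA-XS-like linear layer: the frozen factors A and B are taken from a truncated SVD of the pre-trained weight, and only a small r x r matrix R is trained. The class name, the rank argument, and the choice to fold the singular values into A are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LoRAXSLinear(nn.Module):
    """Illustrative LoRA-XS-style wrapper around a frozen nn.Linear.
    Only the r x r matrix R (~r**2 values per module) is trainable."""

    def __init__(self, linear: nn.Linear, r: int = 8):
        super().__init__()
        self.linear = linear
        for p in self.linear.parameters():
            p.requires_grad = False  # pre-trained weights stay frozen

        # Truncated SVD of the pre-trained weight W (out_features x in_features).
        W = linear.weight.data
        U, S, Vh = torch.linalg.svd(W, full_matrices=False)
        # Frozen low-rank factors; folding the singular values into A is an
        # assumption here -- how they are split between A and B is a design choice.
        self.register_buffer("A", U[:, :r] * S[:r])   # (out_features, r)
        self.register_buffer("B", Vh[:r, :])           # (r, in_features)

        # The only trainable parameters; zero init leaves the model unchanged at start.
        self.R = nn.Parameter(torch.zeros(r, r))

    def forward(self, x):
        # y = x W^T + x (A R B)^T, i.e. delta_W = A @ R @ B
        return self.linear(x) + (x @ self.B.T) @ self.R.T @ self.A.T
```

Under this sketch, only R needs to be stored per task or user, which is what makes the per-module storage footprint so small relative to LoRA.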