This paper aims to improve the performance of LoRA (Low-Rank Adaptation) as a method for efficiently fine-tuning large language models. We propose LoRA-SB, which approximates full fine-tuning within a low-rank subspace via a carefully designed initialization strategy. Building on the LoRA-XS architecture, LoRA-SB embeds a learnable r×r matrix between the fixed low-rank factors, providing optimal scaling and initialization conditions. Experiments on mathematical reasoning, commonsense reasoning, and language understanding tasks show that our method outperforms LoRA and existing baselines while significantly reducing the number of trainable parameters.
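The parameterization described above can be sketched as follows. This is a minimal NumPy illustration of the LoRA-XS-style structure the abstract refers to, assuming the pretrained weight and the two low-rank projection factors are frozen (in the actual method they would come from an SVD-based initialization, which is omitted here) and only the small r×r matrix R is trained:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 8, 2

W = rng.standard_normal((d_out, d_in))  # frozen pretrained weight
B = rng.standard_normal((d_out, r))     # frozen projection factor (assumed; SVD-init in practice)
A = rng.standard_normal((r, d_in))      # frozen projection factor (assumed; SVD-init in practice)
R = np.zeros((r, r))                    # the only trainable parameters: r*r of them

def forward(x):
    # Adapted layer: base weight plus the low-rank update B @ R @ A.
    return x @ (W + B @ R @ A).T

x = rng.standard_normal((1, d_in))
# With R = 0, the adapted layer matches the frozen base layer exactly,
# so training starts from the pretrained model's behavior.
assert np.allclose(forward(x), x @ W.T)
```

Because only R is updated, the trainable parameter count is r², independent of the layer's dimensions, which is what makes the parameter reduction so large.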