This paper proposes HD-PiSSA (High-rank Distributed PiSSA), a distributed Parameter-Efficient Fine-Tuning (PEFT) method for large language models (LLMs). Existing PEFT methods such as LoRA and PiSSA restrict model updates to a low-rank subspace, which limits their expressiveness and hinders performance on complex tasks. HD-PiSSA instead initializes orthogonal adapters across multiple devices, assigning a different slice of the principal components of the pre-trained weight matrix W to each GPU, and fine-tunes them by aggregating the devices' delta updates back onto W. This significantly expands the range of update directions compared to data-parallel LoRA or PiSSA. Experimental results show that HD-PiSSA outperforms LoRA by an average of 10.0 points and PiSSA by an average of 4.98 points on a range of demanding downstream tasks, including mathematics, code generation, and multi-task learning.
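To make the mechanism concrete, here is a minimal PyTorch sketch of the idea as described above: a PiSSA-style SVD initialization in which each of several devices receives a disjoint (mutually orthogonal) slice of W's principal components as its adapter, and the tuned adapters' deltas are summed back into W. The function names (`init_orthogonal_adapters`, `aggregate_delta`) and all details are illustrative assumptions inferred from the summary, not the authors' implementation.

```python
# Hypothetical sketch of HD-PiSSA-style initialization and aggregation.
# Names and details are assumptions based on the abstract, not the paper's code.
import torch

def init_orthogonal_adapters(W: torch.Tensor, rank: int, num_devices: int):
    """Give each device a disjoint rank-`rank` slice of W's principal components."""
    # SVD of the pre-trained weight; each device's adapter is built from a
    # different block of singular vectors, so the adapters are orthogonal.
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    adapters = []
    for k in range(num_devices):
        sl = slice(k * rank, (k + 1) * rank)
        sqrt_S = torch.sqrt(S[sl])
        A = U[:, sl] * sqrt_S                 # (out_dim, rank)
        B = sqrt_S.unsqueeze(1) * Vh[sl, :]   # (rank, in_dim)
        adapters.append((A, B))               # A @ B reproduces that SVD slice
    # Residual base weight with the assigned components removed, so that
    # W ≈ W_res + sum_k A_k @ B_k at initialization.
    top = slice(0, num_devices * rank)
    W_res = W - U[:, top] @ torch.diag(S[top]) @ Vh[top, :]
    return W_res, adapters

def aggregate_delta(W: torch.Tensor, adapters_init, adapters_tuned):
    """Fold every device's adapter update back into the shared weight W."""
    delta = torch.zeros_like(W)
    for (A0, B0), (A1, B1) in zip(adapters_init, adapters_tuned):
        delta += A1 @ B1 - A0 @ B0   # each device contributes its own ΔW
    return W + delta
```

In this sketch, the effective update to W spans up to `num_devices * rank` directions rather than the single rank-`rank` subspace shared by all replicas under data-parallel LoRA or PiSSA, which is the intuition behind the "high-rank distributed" claim.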