This is a page that curates AI-related papers published worldwide. All content here is summarized using Google Gemini and operated on a non-profit basis. Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.
LoSiA: Efficient High-Rank Fine-Tuning via Subnet Localization and Optimization
Created by
Haebom
Author
Xujia Wang, Yunjia Qi, Bin Xu
Outline
Parameter-Efficient Fine-Tuning (PEFT) methods such as LoRA significantly reduce the number of trainable parameters by introducing low-rank decomposition matrices. However, they perform extensive matrix multiplications in domain-specialization tasks, resulting in computational inefficiency and sub-optimal fine-tuning performance. This paper proposes Low-Resources Subnet Integration Adaptation (LoSiA), a method that dynamically localizes and optimizes critical parameters during training. Specifically, it identifies a sub-network using gradient sparsity analysis and optimizes it as the trainable target. This design enables effective high-rank adaptation by updating only the sub-network parameters, reducing additional matrix multiplications. The paper also presents LoSiA-Pro, a faster implementation of LoSiA that reduces training latency by about 27% compared to LoRA. Extensive evaluations show that the method achieves minimal performance degradation relative to full fine-tuning while requiring the least training time on domain-specialization and common-sense reasoning tasks. Further analysis shows that LoSiA also reduces forgetting during continued training. The source code is available at https://github.com/KlozeWang/LoSiA .
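Roughly speaking, subnet localization can be pictured as picking the rows and columns of a weight matrix that carry the most gradient signal and confining updates to that dense sub-block. The PyTorch sketch below illustrates this idea only; the function names, the top-k row/column heuristic, and the single SGD-style step are illustrative assumptions and not the authors' implementation (see the linked repository for that).

```python
# Illustrative sketch, not the LoSiA implementation: selects a high-gradient
# sub-network of a weight matrix and updates only those parameters.
import torch

def select_subnet(grad: torch.Tensor, row_ratio: float = 0.1, col_ratio: float = 0.1):
    """Pick the rows/columns of a weight matrix with the largest gradient mass."""
    row_scores = grad.abs().sum(dim=1)   # importance of each output unit
    col_scores = grad.abs().sum(dim=0)   # importance of each input unit
    n_rows = max(1, int(row_ratio * grad.size(0)))
    n_cols = max(1, int(col_ratio * grad.size(1)))
    rows = torch.topk(row_scores, n_rows).indices
    cols = torch.topk(col_scores, n_cols).indices
    return rows, cols

def apply_subnet_update(weight: torch.Tensor, grad: torch.Tensor,
                        rows: torch.Tensor, cols: torch.Tensor, lr: float = 1e-4):
    """SGD step restricted to the selected sub-network (a dense sub-block)."""
    with torch.no_grad():
        sub_grad = grad[rows][:, cols]                    # gradients of the subnet only
        weight[rows.unsqueeze(1), cols] -= lr * sub_grad  # update only the subnet

# Usage: after a backward pass, re-select the subnet (periodically in practice),
# then restrict the update to it, keeping the overall update high-rank but cheap.
W = torch.randn(512, 512, requires_grad=True)
loss = (W @ torch.randn(512, 8)).pow(2).mean()
loss.backward()
rows, cols = select_subnet(W.grad)
apply_subnet_update(W.data, W.grad, rows, cols)
```

Because the selected block is updated directly, no extra low-rank factors are multiplied into the forward pass, which is the computational saving the paper attributes to subnet-based adaptation.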