Parameter-Efficient Fine-Tuning (PEFT) methods such as LoRA introduce low-rank decomposition matrices to significantly reduce the number of trainable parameters. However, existing methods perform extensive matrix multiplications in domain specialization tasks, resulting in computational inefficiency and sub-optimal fine-tuning performance. In this paper, we propose Low-Resources Subnet Integration Adaptation (LoSiA), an innovative method that dynamically localizes and optimizes critical parameters during training. Specifically, it identifies a sub-network via gradient sparsity analysis and optimizes it as the trainable target. This design enables effective high-rank adaptation by updating only the sub-network parameters, reducing the additional matrix multiplications. We also present LoSiA-Pro, a faster implementation of LoSiA that reduces training latency by about 27% compared to LoRA. Extensive evaluations show that the proposed method achieves the smallest performance drop relative to full fine-tuning while requiring the least training time on domain specialization and common-sense reasoning tasks. Further analysis shows that LoSiA also reduces forgetting in continued training.
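The core idea of selecting and updating only a sub-network can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the scoring rule (summed gradient magnitudes per row and column) and the helper names `select_subnet` and `sparse_update` are assumptions made for clarity.

```python
import numpy as np

def select_subnet(grad, keep_rows=0.5, keep_cols=0.5):
    """Pick the rows/columns of a weight matrix whose accumulated
    gradient magnitude is largest (a simple sparsity heuristic)."""
    row_score = np.abs(grad).sum(axis=1)
    col_score = np.abs(grad).sum(axis=0)
    n_rows = max(1, int(keep_rows * grad.shape[0]))
    n_cols = max(1, int(keep_cols * grad.shape[1]))
    rows = np.argsort(row_score)[-n_rows:]   # highest-scoring rows
    cols = np.argsort(col_score)[-n_cols:]   # highest-scoring columns
    return rows, cols

def sparse_update(weight, grad, rows, cols, lr=0.1):
    """Apply a gradient step only to the selected sub-network,
    leaving every other entry of the weight matrix frozen."""
    idx = np.ix_(rows, cols)
    weight[idx] -= lr * grad[idx]
    return weight
```

Because the update touches only a dense sub-block of the weight matrix, no extra low-rank matrix multiplications are introduced in the forward pass, which is what allows high-rank adaptation at low cost.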