This paper presents a method for fine-tuning large pre-trained language models across multiple tasks with an adapter that balances parameter efficiency and expressive power. We introduce Kron-LoRA, a novel hybrid adapter that combines Kronecker decomposition with the low-rank compression of standard LoRA. Kron-LoRA requires up to four times fewer parameters than standard LoRA while retaining comparable expressive power. Experiments on eight benchmarks with DistilBERT, Mistral-7B, LLaMA-2-7B, and LLaMA-3-8B show that Kron-LoRA matches or outperforms LoRA baselines, with a smaller memory footprint and only a 5-8% speed overhead. Even under sequential fine-tuning, it achieves competitive cross-task transfer while using only a quarter of LoRA's adapter parameters. Kron-LoRA thus offers a scalable and sustainable solution for multi-task adaptation of large language models.
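
To make the adapter structure concrete, the minimal PyTorch sketch below shows one way a Kronecker-factored low-rank update of the form ΔW = A ⊗ (B1 B2) can be applied without materializing the full weight matrix. The class name `KronLowRankAdapter`, the factor shapes, the zero initialization of A, and the α/r scaling are illustrative assumptions and need not match the paper's exact Kron-LoRA parameterization.

```python
# Illustrative sketch only: a Kronecker-structured low-rank adapter in the
# spirit of Kron-LoRA. Names, shapes, initialization, and scaling are
# assumptions for exposition, not the paper's exact design.
import torch
import torch.nn as nn


class KronLowRankAdapter(nn.Module):
    """Weight update dW = A ⊗ (B1 @ B2), never materialized explicitly.

    A  : (p, q)            small dense Kronecker factor
    B1 : (s, r), B2 : (r, t)  low-rank (LoRA-style) Kronecker factor
    so dW has shape (p*s, q*t) = (d_out, d_in).
    """

    def __init__(self, d_in, d_out, p, q, rank, alpha=1.0):
        super().__init__()
        assert d_out % p == 0 and d_in % q == 0
        self.p, self.q = p, q
        self.s, self.t = d_out // p, d_in // q
        self.scale = alpha / rank
        # Zero-init A so the adapter starts as a no-op, mirroring LoRA's B = 0 init.
        self.A = nn.Parameter(torch.zeros(p, q))
        self.B1 = nn.Parameter(torch.randn(self.s, rank) * 0.02)
        self.B2 = nn.Parameter(torch.randn(rank, self.t) * 0.02)

    def forward(self, x):
        # Identity (A ⊗ K) x = flatten(A @ X @ K^T), with X = x reshaped to (q, t),
        # applies the update without building the full (d_out, d_in) matrix.
        K = self.B1 @ self.B2                                   # (s, t)
        X = x.reshape(*x.shape[:-1], self.q, self.t)
        out = torch.einsum("pq,...qt,st->...ps", self.A, X, K)
        return self.scale * out.reshape(*x.shape[:-1], self.p * self.s)


if __name__ == "__main__":
    adapter = KronLowRankAdapter(d_in=4096, d_out=4096, p=64, q=64, rank=8)
    x = torch.randn(2, 16, 4096)                              # (batch, seq, hidden)
    print(adapter(x).shape)                                    # torch.Size([2, 16, 4096])
    print(sum(p_.numel() for p_ in adapter.parameters()))      # count depends on p, q, rank
```

Routing the input through the Kronecker identity keeps the adapter's memory and compute close to a plain low-rank update, since the full (d_out × d_in) matrix is never formed; the concrete parameter savings depend on the chosen factor shapes and rank.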