In this paper, we propose a theoretically grounded LoRA tuning method for users with limited computing resources, in particular those restricted to a standard CPU-only laptop. To overcome the limitations of existing GPU-based LoRA tuning, we leverage a large collection of pre-trained adapters for Mistral-7B-Instruct-v0.2 to learn a meta-operator that maps an input dataset (represented as a probability distribution) to a set of LoRA weights. Rather than performing gradient-based updates, we generate new adapters via a lightweight composition of existing LoRAs, computed entirely on the CPU. Although the resulting adapters do not match the performance of GPU-trained ones, they consistently outperform the baseline Mistral model across subtasks, offering a practical and accessible alternative to GPU-based fine-tuning.
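The core idea can be illustrated with a minimal sketch: given a bank of pre-trained LoRA factor pairs and similarity scores between the input dataset's distribution and each adapter's training distribution, a new adapter is formed as a softmax-weighted combination of the existing factors. The function and score representation below are hypothetical simplifications for illustration, not the paper's actual implementation.

```python
import numpy as np

def compose_lora(adapters, sims):
    """Compose a new LoRA adapter from existing ones (illustrative sketch).

    adapters: list of (A, B) low-rank factor pairs from pre-trained LoRAs
    sims:     array of similarity scores between the input dataset's
              distribution and each adapter's training distribution
              (how these scores are computed is an assumption here)
    """
    # Softmax over similarity scores yields mixing weights
    w = np.exp(sims - np.max(sims))
    w /= w.sum()
    # Weighted combination of the low-rank factors; this is a cheap
    # CPU-only operation, no gradient computation involved
    A = sum(wi * Ai for wi, (Ai, _) in zip(w, adapters))
    B = sum(wi * Bi for wi, (_, Bi) in zip(w, adapters))
    return A, B
```

In this sketch the composition cost scales only with the adapter rank and the number of stored LoRAs, which is why it remains tractable on a laptop CPU.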