This paper presents a method for constructing LoRA adapters for a new task from a library of pre-trained modules, enabling parameter-efficient transfer learning. Existing approaches rely on simple search heuristics or uniform averaging, which overlook the latent structure of task relationships in the representation space. We propose a framework for adapter reuse that formulates adapter construction as a geometry-aware sparse reconstruction problem. Specifically, we represent each task as a latent prototype vector derived from the encoder of a base model and approximate the target task prototype as a sparse linear combination of retrieved reference prototypes under an ℓ1-regularized optimization objective. The resulting combination weights are used to blend the corresponding LoRA adapters into a composite adapter tailored to the target task. This formulation preserves the local geometric structure of the task representation manifold while selecting a minimal set of relevant adapters, promoting interpretability and efficient reuse. We demonstrate the effectiveness of the approach in several domains, including medical image segmentation, medical report generation, and image synthesis. Experimental results highlight the benefits of combining retrieval with latent geometry-aware optimization for improved zero-shot generalization.
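The core step is the sparse reconstruction of the target prototype followed by adapter blending. The sketch below illustrates that step under simplifying assumptions: each prototype is a 1-D NumPy vector, each LoRA adapter is a dict of parameter arrays, and the function and argument names (blend_adapters, alpha) are hypothetical rather than taken from the paper.

```python
# A minimal sketch of ℓ1-regularized prototype reconstruction and adapter blending.
# Assumptions (not from the paper): prototypes are 1-D NumPy arrays, adapters are
# dicts of parameter arrays, and adapters are blended by a weighted parameter sum.
import numpy as np
from sklearn.linear_model import Lasso


def blend_adapters(target_prototype, reference_prototypes, lora_adapters, alpha=0.1):
    """Sparse-reconstruct the target prototype from reference prototypes,
    then blend the corresponding LoRA adapters with the resulting weights."""
    # Solve min_w ||P w - p||^2 + alpha * ||w||_1, where the columns of P are
    # the retrieved reference prototypes and p is the target task prototype.
    P = np.stack(reference_prototypes, axis=1)   # shape (d, k)
    lasso = Lasso(alpha=alpha, fit_intercept=False)
    lasso.fit(P, target_prototype)
    w = lasso.coef_                              # sparse combination weights, shape (k,)

    # Weighted sum of adapter parameters; only adapters with nonzero weight
    # contribute, so the composite reuses a minimal subset of the library.
    blended = {}
    for name, ref_param in lora_adapters[0].items():
        blended[name] = np.zeros_like(ref_param)
        for w_i, adapter in zip(w, lora_adapters):
            if w_i != 0.0:
                blended[name] += w_i * adapter[name]
    return w, blended
```

The ℓ1 penalty drives most combination weights to exactly zero, which is what makes the selected adapter subset small and the composite adapter interpretable; the weighted parameter sum shown here is one simple way to realize the blending described above.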