Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized by Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please credit the source when sharing.

Beyond Adapter Retrieval: Latent Geometry-Preserving Composition via Sparse Task Projection

Created by
  • Haebom

Authors

Pengfei Jin, Peng Shu, Sifan Song, Sekeun Kim, Qing Xiao, Cheng Chen, Tianming Liu, Xiang Li, Quanzheng Li

Outline

This paper presents a parameter-efficient transfer learning method that composes LoRA adapters from a library of pre-trained modules. Existing approaches rely on simple retrieval heuristics or uniform averaging, which overlook the latent structure of task relationships in the representation space. The paper proposes a novel framework for adapter reuse that formulates adapter composition as a geometry-aware sparse reconstruction problem. Specifically, each task is represented as a latent prototype vector derived from the base model's encoder, and the target task prototype is approximated as a sparse linear combination of the retrieved reference prototypes under an ℓ1-regularized optimization objective. The resulting combination weights are used to blend the corresponding LoRA adapters, producing a composite adapter tailored to the target task. This formulation not only preserves the local geometric structure of the task representation manifold but also selects a minimal set of relevant adapters, promoting interpretability and efficient reuse. The approach is evaluated across several domains, including medical image segmentation, medical report generation, and image synthesis. Experimental results highlight the benefits of combining retrieval with latent geometry-aware optimization for improved zero-shot generalization.
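
To make the composition step concrete, below is a minimal sketch of the sparse reconstruction idea, using scikit-learn's Lasso as a stand-in for the paper's ℓ1-regularized solver. The function names, the weight normalization, and the blending rule ΔW = Σᵢ wᵢ BᵢAᵢ are illustrative assumptions, not the authors' implementation; in the paper, prototypes come from the base model's encoder rather than random vectors.

```python
# A minimal sketch of geometry-aware sparse adapter composition.
# solve_sparse_weights / blend_lora_deltas and the normalization step
# are hypothetical names and choices, not the paper's code.
import numpy as np
from sklearn.linear_model import Lasso

def solve_sparse_weights(target_proto, ref_protos, alpha=0.01):
    """Approximate the target prototype as a sparse linear combination
    of retrieved reference prototypes (l1-regularized least squares)."""
    # ref_protos: (num_refs, dim); Lasso expects features as columns,
    # so X has shape (dim, num_refs) and y has shape (dim,).
    lasso = Lasso(alpha=alpha, positive=True, fit_intercept=False)
    lasso.fit(ref_protos.T, target_proto)
    w = lasso.coef_                            # sparse: many entries exactly 0
    return w / w.sum() if w.sum() > 0 else w   # normalize (an assumption)

def blend_lora_deltas(weights, loras):
    """Blend LoRA adapters: composite delta = sum_i w_i * (B_i @ A_i)."""
    return sum(w * (B @ A) for w, (A, B) in zip(weights, loras) if w > 0)

# Toy usage: 4 reference tasks, rank-8 LoRA adapters on a 64x64 layer.
rng = np.random.default_rng(0)
dim, rank, d_model = 128, 8, 64
ref_protos = rng.normal(size=(4, dim))
target_proto = 0.7 * ref_protos[0] + 0.3 * ref_protos[2]  # near two refs
loras = [(rng.normal(size=(rank, d_model)),    # A_i: (rank, d_model)
          rng.normal(size=(d_model, rank)))    # B_i: (d_model, rank)
         for _ in range(4)]

w = solve_sparse_weights(target_proto, ref_protos)
delta_W = blend_lora_deltas(w, loras)
print("weights:", np.round(w, 3))  # typically nonzero only for refs 0 and 2
print("composite delta shape:", delta_W.shape)
```

In this sketch the ℓ1 strength `alpha` controls how many adapters survive selection, which mirrors the paper's point that sparsity yields a small, interpretable set of reused adapters.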

Takeaways, Limitations

Takeaways:
  • Overcomes the limitations of existing retrieval- and averaging-based LoRA adapter reuse methods.
  • Leverages latent geometric information to enable more accurate and efficient adapter composition.
  • Improves interpretability and efficiency by selecting only a minimal set of relevant adapters via sparse linear combination.
  • Demonstrates improved zero-shot generalization performance across diverse domains.
Limitations:
  • The effectiveness of the proposed method may depend on the specific datasets and tasks.
  • The computational cost of the ℓ1-regularized optimization can be relatively high.
  • Further experiments are needed across a wider range of tasks and domains.