Daily Arxiv

This page collects papers on artificial intelligence published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; when sharing, please cite the source.

Efficient Compositional Multi-tasking for On-device Large Language Models

Created by
  • Haebom

Author

Ondrej Bohdal, Mete Ozay, Jijoong Moon, Kyeng-Hun Lee, Hyeonmok Ko, Umberto Michieli

Outline

This paper studies how adapter parameters can modify the behavior of large language models (LLMs) and generative AI, focusing on compositional text-based multitasking: settings where several tasks, such as translation and summarization, must be performed jointly on the same input. The authors propose a benchmark of four practical compositional tasks and present an efficient method, called learnable calibration, suited to environments with limited computational resources. The goal is to strengthen the practical multitasking capabilities of LLMs in complex, resource-constrained, on-device use cases.
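To make the idea of adapter-based compositional multitasking concrete, here is a minimal sketch of one plausible setup: a frozen base layer augmented with two frozen task-specific low-rank adapters (e.g. one for summarization, one for translation), combined through a small set of learnable calibration weights. All class names, the combination rule, and the choice of scalar calibration parameters are illustrative assumptions, not the paper's actual method.

```python
import torch
import torch.nn as nn

class CalibratedAdapterCombination(nn.Module):
    """Hypothetical sketch: compose two task-specific LoRA-style adapters
    on top of a frozen base layer via learnable calibration weights.
    Only the calibration parameters are trained, which keeps the
    trainable footprint small enough for on-device settings."""

    def __init__(self, base: nn.Linear, adapter_a: nn.Module, adapter_b: nn.Module):
        super().__init__()
        self.base = base.requires_grad_(False)            # frozen base weights
        self.adapter_a = adapter_a.requires_grad_(False)  # e.g. summarization adapter
        self.adapter_b = adapter_b.requires_grad_(False)  # e.g. translation adapter
        # The only trainable parameters: one calibration scalar per adapter.
        self.calib = nn.Parameter(torch.ones(2))

    def forward(self, x):
        return (self.base(x)
                + self.calib[0] * self.adapter_a(x)
                + self.calib[1] * self.adapter_b(x))

def lora_adapter(dim: int, rank: int = 4) -> nn.Module:
    # Low-rank adapter: down-projection followed by up-projection.
    return nn.Sequential(nn.Linear(dim, rank, bias=False),
                         nn.Linear(rank, dim, bias=False))

layer = CalibratedAdapterCombination(nn.Linear(16, 16),
                                     lora_adapter(16), lora_adapter(16))
trainable = [n for n, p in layer.named_parameters() if p.requires_grad]
print(trainable)  # only the calibration weights remain trainable
```

In this sketch, training for a new composed task touches only two scalars rather than any full adapter, which is the kind of resource efficiency the paper's on-device setting calls for.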

Takeaways, Limitations

Takeaways:
Addresses text-based compositional multitasking in on-device environments.
Provides a benchmark of practical compositional tasks to facilitate further research.
Proposes the resource-efficient Learnable Calibration method, emphasizing practicality.
Contributes to improving LLMs' practical multitasking abilities and broadening their range of applications.
Limitations:
The summary lacks information on specific experimental results and performance comparisons.
Insufficient detail on the implementation and performance analysis of the Learnable Calibration method.
No concrete definition or criteria for the limited-resource computing environments considered.
No comparative analysis against other multitasking techniques.