This paper explores the use of adapter parameters to modify the behavior of large language models (LLMs) and other generative AI systems, with a particular focus on complex text-based multitasking problems. Each test case requires multiple tasks to be performed simultaneously, such as translating and summarizing the same input. We propose a benchmark of four practical, complex tasks and present an efficient method, learnable calibration, suited to settings with limited computing resources. Our goal is to strengthen the practical multitasking capabilities of LLMs and to make them applicable to complex, resource-constrained use cases.
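For context, adapters in the sense used here are typically small trainable modules inserted into an otherwise frozen pretrained model. The sketch below shows the standard bottleneck design (down-projection, nonlinearity, up-projection, residual connection); the `Adapter` class, its hyperparameters, and the near-identity initialization are illustrative assumptions about the general technique, not the paper's learnable-calibration method.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, plus residual.

    Only these parameters are trained; the backbone LLM stays frozen,
    which is what makes the approach suitable for limited compute.
    """
    def __init__(self, d_model: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.act = nn.ReLU()
        self.up = nn.Linear(bottleneck, d_model)
        # Zero-init the up-projection so the adapter starts as an
        # identity mapping and cannot disrupt the pretrained model.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))

# Usage: apply the adapter to a batch of hidden states from the frozen model.
hidden = torch.randn(2, 16, 768)   # (batch, sequence, d_model)
out = Adapter(d_model=768)(hidden)
assert out.shape == hidden.shape
```

Because the backbone is frozen, one such module (or a small set of them) can be trained per task, which is the usual motivation for adapter-style methods in multitask settings.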