Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Enhancing Few-Shot Transfer Learning with Optimized Multi-Task Prompt Tuning through Modular Prompt Composition

Created by
  • Haebom

Authors

Ahmad Pouramini, Hesham Faili

Outline

To improve the performance of multi-task prompt tuning, this paper proposes a method that decomposes the prompt for each task into shared prompts (source prompts) and a task-specific prompt (private prompt). During training, the source prompts are fine-tuned and combined with the private prompt to generate the target prompt for each task. The paper presents and compares several strategies for combining source prompts, analyzes the roles of source and private prompts, and provides a flexible, tunable configuration for optimizing performance. Experimental results show improved accuracy and robustness over conventional prompt tuning and related approaches across a variety of tasks, including the GLUE benchmark, with particularly strong gains in the few-shot setting. Since the method requires significantly less training data, it is especially useful when labeled examples are scarce.
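As a rough illustration of this idea (not the authors' exact formulation), the sketch below composes a target prompt from shared source prompts and a task-specific private prompt. The module sizes, the per-task softmax mixing weights, and names such as n_source and prompt_len are assumptions made for illustration only.

```python
# Minimal sketch of modular prompt composition (illustrative assumptions, not the paper's exact method).
import torch
import torch.nn as nn


class ComposedPrompt(nn.Module):
    def __init__(self, n_source=4, prompt_len=20, embed_dim=768, n_tasks=8):
        super().__init__()
        # Source prompts: shared and fine-tuned across all tasks.
        self.source = nn.Parameter(torch.randn(n_source, prompt_len, embed_dim) * 0.02)
        # Private prompts: one per task, trained only on that task's data.
        self.private = nn.Parameter(torch.randn(n_tasks, prompt_len, embed_dim) * 0.02)
        # Per-task mixing weights over the source prompts (one possible combination rule).
        self.mix_logits = nn.Parameter(torch.zeros(n_tasks, n_source))

    def forward(self, task_id: int) -> torch.Tensor:
        weights = torch.softmax(self.mix_logits[task_id], dim=-1)    # (n_source,)
        shared = torch.einsum("s,sld->ld", weights, self.source)     # (prompt_len, embed_dim)
        # Target prompt: shared part followed by the task's private part.
        return torch.cat([shared, self.private[task_id]], dim=0)     # (2 * prompt_len, embed_dim)


# Usage: the composed prompt would be prepended to the input embeddings of a frozen language model.
prompt = ComposedPrompt()(task_id=3)
print(prompt.shape)  # torch.Size([40, 768])
```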

Takeaways, Limitations

Takeaways:
A novel method for improving the performance of multi-task prompt tuning is presented.
Clear analysis of the roles of source prompts and private prompts, with a flexible configuration built on that analysis.
Outperforms existing methods in few-shot settings.
Shows that strong performance can be achieved with a small amount of training data.
Limitations:
Further research is needed to determine the generalization performance of the proposed method.
Scalability evaluation for various tasks and models is needed.
Further research is needed to optimize source prompt selection and combination strategies.