Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Automatic Prompt Optimization with Prompt Distillation

Created by
  • Haebom

Authors

Ernest A. Dyagin, Nikita I. Kulin, Artur R. Khairullin, Viktor N. Zhuravlev, Alena N. Sitkina

Outline

This paper presents DistillPrompt, a novel approach to automatic prompt generation (autoprompting), an area that has gained attention with advances in prompt engineering for large language models (LLMs). DistillPrompt is an LLM-based autoprompting method that uses training data to integrate task-specific information into prompts through a multi-step process, employing distillation, compression, and aggregation operations to explore the prompt space thoroughly. Experiments with the t-lite-instruct-0.1 language model on various text classification and generation datasets show significant gains over existing methods on key metrics (e.g., an average improvement of 20.12% over GrIPS across the dataset suite), establishing DistillPrompt as one of the most effective non-gradient-based autoprompting approaches.
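To make the distill-compress-aggregate loop concrete, here is a minimal sketch of what such an LLM-based refinement cycle could look like. This is an illustrative assumption, not the authors' implementation: the function names (`distill_prompt`, `llm`, `score`), the prompt templates, and the number of rounds are all hypothetical.

```python
# Hypothetical sketch of a DistillPrompt-style refinement loop.
# `llm` is any callable mapping a prompt string to a completion string;
# `score` evaluates a candidate prompt on labeled training examples.

from typing import Callable, List, Tuple

def distill_prompt(
    llm: Callable[[str], str],
    seed_prompt: str,
    train_set: List[Tuple[str, str]],  # (input, expected output) pairs
    score: Callable[[str, List[Tuple[str, str]]], float],
    n_rounds: int = 3,
) -> str:
    """Iteratively refine a prompt via distillation, compression, and aggregation."""
    best = seed_prompt
    for _ in range(n_rounds):
        # 1) Distillation: extract task-specific rules from a few training examples.
        examples = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in train_set[:5])
        hints = llm(
            "Given these labeled examples:\n"
            f"{examples}\n"
            "Summarize the rules needed to solve this task."
        )

        # 2) Compression: shorten the extracted rules so the prompt stays compact.
        compressed = llm(f"Compress the following instructions, keeping every rule:\n{hints}")

        # 3) Aggregation: merge the compressed rules with the current best prompt.
        candidate = llm(
            "Combine these two instructions into one clear prompt:\n"
            f"A: {best}\nB: {compressed}"
        )

        # Keep whichever prompt performs better on the training data.
        if score(candidate, train_set) > score(best, train_set):
            best = candidate
    return best
```

The key idea captured here is that the prompt space is explored purely through LLM calls and training-set evaluation, with no gradient information, which is why the method is described as non-gradient-based.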

Takeaways, Limitations

Takeaways:
Presents a novel methodology that integrates distillation, compression, and aggregation operations into LLM-based autoprompting.
Achieves significant performance improvements over existing methods in text classification and generation tasks.
Demonstrates high effectiveness as a non-gradient-based autoprompting approach.
Limitations:
Experimental results are reported only for a single LLM (t-lite-instruct-0.1) and a limited set of datasets; generalization to other LLMs and datasets still needs to be verified.
Lacks a detailed analysis of the size and performance of the LLM used.
A more in-depth comparative analysis with other autoprompting methods is needed.