Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

MPO: Boosting LLM Agents with Meta Plan Optimization

Created by
  • Haebom

Authors

Weimin Xiong, Yifan Song, Qingxiu Dong, Bingchan Zhao, Feifan Song, Xun Wang, Sujian Li

Outline

This paper proposes Meta Plan Optimization (MPO), a framework for improving the interactive planning ability of agents built on large language models (LLMs). Existing approaches either suffer from planning hallucinations or require retraining a planner for each new agent; MPO instead injects explicit, high-level guidance through meta plans and continuously optimizes those meta plans using feedback from the agent's task execution, rather than relying on complex, task-specific knowledge. Experiments on two representative agent tasks show that MPO significantly outperforms existing baselines, improving task-completion efficiency and generalizing to unseen scenarios while serving as a plug-and-play component for existing agents.
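To make the feedback loop concrete, below is a minimal, hypothetical sketch of how meta plans might be sampled, executed by the agent, and ranked by task feedback to form preference pairs for optimizing the meta planner (e.g., with DPO-style training). The function names and toy stand-ins are illustrative assumptions, not the authors' implementation.

```python
import random
from dataclasses import dataclass
from typing import Callable, List, Tuple

# Hypothetical sketch of an MPO-style feedback loop.
# `meta_planner` and `agent` stand in for LLM calls; `evaluate` returns a
# task-completion score used as feedback to rank candidate meta plans.

@dataclass
class Trial:
    meta_plan: str
    score: float

def collect_preference_pairs(
    task: str,
    meta_planner: Callable[[str], str],
    agent: Callable[[str, str], str],
    evaluate: Callable[[str, str], float],
    num_candidates: int = 4,
) -> List[Tuple[str, str]]:
    """Sample several meta plans, run the agent under each, and pair
    the best-scoring plan (chosen) with lower-scoring ones (rejected)."""
    trials: List[Trial] = []
    for _ in range(num_candidates):
        plan = meta_planner(task)          # high-level guidance, not step-level actions
        trajectory = agent(task, plan)     # agent conditions on the task plus the meta plan
        trials.append(Trial(plan, evaluate(task, trajectory)))
    trials.sort(key=lambda t: t.score, reverse=True)
    # These pairs would feed a preference-optimization step for the meta planner.
    return [(trials[0].meta_plan, t.meta_plan) for t in trials[1:] if trials[0].score > t.score]

# --- Toy stand-ins so the sketch runs end to end ---
def toy_meta_planner(task: str) -> str:
    return f"Plan v{random.randint(1, 100)}: locate the target object, then act on it for '{task}'"

def toy_agent(task: str, plan: str) -> str:
    return f"trajectory following: {plan}"

def toy_evaluate(task: str, trajectory: str) -> float:
    return random.random()  # in the real setting: task reward / success signal

if __name__ == "__main__":
    pairs = collect_preference_pairs("put a clean mug on the desk",
                                     toy_meta_planner, toy_agent, toy_evaluate)
    for chosen, rejected in pairs:
        print("CHOSEN :", chosen)
        print("REJECTED:", rejected)
```

The key design point this illustrates is that only the meta planner is updated from execution feedback; the agent itself is treated as a black box, which is what allows the guidance to plug into new agents without retraining them.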

Takeaways, Limitations

Takeaways:
• Helps mitigate the planning hallucination problem of LLM-based agents.
• Reduces the need to retrain a planner for each new agent.
• High-level, general guidance enables efficient planning optimization.
• Improves task-completion efficiency and generalization.
• Plug-and-play design makes it easy to integrate into existing agent systems.
Limitations:
• Performance still depends on the design and quality of the meta plans.
• Generalization needs to be verified across a wider variety of task types.
• Further research is needed on unexpected issues that may arise in real-world deployments.