PARL-MT: Progress Awareness for Multi-Turn Function Calling
Outline
This paper proposes PARL-MT, a framework for improving the performance of large language models (LLMs) on real-world tasks that require multi-turn function calling, such as travel planning or multi-step data analysis. PARL-MT targets the difficulties LLMs face in multi-turn conversations: tracking progress, summarizing past interactions, and planning future steps so that performance remains consistent across turns. The framework explicitly integrates progress awareness into LLM training: a Progress Aware Generation (PAG) pipeline automatically builds a dataset that pairs conversation summaries with future task plans, and a Progress-Aware Guided Reinforcement Learning (PAG-RL) algorithm uses this progress signal to reduce contextual redundancy and improve alignment between local turn-level actions and global task completion.
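To make the progress-awareness idea concrete, below is a minimal Python sketch of how an explicit (summary, plan) progress state and a blended local/global reward could look. It is based only on the description above: the ProgressState fields, the summarize_and_plan interface, and the reward weight alpha are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of progress awareness for multi-turn function calling.
# Names and interfaces here are assumptions for illustration only.

from dataclasses import dataclass, field


@dataclass
class ProgressState:
    """Explicit progress awareness: what has happened and what remains."""
    summary: str = ""                              # condensed history of past turns
    plan: list[str] = field(default_factory=list)  # remaining sub-tasks


def update_progress(state: ProgressState, turn: str, llm) -> ProgressState:
    """PAG-style step (assumed): summarize the new turn and revise the plan,
    instead of carrying the full raw conversation history forward."""
    prompt = (
        f"Summary so far: {state.summary}\n"
        f"New turn: {turn}\n"
        "Update the summary and list the remaining sub-tasks."
    )
    # `summarize_and_plan` is an assumed LLM interface returning (str, list[str]).
    summary, plan = llm.summarize_and_plan(prompt)
    return ProgressState(summary=summary, plan=plan)


def shaped_reward(turn_success: bool, task_done: bool, alpha: float = 0.3) -> float:
    """PAG-RL-style reward (assumed form): blend local per-turn success with
    global task completion to keep the policy aligned with the final goal."""
    return alpha * float(turn_success) + (1 - alpha) * float(task_done)


if __name__ == "__main__":
    # Example: a successful turn on a still-unfinished task earns partial credit.
    print(shaped_reward(turn_success=True, task_done=False))  # 0.3
```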
Takeaways, Limitations
•
Takeaways:
◦
Explicitly integrating progress awareness into LLM training improves the efficiency and robustness of multi-turn function calling.
◦
The Progress Aware Generation (PAG) pipeline automatically builds a dataset that pairs conversation summaries with future task plans.
◦
The Progress-Aware Guided Reinforcement Learning (PAG-RL) algorithm reduces contextual redundancy and improves alignment between local and global task completion.
◦
PARL-MT outperforms existing methods on two public benchmarks.
•
Limitations:
◦
No limitations are explicitly discussed in the paper.