This is a page that curates AI-related papers published worldwide. All content here is summarized using Google Gemini and operated on a non-profit basis. Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.
This paper proposes SE-Agent, a self-evolution (SE) framework that improves the performance of large language model (LLM)-based agents by effectively leveraging the interaction trajectories that emerge during problem solving. To overcome the limitations of existing methods such as MCTS, whose interdependent rollouts and lack of diversity lead to suboptimal results, SE-Agent iteratively optimizes the reasoning process through three operations on previous trajectories: modification, recombination, and improvement. This allows it to explore diverse solution paths, mitigate the impact of inefficient paths, and enhance performance. Experiments on SWE-bench Verified demonstrate state-of-the-art results, with performance gains of up to 55% across five strong LLMs.
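The three-operation loop described above can be sketched in a generic form. This is a minimal illustrative sketch, not the paper's actual implementation: the function names (`evolve_trajectories`, `modify`, `recombine`, `improve`), the pool-based selection, and the toy scoring are all assumptions made for clarity.

```python
import random

# Hypothetical sketch of an SE-style trajectory-evolution loop.
# All names and the selection strategy are illustrative assumptions,
# not the paper's actual algorithm.

def evolve_trajectories(initial, score, modify, recombine, improve,
                        generations=3, pool_size=4, seed=0):
    """Iteratively evolve a pool of solution trajectories."""
    rng = random.Random(seed)
    pool = list(initial)
    for _ in range(generations):
        candidates = list(pool)
        # 1) Modification: perturb each trajectory to explore nearby paths.
        candidates += [modify(t, rng) for t in pool]
        # 2) Recombination: splice pairs of trajectories so that
        #    candidates are not confined to a single rollout's choices.
        for _ in range(len(pool)):
            a, b = rng.sample(pool, 2)
            candidates.append(recombine(a, b, rng))
        # 3) Improvement: refine every candidate before selection.
        candidates = [improve(t) for t in candidates]
        # Keep only the best trajectories, pruning inefficient paths.
        pool = sorted(candidates, key=score, reverse=True)[:pool_size]
    return max(pool, key=score)

# Toy demo: a "trajectory" is a list of step qualities; higher sum is better.
modify = lambda t, rng: [min(9, s + rng.choice([0, 1])) for s in t]
recombine = lambda a, b, rng: [max(x, y) for x, y in zip(a, b)]
improve = lambda t: sorted(t, reverse=True)

best = evolve_trajectories([[1, 2, 3], [3, 1, 0]], sum,
                           modify, recombine, improve)
```

The key design point this sketch captures is that candidates in each generation are derived from *multiple* earlier trajectories, not a single search tree, which is how the framework avoids the interdependence problem attributed to MCTS.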
Takeaways, Limitations
•
Takeaways:
◦
Presents a novel approach to optimizing the problem-solving process of LLM-based agents.
◦
Addresses the interdependence and lack-of-diversity limitations of existing MCTS-based methods.
◦
Improves performance efficiently and expands the search space by reusing previous trajectories.
◦
Demonstrates strong performance on real GitHub issue-resolution tasks.
◦
Open-source release supports follow-up research and practical adoption.
•
Limitations:
◦
The effectiveness of SE-Agent may depend on the performance of the LLM used.
◦
Since the results are based on a single domain (GitHub issue resolution), further research is needed to determine generalizability.
◦
Further research is needed on optimization strategies for the three operations (modification, recombination, and improvement).
◦
SE-Agent's scalability to problems of very high complexity remains to be verified.