This is a page that curates AI-related papers published worldwide. All content here is summarized using Google Gemini and operated on a non-profit basis. Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.
In Prospect and Retrospect: Reflective Memory Management for Long-term Personalized Dialogue Agents
Created by
Haebom
Author
Zhen Tan, Jun Yan, I-Hung Hsu, Rujun Han, Zifeng Wang, Long T. Le, Yiwen Song, Yanfei Chen, Hamid Palangi, George Lee, Anand Iyer, Tianlong Chen, Huan Liu, Chen-Yu Lee, Tomas Pfister
Outline
This paper proposes Reflective Memory Management (RMM), a novel mechanism for retaining personalized information across long-term conversations. To overcome the limitations of existing external-memory approaches, which rely on fixed memory granularity and static retrieval mechanisms, RMM integrates Prospective Reflection (dynamically summarizing interactions at the utterance, turn, and session levels to build a personalized memory bank) with Retrospective Reflection (iteratively refining retrieval via online reinforcement learning based on citation evidence from the LLM). Experimental results demonstrate consistent performance improvements across various metrics and benchmarks, including an accuracy gain of more than 10% on the LongMemEval dataset over a baseline that uses no memory management.
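The two reflections can be pictured with a toy sketch. This is not the paper's implementation: the `MemoryBank` class, the word-overlap scorer standing in for a dense retriever, and the simple additive weight update are all illustrative assumptions; the actual RMM trains a reranker with online reinforcement learning signals derived from LLM citations.

```python
import math
from collections import defaultdict

class MemoryBank:
    """Toy sketch of RMM's two reflections (illustrative, not the paper's code).

    Prospective reflection: store conversation summaries at several granularities.
    Retrospective reflection: reweight memories online depending on whether the
    LLM's answer actually cited them.
    """

    def __init__(self, lr=0.5):
        self.entries = []                          # list of (granularity, summary text)
        self.weights = defaultdict(lambda: 1.0)    # learned reranking weight per entry
        self.lr = lr

    def prospective_add(self, granularity, summary):
        # granularity is one of "utterance", "turn", "session"
        self.entries.append((granularity, summary))

    def _base_score(self, query, text):
        # stand-in for a dense retriever: normalized word-overlap similarity
        q, t = set(query.lower().split()), set(text.lower().split())
        return len(q & t) / math.sqrt(len(q) * len(t) + 1e-9)

    def retrieve(self, query, k=3):
        # rank by retriever score scaled by the online-learned weight
        scored = [(self._base_score(query, text) * self.weights[i], i, text)
                  for i, (_, text) in enumerate(self.entries)]
        scored.sort(reverse=True)
        return [(i, text) for _, i, text in scored[:k]]

    def retrospective_update(self, retrieved_ids, cited_ids):
        # reinforce memories the LLM cited; down-weight retrieved-but-uncited ones
        for i in retrieved_ids:
            reward = 1.0 if i in cited_ids else -1.0
            self.weights[i] = max(0.1, self.weights[i] + self.lr * reward)
```

In use, each response cycle would retrieve memories, let the LLM answer with citations, and feed the cited IDs back through `retrospective_update`, so frequently useful memories surface more readily over time.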
Takeaways, Limitations
•
Takeaways:
◦
Presents a novel memory management mechanism that improves the performance of long-term conversational agents.
◦
Enables dynamic, adaptive memory management through prospective and retrospective reflection.
◦
Effectively captures the semantic structure of conversations by operating at multiple memory granularities (utterance, turn, and session).
◦
Improves retrieval quality over time via online reinforcement learning.
◦
Achieves an accuracy improvement of more than 10% over existing methods on the LongMemEval dataset.
•
Limitations:
◦
Further research is needed on RMM's generalization and its applicability to diverse conversation types.
◦
The computational cost and efficiency of the online reinforcement learning process need improvement.
◦
Analysis is needed to determine whether RMM's memory management strategy is biased toward certain conversation types.