Daily Arxiv

This page curates papers on artificial intelligence published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright of each paper belongs to its authors and their institutions; when sharing, simply cite the source.

PRIME: Large Language Model Personalization with Cognitive Dual-Memory and Personalized Thought Process

Created by
  • Haebom

Authors

Xinliang Frederick Zhang, Nick Beauchamp, Lu Wang

Outline

This paper studies LLM personalization, i.e., adapting a large language model's (LLM's) output to reflect an individual user's preferences and opinions. Recognizing the lack of a unified theoretical framework for systematically understanding what drives effective personalization, we integrate the cognitive dual-memory model into LLM personalization, mapping episodic memory to past user engagements and semantic memory to long-term user beliefs, and on this basis propose a unified framework, PRIME. We further equip PRIME with a personalized thought process inspired by slow-thinking strategies. To evaluate personalization over long contexts, we introduce a purpose-built dataset drawn from Reddit's Change My View (CMV) forum. Extensive experiments validate PRIME's effectiveness, showing that it captures dynamic personalization beyond mere popularity bias.
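To make the dual-memory mapping more concrete, here is a minimal Python sketch of how episodic memory (past engagements) and semantic memory (long-term beliefs) might be represented and combined into a personalized prompt with a slow-thinking-style reasoning step. All names (DualMemory, build_personalized_prompt, etc.) are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class DualMemory:
    """Hypothetical dual-memory store: episodic traces and semantic beliefs."""
    episodic: list[str] = field(default_factory=list)  # concrete past user engagements
    semantic: list[str] = field(default_factory=list)  # distilled long-term user beliefs

    def add_engagement(self, text: str) -> None:
        """Record a new user interaction in episodic memory."""
        self.episodic.append(text)

    def consolidate(self, summarize) -> None:
        """Distill episodic traces into a semantic belief via a summarizer
        (in practice this could be an LLM call); `summarize` is any str -> str callable."""
        if self.episodic:
            self.semantic.append(summarize("\n".join(self.episodic)))

def build_personalized_prompt(memory: DualMemory, query: str) -> str:
    """Compose a prompt that asks the model to reason about the user first
    (a slow-thinking-style personalized thought process), then answer."""
    return (
        "Long-term user beliefs:\n- " + "\n- ".join(memory.semantic) + "\n\n"
        "Recent user engagements:\n- " + "\n- ".join(memory.episodic[-5:]) + "\n\n"
        f"Question: {query}\n"
        "First reason about how this user's beliefs and history shape their "
        "likely view, then produce the personalized answer."
    )

if __name__ == "__main__":
    mem = DualMemory()
    mem.add_engagement("Argued that cities should invest in public transit.")
    # Stand-in summarizer for the demo; a real system would call an LLM here.
    mem.consolidate(lambda text: "Values public infrastructure over car dependence.")
    print(build_personalized_prompt(mem, "Should downtown parking be expanded?"))
```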

Takeaways, Limitations

Takeaways:
  • Presents a novel approach by integrating the cognitive dual-memory model into LLM personalization.
  • Proposes PRIME, a unified framework for personalization.
  • Adds a personalized thought process inspired by slow-thinking strategies.
  • Introduces a new dataset for evaluating long-context personalization.
  • Demonstrates PRIME's effectiveness and its dynamic personalization capabilities through extensive experiments.
Limitations:
  • Specific limitations are not discussed in the abstract.