This paper studies LLM personalization, the task of adapting the output of a large language model (LLM) to reflect an individual user's preferences and opinions. Recognizing the lack of a unified theoretical framework for systematically understanding the driving forces behind effective personalization, we ground LLM personalization in the cognitive dual-memory model. Specifically, we propose a unified framework, PRIME, which maps episodic memory to a user's past engagement and semantic memory to the user's long-term beliefs. We further augment PRIME with a personalized thinking capability inspired by slow-thinking strategies. To evaluate personalization over long-term context, we introduce a dataset specifically constructed from Reddit's Change My View (CMV) community. Extensive experiments validate the effectiveness of PRIME, showing that it captures dynamic personalization beyond mere popularity bias.