This paper presents Nemori, a novel self-organizing memory architecture inspired by human cognitive principles, designed to address the lack of persistent long-term memory that limits large language models (LLMs) as autonomous agents in extended interactions. Nemori addresses the granularity problem of memory units through its Two-Step Alignment Principle, inspired by event segmentation theory, which autonomously organizes conversational streams into semantically coherent episodes. Complementing this, its Predict-Calibrate Principle, inspired by the free energy principle, uses prediction gaps to drive adaptive knowledge evolution beyond predefined heuristics. Extensive experiments on the LoCoMo and LongMemEval benchmarks demonstrate that Nemori significantly outperforms existing state-of-the-art systems, particularly in long-term contexts.