This paper presents Nemori, a novel self-organizing memory architecture grounded in human cognitive principles, to address the inherent limitations of large language models (LLMs) as autonomous agents in long-term interactions. Inspired by event segmentation theory, Nemori's Two-Step Alignment Principle autonomously organizes conversational streams into semantically coherent episodes, addressing the problem of memory granularity. Furthermore, its Predict-Calibrate Principle, inspired by the free-energy principle, leverages prediction gaps to adaptively distill knowledge beyond predefined heuristics. Extensive experiments on the LoCoMo and LongMemEval benchmarks demonstrate that Nemori significantly outperforms existing state-of-the-art systems, particularly in longer contexts.