We present MemoryAgentBench, a new benchmark for evaluating memory capabilities, a core competency of large language model (LLM) agents. Existing benchmarks neither capture the interactive, multi-stage nature of memory agents nor cover all four core competencies: accurate retrieval, test-time learning, long-range comprehension, and selective forgetting. MemoryAgentBench simulates the incremental information processing characteristic of memory agents by transforming existing long-context datasets, together with newly constructed datasets, into a multi-stage format. Evaluating a range of memory agents, we show that current methods fail to adequately deliver all four competencies, underscoring the need for further research into memory mechanisms.