This paper provides a comprehensive review of self-evolving agent systems. Recent advances in large language models have fueled growing interest in AI agents capable of solving complex real-world tasks. However, most existing agent systems rely on manually crafted configurations that remain static after deployment, limiting their ability to adapt to dynamic and evolving environments. To address this limitation, recent research has explored agent evolution techniques that automatically improve agent systems based on interaction data and environmental feedback. We present a unified conceptual framework that abstracts the feedback loop underlying the design of self-evolving agent systems. The framework comprises four core components: system input, agent system, environment, and optimizer, and it provides a foundation for understanding and comparing different evolution strategies. Building on this framework, we systematically review a wide range of self-evolving techniques that target different components of agent systems, and we examine domain-specific evolution strategies developed in specialized fields such as biomedicine, programming, and finance. We further provide dedicated discussions of the evaluation, safety, and ethical considerations of self-evolving agent systems. This survey aims to give researchers and practitioners a systematic understanding of self-evolving AI agents and to lay the groundwork for developing more adaptive, autonomous, and lifelong agent systems.