This paper addresses the issue of value alignment in multi-agent systems based on large language models (LLMs), particularly as AI research shifts from single-agent settings to multi-agent autonomous decision-making and collaboration in complex environments. The rapid advancement and diverse applications of LLMs have amplified situational and systemic risks, making value alignment crucial for ensuring that agents' goals, preferences, and behaviors remain consistent with human values and social norms. Motivated by the need for social governance grounded in a multi-layered value framework, we comprehensively examine value alignment using LLM-based multi-agent systems as a representative prototype of agent AI systems. We organize value principles hierarchically at the macro, meso, and micro levels, categorize application scenarios along a continuum from general to specific, and map value alignment methods and evaluations onto this hierarchical framework. Furthermore, we examine value coordination among multiple agents within agent AI systems in detail and suggest directions for future research.