This paper addresses the problem that the performance of reinforcement learning in robotic systems hinges on well-specified reward functions, yet manually designed rewards are often inaccurate and can cause learned policies to fail. Inverse reinforcement learning (IRL) sidesteps manual reward design by inferring an implicit reward function from expert demonstrations, but existing methods require large numbers of demonstrations to recover accurate rewards. Because collecting expert demonstrations is expensive, especially in multi-robot systems, this requirement severely hinders the practical deployment of IRL, making sample efficiency a critical challenge in multi-agent inverse reinforcement learning (MIRL). This paper theoretically demonstrates that exploiting the inherent symmetry of multi-agent systems enables the recovery of more accurate reward functions. Building on this insight, we propose a general framework that integrates symmetry into existing multi-agent adversarial IRL algorithms, substantially improving sample efficiency. Experiments on a range of challenging tasks demonstrate the effectiveness of the framework, and further validation on real-world multi-robot systems confirms its practicality.
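To make the core idea concrete, the sketch below illustrates one common way symmetry can be exploited in MIRL: augmenting the expert demonstration set with agent-permuted copies before adversarial training. This is a minimal illustration under the assumption of homogeneous agents whose joint dynamics are invariant to index permutation; the function and type names (`augment_demos`, `Demo`) are hypothetical and not taken from the paper's framework.

```python
# Minimal sketch: symmetry-based demonstration augmentation, assuming
# homogeneous agents so that permuting agent indices yields an equally
# valid expert trajectory. Names are illustrative, not the paper's API.
import itertools
import numpy as np
from typing import List, Tuple

# A demonstration is (observations, actions), each shaped (T, n_agents, dim).
Demo = Tuple[np.ndarray, np.ndarray]

def augment_demos(demos: List[Demo], max_perms: int = 6) -> List[Demo]:
    """Expand a demonstration set with agent-permuted copies.

    Each permutation produces an extra sample for the adversarial IRL
    discriminator at no additional data-collection cost, which is the
    source of the sample-efficiency gain.
    """
    augmented: List[Demo] = []
    for obs, acts in demos:
        n_agents = obs.shape[1]
        perms = list(itertools.permutations(range(n_agents)))
        for perm in perms[:max_perms]:
            idx = list(perm)
            augmented.append((obs[:, idx, :], acts[:, idx, :]))
    return augmented

# Usage: two agents, a 10-step trajectory, 4-dim observations, 2-dim actions.
rng = np.random.default_rng(0)
demo = (rng.normal(size=(10, 2, 4)), rng.normal(size=(10, 2, 2)))
print(len(augment_demos([demo])))  # 2 permutations of 2 agents -> 2 demos
```

Capping the number of permutations with `max_perms` keeps the augmented set tractable when the number of agents grows, since the full permutation group scales factorially.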