In this paper, we propose a Red-Team Multi-Agent Reinforcement Learning framework to address a key limitation of decision-making studies in safety-critical situations: they rely on inefficient data-driven scenario generation or on specific modeling approaches that fail to capture real-world corner cases. The framework treats background vehicles with interference capabilities as adversarial agents (red-team agents) that actively interfere with autonomous vehicles (AVs) and explore to discover corner cases outside the data distribution. Using a Constraint Graph Representation Markov Decision Process, the adversarial agents are compelled to continuously interfere with AVs while complying with safety rules. In addition, a policy threat zone model is constructed to quantify the threat that the adversarial agents pose to AVs, thereby inducing more extreme behaviors that raise the risk level of the scenario. Experimental results show that the proposed framework significantly challenges the safety of AVs' decision-making and generates diverse corner cases. This method provides a new direction for the study of safety-critical scenarios.
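To illustrate the kind of adversarial objective such a framework might use, the following is a minimal sketch, not the paper's actual formulation: a hypothetical threat-zone score that rewards a red-team vehicle for closing in on the AV, combined with a Lagrangian-style penalty for violating a safety rule, as in constrained MDP training. All function names, the zone radius, and the penalty weight `lam` are illustrative assumptions.

```python
import math

def threat_level(adv_pos, adv_vel, av_pos, av_vel, zone_radius=10.0):
    """Hypothetical threat metric: a red-team vehicle that is closer to the
    AV and approaching it faster scores higher; zero outside the assumed
    threat zone of radius `zone_radius` (meters)."""
    dx = av_pos[0] - adv_pos[0]
    dy = av_pos[1] - adv_pos[1]
    dist = math.hypot(dx, dy)
    if dist >= zone_radius or dist == 0.0:
        return 0.0
    # Closing speed: projection of the relative velocity onto the
    # line from the adversarial vehicle to the AV.
    rvx = adv_vel[0] - av_vel[0]
    rvy = adv_vel[1] - av_vel[1]
    closing = (rvx * dx + rvy * dy) / dist
    proximity = 1.0 - dist / zone_radius  # 1 at contact, 0 at zone edge
    return proximity * max(closing, 0.0)

def red_team_reward(adv_pos, adv_vel, av_pos, av_vel,
                    violates_safety_rule, lam=5.0):
    """Adversarial reward sketch: encourage threatening the AV while
    penalizing safety-rule violations with a constraint weight `lam`
    (Lagrangian-style, as in constrained policy optimization)."""
    r = threat_level(adv_pos, adv_vel, av_pos, av_vel)
    if violates_safety_rule:
        r -= lam
    return r
```

Under this sketch, a red-team vehicle 4 m from the AV and closing at 3 m/s earns a positive reward, while the same maneuver flagged as a rule violation is net-penalized, which is the intended pressure toward aggressive but rule-compliant interference.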