This paper presents Debate-to-Detect (D2D), a novel misinformation detection framework that addresses the proliferation of misinformation on digital platforms by overcoming the limitations of existing static classification methods. Built on multi-agent debate (MAD), D2D reframes misinformation detection as a structured adversarial debate. Each agent is assigned a domain-specific profile, and the debate proceeds through five stages: opening remarks, rebuttals, open discussion, closing remarks, and judgment. Moving beyond simple binary classification, D2D introduces a multidimensional evaluation mechanism that scores arguments along five dimensions: factuality, source credibility, reasoning quality, clarity, and ethical considerations. Experimental results with GPT-4o on two datasets demonstrate significant performance improvements over existing methods, and case studies highlight D2D's ability to iteratively refine evidence and enhance decision transparency.
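To make the staged structure concrete, the sketch below illustrates one plausible way the five debate stages and the five-dimension judgment could be wired together. It is a minimal illustration only, not the paper's implementation: the `llm` callable, prompt wording, stance encoding, and scoring fallback are all assumptions introduced here for exposition.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical LLM interface: takes a prompt string, returns generated text
# (e.g. a thin wrapper around a GPT-4o chat call). Assumed for illustration.
LLM = Callable[[str], str]

STAGES = ["opening remarks", "rebuttals", "open discussion", "closing remarks"]
DIMENSIONS = ["factuality", "source credibility", "reasoning quality",
              "clarity", "ethical considerations"]

@dataclass
class Debater:
    name: str
    profile: str   # domain-specific role description assigned to the agent
    stance: str    # "affirmative" (claim is real) or "negative" (claim is fake)

    def speak(self, llm: LLM, claim: str, stage: str, transcript: List[str]) -> str:
        # Build a stage-specific prompt that includes the debate so far.
        prompt = (
            f"You are {self.name}, {self.profile}. Your stance: the claim is "
            f"{'true' if self.stance == 'affirmative' else 'false'}.\n"
            f"Claim: {claim}\nCurrent stage: {stage}\n"
            "Debate so far:\n" + "\n".join(transcript) +
            "\nGive your contribution for this stage."
        )
        return llm(prompt)

def run_debate(llm: LLM, claim: str, debaters: List[Debater]) -> Dict[str, float]:
    """Run the four argumentative stages, then score the judgment stage
    along the five evaluation dimensions."""
    transcript: List[str] = []
    for stage in STAGES:
        for debater in debaters:
            turn = debater.speak(llm, claim, stage, transcript)
            transcript.append(f"[{stage}] {debater.name}: {turn}")

    # Judgment stage: one judge query per evaluation dimension (assumed design).
    scores: Dict[str, float] = {}
    for dim in DIMENSIONS:
        verdict = llm(
            f"Claim: {claim}\nDebate transcript:\n" + "\n".join(transcript) +
            f"\nScore the affirmative side on {dim} from 0 to 1. Reply with a number."
        )
        try:
            scores[dim] = float(verdict.strip())
        except ValueError:
            scores[dim] = 0.5  # neutral fallback if the judge output is unparseable
    return scores
```

In this reading, a final verdict would aggregate the per-dimension scores (for example, by averaging or weighting them); the exact aggregation rule used by D2D is described in the paper itself rather than assumed here.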