This paper addresses the problem that deepfakes, synthetic media generated with cutting-edge AI techniques, exacerbate the spread of disinformation, particularly in politically sensitive contexts. Existing deepfake detection datasets are poorly suited to the broader task of detecting general synthetic images because of limitations such as outdated generation methods, unrealistic imagery, and a narrow focus on single face images. This study analyzes social media posts to identify the ways in which deepfakes spread disinformation. Furthermore, a human perception study demonstrates that synthetic images produced by recently developed proprietary models are difficult to distinguish from real ones. This paper therefore presents a comprehensive, politically focused dataset designed to benchmark the detection of images produced by state-of-the-art generative models. The dataset comprises 3 million real images with descriptive captions and 963,000 high-quality synthetic images generated by a range of proprietary and open-source models. Recognizing that generative techniques evolve continually, we also introduce a crowdsourced adversarial platform on which participants generate and submit challenging synthetic images. This ongoing, community-driven initiative helps keep deepfake detection methods robust and adaptable, proactively protecting public discourse from sophisticated disinformation threats.