This paper challenges the common perception that the European Union (EU)'s regulatory framework for artificial intelligence (AI) is a principled model grounded in fundamental rights. It argues that while EU AI regulation, centered on the General Data Protection Regulation (GDPR), the Digital Services Act (DSA), the Digital Markets Act (DMA), and the AI Act, is often framed in rights-based discourse, rights are in practice leveraged as instruments of governance: mitigating technological disruption, managing geopolitical risk, and maintaining systemic balance. Through comparative institutional analysis, the paper situates the EU's AI governance within a long-standing legal tradition shaped by the need to coordinate power across jurisdictions, contrasting it with the US model, which is rooted in decentralized power, sectoral pluralism, and constitutional preferences for innovation and individual autonomy. Case studies in five key areas (data privacy, cybersecurity, healthcare, labor, and disinformation) demonstrate that EU regulation is not, as often claimed, a meaningfully rights-based approach, but is instead built around institutional risk management. The paper concludes that the EU model should be understood not as a normative ideal for other jurisdictions to adopt uncritically, but as a historically contingent response to the EU's own political conditions.