This paper addresses the growing concern among experts that increasingly autonomous and capable AI systems may cause catastrophic losses. Drawing on the successful precedent set by the nuclear power industry, we argue that developers of frontier AI models should be assigned limited, strict, and exclusive third-party liability for harms resulting from Critical AI Occurrences (CAIOs), that is, events that cause or could easily have caused catastrophic losses. We recommend mandatory insurance for CAIO liability to overcome developers' judgment-proofness, mitigate winner's-curse dynamics, and leverage insurers' quasi-regulatory abilities. Drawing on theoretical arguments and analogies from the nuclear power industry, we anticipate that insurers will engage in a mix of causal risk modeling, monitoring, lobbying for stricter regulation, and loss-prevention guidance in the course of insuring against heavy-tail risks from AI. While not a substitute for regulation, clear assignment of liability and mandatory insurance can help facilitate future regulatory efforts by efficiently allocating resources to risk modeling and safe design.