Daily Arxiv

This page organizes papers on artificial intelligence published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; when sharing, please cite the source.

The 2025 OpenAI Preparedness Framework does not guarantee any AI risk mitigation practices: a proof-of-concept for affordance analysis of AI safety policies

Created by
  • Haebom

Authors

Sam Coggins, Alexander K. Saeri, Katherine A. Daniell, Lorenn P. Ruster, Jessie Liu, Jenny L. Davis

Outline

This paper analyzes the "safety frameworks" that major AI companies have announced as part of their self-regulation efforts. Specifically, it applies affordance theory to OpenAI's Preparedness Framework (Version 2, April 2025) to assess how the document actually governs AI development and deployment. Using the Mechanisms & Conditions model of affordances and the MIT AI Risk Repository, the authors examine which AI risks OpenAI's framework addresses and which development and deployment activities it requests, demands, encourages, discourages, refuses, or allows.
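To make the "reproducible analytical method" concrete, the sketch below shows one way an affordance coding of a policy document could be organized in code. It is only an illustration under assumed names: the clause texts, risk categories, and the CodedClause/tally_mechanisms helpers are hypothetical and are not taken from the paper or from OpenAI's framework; only the six mechanism labels come from the Mechanisms & Conditions model.

```python
# Minimal sketch of an affordance-coding workflow: tag policy clauses with an
# affordance "mechanism" and a risk category, then tally the mechanisms.
# Clause texts, category names, and helper names are illustrative assumptions,
# not the paper's data or code.
from dataclasses import dataclass
from collections import Counter

# The six mechanisms from the Mechanisms & Conditions model of affordances
MECHANISMS = {"request", "demand", "encourage", "discourage", "refuse", "allow"}

@dataclass
class CodedClause:
    clause: str          # excerpt or paraphrase of a policy clause
    mechanism: str       # how the policy affords the activity
    risk_category: str   # e.g. a domain drawn from the MIT AI Risk Repository

def tally_mechanisms(coded: list[CodedClause]) -> Counter:
    """Count how often each affordance mechanism appears in the coded policy."""
    for c in coded:
        if c.mechanism not in MECHANISMS:
            raise ValueError(f"Unknown mechanism: {c.mechanism}")
    return Counter(c.mechanism for c in coded)

# Hypothetical coded examples (not quotes from the framework)
sample = [
    CodedClause("Capability evaluations must run before deployment", "demand", "Malicious use"),
    CodedClause("Deployment may proceed at 'medium' risk", "allow", "Severe harm"),
]
print(tally_mechanisms(sample))  # Counter({'demand': 1, 'allow': 1})
```

Tallying mechanisms in this way is what lets the paper report how much of the framework merely allows or encourages safety practices rather than demanding them.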

Takeaways, Limitations

• OpenAI's safety policy requires assessments for only a small number of AI risks.
• The framework permits deploying AI systems that pose a "medium" risk of unintentionally causing "severe harm" (defined by OpenAI as 1,000 or more deaths or $100 billion or more in damage).
• The framework allows OpenAI's CEO to approve developing AI at more dangerous risk levels.
• Mitigating AI risks will require stronger government regulation beyond current industry self-regulation.
• The study presents a reproducible analytical method for assessing what a safety framework actually permits and requires.