This paper analyzes the "safety frameworks" announced by major AI companies as part of their self-regulation efforts. Specifically, we apply affordance theory to OpenAI's "Preparedness Framework Version 2" (April 2025) to assess how it governs actual AI development and deployment. Drawing on the Mechanisms & Conditions model and the MIT AI Risk Repository, we examine which AI risks OpenAI's framework addresses and which activities it allows, refuses, demands, encourages, or discourages.