Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Mapping Industry Practices to the EU AI Act's GPAI Code of Practice Safety and Security Measures

Created by
  • Haebom

Authors

Lily Stelling, Mick Yang, Rokas Gipiškis, Leon Staufer, Ze Shen Chin, Simeon Campos, Ariel Gil, Michael Chen

Outline

This paper provides a detailed comparison between the safety and security measures proposed in the third draft of the EU AI Act's General-Purpose AI (GPAI) Code of Practice and the commitments and practices voluntarily adopted by leading AI companies. As the EU moves toward binding obligations for GPAI model providers, the Code of Practice is intended to serve as a bridge between legal requirements and concrete technical commitments. The analysis focuses on the draft's safety and security section (Commitments II.1–II.16) and documents excerpts from current public documentation relevant to each measure. It systematically reviews a variety of document types, such as state-of-the-art safety frameworks and model cards, from more than a dozen companies, including OpenAI, Anthropic, Google DeepMind, Microsoft, Meta, and Amazon. The report does not assess legal compliance and takes no normative position on the Code of Practice or on company policies; instead, it aims to facilitate ongoing dialogue between regulators and GPAI model providers by documenting precedents for a range of industry measures. Notably, for most of the measures in Commitments II.1–II.16, relevant citations were found in documents from at least five companies.

Takeaways, Limitations

Takeaways: Provides an overview of current industry practice relative to the safety and security measures in the EU AI Act's GPAI Code of Practice, fostering dialogue between regulators and industry. Offers useful input for future regulatory direction by comparing the voluntary efforts of major AI companies against legal requirements. The fact that relevant material was found across many companies' documents offers a glimpse into the feasibility of the Code of Practice.
Limitations: The report makes no judgments about legal compliance and offers no normative view on the Code of Practice or on company policies. The analysis covers a limited set of companies and may not fully reflect the industry as a whole. It relies solely on publicly available documents, so non-public practices are not taken into account.