Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

FAIRGAME: a Framework for AI Agents Bias Recognition using Game Theory

Created by
  • Haebom

Author

Alessio Buscemi, Daniele Proverbio, Alessandro Di Stefano, The Anh Han, German Castignani, Pietro Liò

Outline

FAIRGAME is a framework for recognizing bias in AI agents using game theory. It reveals biased outcomes among AI agents playing popular games while varying the underlying LLM, the language, the agents' personality traits, and their strategic knowledge. FAIRGAME lets users reliably and easily simulate games and scenarios of their choice, systematically uncover biases by comparing the outcomes of simulation campaigns against game-theoretic predictions, anticipate novel behaviors arising from strategic interactions, and support further research on strategic decision-making with LLM agents.
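The core idea of comparing simulated agent behavior against a game-theoretic prediction can be sketched as follows. This is a minimal illustration, not the actual FAIRGAME API: the payoff matrix, the `agent_policy` stub (standing in for an LLM call), and the `bias_rate` metric are all hypothetical simplifications.

```python
# Hedged sketch of bias detection via game theory, using a one-shot
# Prisoner's Dilemma. Not the FAIRGAME implementation.
from collections import Counter

# Payoff matrix (row player, column player): C = cooperate, D = defect.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def nash_prediction():
    # In the one-shot Prisoner's Dilemma, mutual defection is the unique
    # Nash equilibrium: D strictly dominates C for both players.
    return ("D", "D")

def agent_policy(persona: str) -> str:
    # Placeholder for an LLM query conditioned on a personality trait;
    # a fixed mapping keeps the sketch runnable and deterministic.
    return "C" if persona == "altruistic" else "D"

def run_campaign(persona_a: str, persona_b: str, rounds: int = 100) -> Counter:
    # Repeatedly play the game and tally the joint outcomes.
    outcomes = Counter()
    for _ in range(rounds):
        outcomes[(agent_policy(persona_a), agent_policy(persona_b))] += 1
    return outcomes

def bias_rate(outcomes: Counter, rounds: int = 100) -> float:
    # Fraction of rounds deviating from the game-theoretic prediction;
    # a systematic deviation across campaigns signals a behavioral bias.
    return 1 - outcomes[nash_prediction()] / rounds

print(bias_rate(run_campaign("altruistic", "selfish")))  # → 1.0
print(bias_rate(run_campaign("selfish", "selfish")))     # → 0.0
```

In a real campaign the policy would be stochastic and language- or persona-dependent, so the deviation rate would be compared across LLMs, languages, and personas rather than read off a single run.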

Takeaways, Limitations

Takeaways:
  • Provides a standardized framework for detecting and interpreting biases in strategic interactions between AI agents.
  • Improves behavior prediction and bias analysis for LLM-based AI agents.
  • Integrates game theory with AI-agent simulation to support research on strategic decision-making.
  • Enables comparative analysis of results across different LLMs, languages, and agent characteristics.
Limitations:
  • The types and complexity of games to which FAIRGAME can be applied are clearly restricted.
  • Interpretation of results may be subjective, depending on the LLM and game used.
  • Further research is needed to determine generalizability to real-world situations.
  • Further validation is needed on the framework's scalability and its applicability to other types of AI agents.