Daily Arxiv

This page collects papers on artificial intelligence published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; when sharing, please cite the source.

The AI in the Mirror: LLM Self-Recognition in an Iterated Public Goods Game

Created by
  • Haebom

Author

Olivia Long, Carter Teplica

Outline

To better understand AI-AI interactions in settings where multiple AI agents interact, this paper runs experiments using an iterated public goods game. Four different models were tested, each with and without reasoning capabilities, under two conditions: the other player was described either as "another AI agent" or as "themselves." We found that telling a large language model (LLM) that the other player is itself significantly affects its tendency to cooperate. Although the study is limited to a simple environment, it offers insight into multi-agent settings where cooperation can rise or fall unpredictably because agents "unconsciously" discriminate against one another.
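The iterated public goods game underlying the experiment can be sketched as follows. The endowment, pot multiplier, round count, and the fixed strategies below are illustrative assumptions for exposition, not the paper's actual parameters; in the paper's setup, each player's contribution would come from prompting an LLM with the game history and the (alleged) identity of the other player.

```python
# Minimal sketch of an iterated public goods game.
# Endowment, multiplier, and strategies are illustrative assumptions,
# not the configuration used in the paper.

def play_round(contributions, endowment=10.0, multiplier=1.6):
    """Payoffs for one round: each player keeps (endowment - contribution)
    and receives an equal share of the multiplied common pot."""
    share = sum(contributions) * multiplier / len(contributions)
    return [endowment - c + share for c in contributions]

def play_game(strategies, rounds=5, endowment=10.0, multiplier=1.6):
    """Iterate the game; each strategy sees the full contribution history."""
    history, totals = [], [0.0] * len(strategies)
    for _ in range(rounds):
        contributions = [s(history) for s in strategies]
        payoffs = play_round(contributions, endowment, multiplier)
        history.append(contributions)
        totals = [t + p for t, p in zip(totals, payoffs)]
    return totals

# Example: a full cooperator against a free rider. In the LLM experiment,
# these functions would instead query a model for its contribution.
cooperator = lambda history: 10.0
free_rider = lambda history: 0.0
print(play_game([cooperator, free_rider]))  # → [40.0, 90.0]
```

Note the dilemma structure this illustrates: free riding maximizes individual payoff in any single round, yet mutual cooperation yields a higher total than mutual defection, which is why the identity attributed to the other player can shift contribution behavior.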

Takeaways, Limitations

Takeaways:
Self-recognition and perception of the other agent can significantly influence cooperative behavior in interactions between AI agents.
"Unconscious" discrimination by AI agents can have unexpected effects on the level of cooperation in multi-agent systems.
The design and analysis of complex multi-agent systems should account for agents' awareness of themselves and of one another.
Limitations:
Because the study was conducted in a simple game environment (an iterated public goods game), the results are difficult to generalize to complex real-world situations.
The limited number and variety of models tested constrains generalization to other AI models.
The concept of “unconscious discrimination” is not clearly defined, and further research is needed.