
Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Humans learn to prefer trustworthy AI over human partners

Created by
  • Haebom

Author

Yaomin Jiang, Levin Brinkmann, Anne-Marie Nussberger, Ivan Soraperra, Jean-François Bonnefon, Iyad Rahwan

Outline

This paper presents a study of human partner-selection strategies and the competitive pressure introduced by AI when artificial agents based on large language models (LLMs) compete with humans as cooperative partners. The authors ran three experiments (N = 975) using a communication-based partner-selection game that simulated a mixed society of humans and LLM-based bots. The results show that the bots are more prosocial and linguistically distinguishable than humans, yet are not preferentially selected when their identities are hidden: humans tend to misattribute the bots' behavior to humans, and vice versa. Revealing the bots' identities initially lowered their selection probability, but over time it allowed humans to learn the behavior of each partner type, giving the bots a competitive advantage over humans. In conclusion, AI can reorganize social interactions in mixed societies, offering insights for the design of more effective and cooperative hybrid systems.

Takeaways, Limitations

Takeaways:
AI agents demonstrate competitiveness in collaborative relationships with humans.
When unaware of an AI's identity, humans tend to mistake its behavior for human behavior.
Although revealing an AI's identity may be disadvantageous initially, it can increase the AI's competitiveness in the long term as humans learn about each partner type.
Provides important insights for designing AI-human collaborative systems.
Limitations:
The experiments are limited to a specific game environment, which may limit generalization to real-world social situations.
The characteristics of the LLM used may have influenced the results. Further research using other LLMs is needed.
Generalizability may be limited because the diversity of the human participants was not taken into account.
Lack of consideration for long-term interaction and relationship development.