Daily Arxiv

This page curates AI-related papers published around the world.
All content is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please credit the source when sharing.

The Influence of Human-inspired Agentic Sophistication in LLM-driven Strategic Reasoners

Created by
  • Haebom

Authors

Vince Trencsenyi, Agnieszka Mensfelt, Kostas Stathis

Outline

This paper evaluates the strategic reasoning capabilities of agents based on large language models (LLMs) in game-theoretic settings. Three agent designs, a simple game-theoretic model, an LLM-only agent, and an LLM integrated into a conventional agent framework, are evaluated in a guessing game and compared with human participants. Obfuscated game scenarios are also used to assess generalization beyond the training distribution. Analyzing more than 2,000 reasoning samples across 25 agent configurations, the authors show that designs mimicking human cognitive architecture can improve the alignment of LLM agents with human strategic behavior. However, the relationship between agent design complexity and human-likeness is nonlinear: it depends heavily on the capabilities of the underlying LLM, and simple structural augmentation offers only limited gains.
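The paper does not detail the exact game or scoring procedure here, but guessing games in this literature are commonly p-beauty-contest-style games, and human strategic depth is often modeled with level-k reasoning. The sketch below illustrates, under those assumptions, how agent guesses could be classified against level-k benchmarks and compared to a human reference level; the function names, parameters, and example guesses are illustrative and not taken from the paper.

```python
# Minimal sketch (assumptions: a p-beauty-contest-style guessing game and
# level-k benchmarks as a proxy for human-likeness; all names and values
# below are illustrative, not from the paper).

def level_k_guess(k: int, p: float = 2 / 3, anchor: float = 50.0) -> float:
    """Level-0 guesses the anchor; level-k best-responds k times: p^k * anchor."""
    return (p ** k) * anchor

def nearest_level(guess: float, p: float = 2 / 3, anchor: float = 50.0, max_k: int = 5) -> int:
    """Classify a guess by the level-k benchmark it lies closest to."""
    return min(range(max_k + 1), key=lambda k: abs(guess - level_k_guess(k, p, anchor)))

# Hypothetical guesses from the three agent designs described above.
agent_guesses = {"game_theoretic": 0.0, "llm_only": 41.0, "llm_in_framework": 23.0}
human_reference_level = 2  # illustrative assumption for a typical human player

for name, guess in agent_guesses.items():
    k = nearest_level(guess)
    print(f"{name:>16}: guess={guess:5.1f} -> closest to level-{k} "
          f"(distance from human reference level = {abs(k - human_reference_level)})")
```

In such a comparison, a purely game-theoretic agent converges to the equilibrium guess (level-infinity), while human players typically stop at shallow reasoning depths; an agent design is "more human-like" the closer its inferred level is to the human reference.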

Takeaways, Limitations

  • Takeaways: Designs that mimic human cognitive structures improve the alignment of LLM agents with human strategic behavior.
  • Takeaways: The relationship between agent design complexity and human-likeness is nonlinear.
  • Limitations: Agent performance depends heavily on the capabilities of the underlying LLM.
  • Limitations: Simple structural augmentation alone yields only limited improvements in human-likeness.