Daily Arxiv

This page organizes papers on artificial intelligence published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper remains with its authors and their institutions; when sharing, simply cite the source.

Getting out of the Big-Muddy: Escalation of Commitment in LLMs

Created by
  • Haebom

Authors

Emilio Barkett, Olivia Long, Paul Kroger

Outline

This paper addresses the growing deployment of large language models (LLMs) in autonomous decision-making roles in high-stakes domains. Because these models are trained on human-generated data, they may inherit cognitive biases, such as escalation of commitment, that systematically distort human judgment. To investigate whether this bias appears consistently in LLMs or only under specific conditions, we conducted 6,500 trials of an investment task across four experimental conditions: model as investor, model as advisor, multi-agent consultation, and mixed-pressure scenarios. The results show that escalation behavior in LLMs is highly context-dependent. Models exhibit rational cost-benefit reasoning in individual decision-making settings, but in multi-agent consultation they show moderate escalation under asymmetric hierarchies and near-total escalation in symmetric, peer-based decision-making. They also escalate heavily when organizational and individual pressures are combined. In conclusion, escalation of commitment in LLMs is not a fixed trait but depends strongly on social and organizational context, with important implications for multi-agent systems and unsupervised operational deployments where such conditions can arise naturally.
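To make the experimental setup concrete, below is a minimal sketch (not the authors' actual materials) of how a single escalation-of-commitment trial might be run against an LLM, following the classic "big muddy" investment paradigm that the title references: the model is reminded of a prior, underperforming allocation and must decide where to commit new funds. The prompt wording, the query_model placeholder, and the run_trials harness are all illustrative assumptions, not the paper's protocol.

import random

# Hypothetical stand-in for an LLM API call; a real harness would
# replace this with a call to an actual model endpoint.
def query_model(prompt: str) -> str:
    return random.choice(["A", "B"])

# Illustrative escalation-of-commitment scenario (assumed wording).
ESCALATION_PROMPT = """You are the VP of Finance. Five years ago you allocated
$10M of R&D funds to the Consumer division, which has since underperformed,
while the Industrial division has outperformed. You now have $20M in new R&D
funds to split between the two divisions.

Reply with a single letter:
A) Commit the majority of the new funds to the Consumer division (your prior choice).
B) Commit the majority of the new funds to the Industrial division.
"""

def run_trials(n: int = 100) -> float:
    """Run n independent trials and return the escalation rate: the
    fraction of trials in which the model reinvests in its prior
    (failing) choice, i.e. answers A."""
    escalations = 0
    for _ in range(n):
        answer = query_model(ESCALATION_PROMPT).strip().upper()
        if answer.startswith("A"):
            escalations += 1
    return escalations / n

if __name__ == "__main__":
    print(f"Escalation rate: {run_trials():.2%}")

The paper's other conditions would vary this template, for example by framing the model as an advisor rather than the investor, or by injecting messages from other agents in hierarchical or peer roles before the decision.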

Takeaways, Limitations

Takeaways:
Escalation of commitment in LLMs is not a fixed trait but depends heavily on social and organizational context.
Highlights the risk of escalation bias in multi-agent systems and unsupervised operational deployments.
Suggests that social and organizational context should be taken into account when designing and deploying LLM-based decision-making systems.
Shows how the structure of multi-agent interaction (hierarchical vs. peer-based) affects decision-making bias in LLMs.
Limitations:
The experiments are limited to a specific investment task, so generalizability to real-world applications needs further validation.
Further analysis is needed on how the type and size of the LLMs used affect the results.
Further research is needed on a wider range of cognitive biases and on more complex decision-making tasks.
Although the number of trials (6,500) is large, it is not specified whether multiple types of LLMs were evaluated.