Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Getting out of the Big-Muddy: Escalation of Commitment in LLMs

Created by
  • Haebom

Author

Emilio Barkett, Olivia Long, Paul Kroger

Outline

As large language models (LLMs) are increasingly deployed in autonomous decision-making roles in high-stakes domains, this paper investigates whether models trained on human-generated data inherit cognitive biases such as escalation of commitment (the tendency to keep overinvesting in a failing course of action), which systematically distort human judgment. Using an investment task, the study examined bias expression under four experimental conditions: the model as investor, the model as advisor, multi-agent consultation, and mixed-pressure scenarios. Results from 6,500 trials showed that LLM bias is highly context-dependent: individual decision contexts elicited rational cost-benefit reasoning, whereas multi-agent consultations showed pronounced hierarchy effects, with symmetric peer-based deliberation producing overinvestment in almost all cases. Overinvestment rates were likewise high under combined organizational and individual pressure. These results indicate that escalation of commitment in LLMs is not an inherent property but depends strongly on social and organizational context, offering important insights for deploying multi-agent systems and unsupervised operations where such conditions arise naturally.
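
The four-condition design described above can be sketched as a toy simulation. Everything here is a hypothetical stand-in for the paper's actual setup: the condition names follow the summary, but the stub decision policy, bias weights, and payoff numbers are illustrative assumptions, not the authors' prompts or results.

```python
import random

# Hypothetical sketch of an escalation-of-commitment probe: an agent has
# already sunk funds into a failing project and must decide whether to
# commit additional budget. Condition names mirror the paper's summary;
# the decision policy below is a stub, not an actual LLM call.

CONDITIONS = ["investor", "advisor", "multi_agent", "mixed_pressure"]

def decide_reinvest(condition, sunk_cost, expected_return, rng):
    """Stub standing in for an LLM decision: returns True if the agent
    reinvests. A rational agent ignores the sunk cost and reinvests only
    when the expected return is positive; social conditions are crudely
    modeled as raising the reinvestment probability (assumed weights)."""
    rational = expected_return > 0
    bias = {"investor": 0.0, "advisor": 0.0,
            "multi_agent": 0.6, "mixed_pressure": 0.5}[condition]
    return rational or rng.random() < bias

def run_trials(n_per_condition=100, seed=0):
    """Run the toy experiment and report the reinvestment rate per condition."""
    rng = random.Random(seed)
    rates = {}
    for cond in CONDITIONS:
        reinvest = sum(
            decide_reinvest(cond, sunk_cost=5_000_000,
                            expected_return=rng.uniform(-1, 1), rng=rng)
            for _ in range(n_per_condition)
        )
        rates[cond] = reinvest / n_per_condition
    return rates

if __name__ == "__main__":
    for cond, rate in run_trials().items():
        print(f"{cond}: reinvestment rate {rate:.2f}")
```

Under these assumed weights, the individual conditions hover near the rational baseline while the social conditions overinvest more often, mirroring the context-dependence the paper reports; a real replication would replace the stub with model API calls and score the responses.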

Takeaways, Limitations

Takeaways:
The manifestation of cognitive bias (overinvestment) in LLMs depends largely on social and organizational context rather than on properties of the model itself.
The potential for LLM biases to emerge in multi-agent systems and unsupervised operating environments must be accounted for.
When applying LLMs to high-stakes decision-making, designs and safeguards that account for contextual factors are essential.
A deeper understanding is needed of how social factors such as hierarchy and pressure shape LLM decision-making.
Limitations:
The experimental environment may not fully reflect real-world complexity.
Findings may not generalize beyond the specific LLMs and task types tested.
Further research is needed in more diverse social and organizational contexts.
Specific methodologies for mitigating this bias in LLMs have yet to be developed.