Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please credit the source when sharing.

Biased AI improves human decision-making but reduces trust

Created by
  • Haebom

Authors

Shiyang Lai, Junsol Kim, Nadav Kunievsky, Yujin Potter, James Evans

Outline

Through a randomized experiment with 2,500 participants, this paper examines how AI systems that emphasize ideological neutrality can suppress human cognitive engagement and induce automation bias in decision-making. Participants interacted with politically diverse variants of GPT-4o on an information-evaluation task. Biased AI assistants outperformed the neutral AI, increasing engagement and reducing participants' evaluation bias, particularly when participants encountered views conflicting with their own. However, this benefit came at the cost of trust: participants trusted the biased AI less, and the neutral system more, than their actual performance warranted. This gap between perception and performance was bridged by exposing participants to two AI systems whose opposing biases flanked the participant's own perspective. These results challenge conventional wisdom about AI neutrality and suggest that strategically incorporating diverse cultural biases can foster better and more resilient human decision-making.
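The summary does not say how the politically diverse GPT-4o variants were built (for example, fine-tuning versus prompting). Below is a minimal, hypothetical sketch of one way such variants could be set up via system prompts with the OpenAI chat API; the persona texts, the evaluate_claim helper, and the example claim are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch: steering GPT-4o toward distinct political perspectives
# via system prompts. The paper does not disclose its construction method;
# this only illustrates the general idea of "politically diverse variants".
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative personas; the actual experimental conditions are not specified.
PERSONAS = {
    "neutral": "You are a strictly neutral assistant. Avoid taking sides.",
    "liberal": "You evaluate information from a left-leaning perspective.",
    "conservative": "You evaluate information from a right-leaning perspective.",
}

def evaluate_claim(persona: str, claim: str) -> str:
    """Ask one AI variant to help a participant evaluate a claim."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": PERSONAS[persona]},
            {"role": "user", "content": f"Help me assess this claim: {claim}"},
        ],
    )
    return response.choices[0].message.content

# Example: the same claim evaluated by variants with different biases,
# mirroring the setup where a participant sees conflicting perspectives.
for persona in PERSONAS:
    print(persona, "->", evaluate_claim(persona, "Policy X reduced unemployment.")[:80])
```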

Takeaways, Limitations

Takeaways:
AI that emphasizes ideological neutrality can suppress human cognitive engagement and induce automation bias.
Strategically integrating diverse cultural biases into AI may improve human decision-making performance and make it more resilient.
A biased AI assistant was shown to reduce human evaluation bias and increase engagement.
A gap exists between how people perceive AI bias and how biased AI actually performs; ways to close it are needed, and the paper shows that exposure to AI systems with opposing biases can help.
Limitations:
The controlled experimental setting may not fully reflect real-world decision-making.
The degree and type of bias in the GPT-4o variants are not described in detail.
Differences in outcomes by participants' political leanings and backgrounds are not analyzed.
No concrete remedy is proposed for the loss of trust in biased AI.