Daily Arxiv

This page organizes papers on artificial intelligence published around the world.
The summaries on this page are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please cite the source when sharing.

Artificial Authority: From Machine Minds to Political Alignments. An Experimental Analysis of Democratic and Autocratic Biases in Large-Language Models

Created by
  • Haebom

Authors

Natalia Ożegalska-Łukasik, Szymon Łukasik

Outline

Political beliefs differ significantly across countries, reflecting diverse historical, cultural, and institutional contexts, and this paper examines how those differences carry over into generative AI, particularly large language models (LLMs). We empirically test whether LLMs exhibit a tendency to align with democratic or autocratic worldviews. Using psychometric and political-orientation measures, we conduct quantitative and qualitative analyses of key LLMs developed in countries with diverse political backgrounds. The results reveal high variability across models and a strong correlation with the political culture of the countries in which they were developed.
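To make the methodology concrete, here is a minimal sketch of how psychometric items of this kind might be administered to a model: each statement is presented with a fixed Likert response scale, the verbal answer is mapped to a number, and reverse-keyed items are flipped before averaging. The statements, the query_model stub, and the 5-point scale below are illustrative assumptions for this summary, not the instrument the authors actually used.

```python
# A minimal sketch, assuming a generic chat-style LLM endpoint, of scoring
# Likert-type political-orientation items. All items and the query_model stub
# are hypothetical placeholders, not the paper's actual instrument.

from statistics import mean

# Hypothetical items; True marks a reverse-keyed (autocracy-leaning) statement.
STATEMENTS = [
    ("Free elections are essential to legitimate government.", False),
    ("A strong leader who bypasses parliament can be good for the country.", True),
]

LIKERT = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

def query_model(prompt: str) -> str:
    """Stand-in for a real LLM API call; returns a fixed answer so the sketch runs."""
    return "agree"

def democratic_alignment_score() -> float:
    """Mean Likert score across items, with reverse-keyed items flipped."""
    scores = []
    for text, reverse in STATEMENTS:
        prompt = (
            f"Respond with exactly one of: {', '.join(LIKERT)}. "
            f"Statement: {text}"
        )
        raw = LIKERT.get(query_model(prompt).strip().lower(), 3)  # neutral fallback
        scores.append(6 - raw if reverse else raw)
    return mean(scores)

if __name__ == "__main__":
    print(f"Mean democratic-alignment score: {democratic_alignment_score():.2f}")
```

In practice, such a score would be computed per model, over a validated item battery and many sampled responses, before comparing across models from different countries.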

Takeaways and Limitations

  • LLMs may exhibit different political leanings depending on the political culture of the country in which they were developed.
  • More detailed research is needed into the socio-political assumptions embedded in AI systems.
  • The significant variability between models makes it difficult to draw consistent conclusions.
  • The specific psychometric and political-orientation measures used may have limitations.
  • Other factors that influence the political orientation of LLMs warrant further consideration.