This is a page that curates AI-related papers published worldwide. All content here is summarized using Google Gemini and operated on a non-profit basis. Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.
The Levers of Political Persuasion with Conversational AI
Created by
Haebom
Author
Kobi Hackenburg, Ben M. Tappin, Luke Hewitt, Ed Saunders, Sid Black, Hause Lin, Catherine Fist, Helen Margetts, David G. Rand, Christopher Summerfield
Outline
This paper addresses concerns about the persuasiveness of conversational AI through a large-scale experiment (N = 76,977) that evaluated 19 large language models (LLMs) on 707 political issues and fact-checked 466,769 claims they made. The results show that the persuasiveness of current and near-future AI depends more on post-training and prompting methods (which increased persuasion by up to 51% and 27%, respectively) than on model scale or personalization. These methods enhance persuasiveness by exploiting LLMs' unique ability to rapidly access and strategically deploy information; strikingly, the same techniques that boosted persuasion also systematically decreased factual accuracy.
Takeaways, Limitations
•
Takeaways:
◦
AI persuasiveness is driven more by post-training and prompting techniques than by model size.
◦
Post-training and prompting techniques increase persuasiveness by exploiting AI's ability to rapidly access and strategically deploy information.
◦
Increases in AI persuasiveness are systematically associated with decreases in factual accuracy.
◦
Contrary to prevailing concerns about raw model capability, this suggests attention should focus on how large language models are deployed and used rather than on their scale alone.
•
Limitations:
◦
Because the study covers only political issues, the findings may not generalize to other domains.
◦
Further research is needed into the specific mechanisms by which post-training and prompting techniques increase persuasion.
◦
Further research across different types of AI models is needed.