Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Debunking with Dialogue? Exploring AI-Generated Counterspeech to Challenge Conspiracy Theories

Written by
  • Haebom

Author

Mareike Lisker, Christina Gottschalk, Helena Mihaljević

Outline

This paper examines counterspeech (direct refutation) as a strategy for countering harmful online content, specifically conspiracy theories. Because expert-driven counterspeech does not scale, large language models (LLMs) have been proposed as an alternative; however, the authors highlight the lack of a counterspeech dataset for conspiracy theories. They evaluate the counterspeech generation capabilities of GPT-4o, Llama 3, and Mistral using structured prompts grounded in psychological research. The results show that these models tend to produce generic, repetitive, and superficial responses, overemphasize fear, and fabricate facts, sources, and figures, suggesting that prompt-based approaches face significant challenges for practical deployment.

Takeaways, Limitations

Takeaways: This study empirically explores the potential and limitations of LLM-generated counterspeech against conspiracy theories and suggests directions for future research. It clearly demonstrates that current LLMs are inadequate for generating effective counterspeech.
Limitations: Generalization is difficult due to the lack of an established evaluation dataset. The real-world effectiveness of the model-generated counterspeech was not validated. While the study clearly reveals the LLMs' shortcomings (generic responses, factual fabrication, and overemphasis on fear), it offers no concrete solutions for overcoming them.