Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Exploring Content and Social Connections of Fake News with Explainable Text and Graph Learning

Created by
  • Haebom

Authors

Vitor N. Lourenço, Aline Paes, Tillman Weyde

Outline

To address the global spread of misinformation and growing concerns about content credibility, this paper proposes an explainable fact-checking framework that goes beyond content analysis by incorporating social media dynamics such as "likes" and user networks. Rather than simply labeling content as false, the framework combines content, social media, and graph-based features to improve fact-checking, and integrates explainability techniques that provide comprehensive, interpretable insights supporting its classification decisions. Experiments on English, Spanish, and Portuguese datasets show that multimodal information outperforms unimodal baselines. Using a novel evaluation protocol, the authors assess the interpretability, reliability, and robustness of the framework's explanations, demonstrating that it produces human-understandable evidence for its predictions.
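
To make the fusion idea concrete, below is a minimal, hypothetical sketch (not the authors' implementation) of combining TF-IDF text features, a social signal (like counts), and a graph feature (degree centrality of sharing users) into one classifier, with permutation importance standing in for the paper's explainability techniques. The toy data, feature names, and the choice of scikit-learn/NetworkX are all illustrative assumptions.

```python
# Minimal sketch of multimodal feature fusion for fact-checking.
# Not the authors' code; data and features are illustrative.
import numpy as np
import networkx as nx
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

# Toy data: claim text, per-post like counts, and a small user-share network.
texts = ["miracle cure found", "official report released",
         "celebrity secretly arrested", "budget figures published"]
labels = np.array([1, 0, 1, 0])                        # 1 = fake, 0 = credible
likes = np.array([[950.0], [120.0], [800.0], [60.0]])  # social signal

G = nx.Graph([("u1", "u2"), ("u2", "u3"), ("u3", "u1"), ("u4", "u5")])
sharers = [["u1", "u2"], ["u4"], ["u2", "u3"], ["u5"]]  # who shared each post

# Content features: TF-IDF over the claim text.
vectorizer = TfidfVectorizer()
X_text = vectorizer.fit_transform(texts)

# Graph feature: mean degree centrality of the users who shared the post.
centrality = nx.degree_centrality(G)
X_graph = np.array([[np.mean([centrality[u] for u in us])] for us in sharers])

# Fusion: concatenate content, social, and graph features into one matrix.
X = hstack([X_text, csr_matrix(likes), csr_matrix(X_graph)])
clf = LogisticRegression(max_iter=1000).fit(X, labels)

# Model-agnostic explanation: permutation importance per feature.
result = permutation_importance(clf, X.toarray(), labels,
                                n_repeats=10, random_state=0)
names = list(vectorizer.get_feature_names_out()) + ["likes", "graph_centrality"]
for name, score in sorted(zip(names, result.importances_mean),
                          key=lambda t: -t[1])[:5]:
    print(f"{name}: {score:.3f}")
```

Concatenation is the simplest fusion strategy; the actual framework likely uses richer graph learning and explanation methods, but the pipeline shape (content + social + graph features, then an interpretable readout) is the same.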

Takeaways, Limitations

Takeaways:
  • Integrating multimodal information (content, social media, and graph features) improves fact-checking performance.
  • The explainable design increases the transparency and reliability of classification decisions.
  • Evaluation on multilingual datasets (English, Spanish, Portuguese) verifies applicability across language environments.
  • A novel protocol is presented for assessing the interpretability, reliability, and robustness of explanations (a generic stability check is sketched after the Limitations list below).
Limitations:
  • Specific information about the size and diversity of the datasets used is lacking.
  • The generalization performance of the proposed framework needs further verification.
  • Further research is needed on the objectivity and generalizability of the explainability assessment protocol.
  • Practical deployment issues (e.g., continual generation of new false information, algorithmic bias) are not addressed.
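
The paper's exact evaluation protocol is not described here, but a common way to probe explanation robustness is to perturb the inputs slightly and check how stable the resulting feature importances remain. The sketch below illustrates that idea on synthetic data; the noise scale, model, and use of permutation importance are assumptions, not the authors' method.

```python
# Hedged sketch of one generic explanation-robustness check (not the
# paper's protocol): perturb inputs with small noise and measure how
# stable the feature-importance ranking stays via Spearman correlation.
import numpy as np
from scipy.stats import spearmanr
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

def importances(features):
    """Permutation importance of each feature on the given inputs."""
    r = permutation_importance(clf, features, y, n_repeats=10, random_state=0)
    return r.importances_mean

base = importances(X)
noisy = importances(X + rng.normal(scale=0.01, size=X.shape))  # tiny perturbation

rho, _ = spearmanr(base, noisy)
print(f"Explanation stability (Spearman rho): {rho:.3f}")  # near 1.0 = robust
```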