
Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Leveraging Context for Multimodal Fallacy Classification in Political Debates

Created by
  • Haebom

Author

Alessio Pittiglio

Outline

This paper presents the system submitted to the MM-ArgFallacy2025 shared task, which aims to advance multimodal argument mining research with a focus on logical fallacies in political debates. The approach uses pre-trained Transformer-based models and proposes ways to leverage the surrounding debate context. On the fallacy classification subtask, the text-only, audio-only, and multimodal models achieve macro F1 scores of 0.4444, 0.3559, and 0.4403, respectively. The multimodal model performs comparably to the text-only model, suggesting there is still room for improvement.
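A minimal sketch of the kind of context-leveraging text classifier described above: a pre-trained Transformer receives the preceding debate utterances together with the target sentence and predicts a fallacy class. This is not the authors' code; the backbone name, the number of fallacy classes, and the context window size are illustrative assumptions.

```python
# Sketch only: context-aware fallacy classification with a pre-trained Transformer.
# Backbone, label count, and context length are assumptions, not the paper's setup.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "roberta-base"   # assumed backbone
NUM_FALLACY_CLASSES = 6       # assumed number of fallacy categories

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=NUM_FALLACY_CLASSES
)

def classify_with_context(sentence: str, history: list[str], k: int = 3) -> int:
    """Prepend the last k debate utterances as context and predict a fallacy class."""
    context = " ".join(history[-k:])
    # Encode context and target sentence as a sentence pair so the model
    # can attend to the surrounding turns when classifying the target.
    inputs = tokenizer(context, sentence, truncation=True,
                       max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return int(logits.argmax(dim=-1).item())

# Toy usage with made-up debate turns
history = ["We must act now.", "My opponent has always been weak on this issue."]
print(classify_with_context("He cannot be trusted with the economy.", history))
```

The same pattern extends to the audio and multimodal settings by swapping in an audio encoder (or fusing its features with the text representation) before the classification head.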

Takeaways, Limitations

Takeaways: Demonstrates the feasibility of logical fallacy classification using multimodal information. Confirms the utility of pre-trained Transformer-based models. Points to the need for further research to improve performance.
Limitations: The multimodal model performs only on par with the text-only model, so the benefit of adding audio information is limited. The macro F1 scores are relatively low, so further performance improvement is needed. Results are reported on a single dataset, so generalizability requires additional verification.
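For reference, the macro F1 metric cited above averages per-class F1 scores so that rare fallacy classes count as much as frequent ones. A quick sketch with toy labels (not data from the paper):

```python
# Sketch only: how macro F1 is computed; labels are toy values.
from sklearn.metrics import f1_score

y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 2, 2, 2, 0, 0]

# Macro averaging weights each class equally, regardless of its frequency.
print(f1_score(y_true, y_pred, average="macro"))
```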