Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Comparative Analysis of Transformer Models in Disaster Tweet Classification for Public Safety

Created by
  • Haebom

Author

Sharif Noor Zisad, NM Istiak Chowdhury, Ragib Hasan

Outline

This paper addresses the automatic classification of disaster-related tweets on social media platforms such as Twitter, a crucial source of real-time information during disasters and public safety emergencies. Conventional machine learning models such as logistic regression, naive Bayes, and support vector machines struggle to capture the context and deeper meaning of informal, metaphorical, or ambiguous language, so we hypothesize, and experimentally validate, that Transformer-based models (BERT, DistilBERT, RoBERTa, and DeBERTa) perform better. In our experiments, BERT reaches 91% accuracy, significantly outperforming the conventional baselines (logistic regression and naive Bayes at 82%), which we attribute to contextual embeddings and attention mechanisms that capture nuanced language. We conclude that the Transformer architecture is better suited to public safety applications, offering higher accuracy, deeper language understanding, and better generalization to real-world social media text.
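
For readers who want to see what such a pipeline looks like in practice, below is a minimal sketch (not the authors' code) of fine-tuning BERT for binary disaster-tweet classification with the Hugging Face Transformers library. The toy in-memory dataset, checkpoint name, and hyperparameters are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: fine-tune BERT for binary disaster-tweet classification.
# Dataset, checkpoint, and hyperparameters below are illustrative assumptions.
import torch
from torch.utils.data import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

MODEL_NAME = "bert-base-uncased"  # could be swapped for DistilBERT/RoBERTa/DeBERTa

# Toy examples standing in for the labelled tweet corpus (1 = disaster-related).
tweets = [
    "Wildfire spreading fast near the highway, evacuations underway",
    "My mixtape is straight fire this summer",
    "Flood waters rising downtown, roads closed",
    "This traffic jam is a total disaster lol",
]
labels = [1, 0, 1, 0]

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

class TweetDataset(Dataset):
    """Wraps tokenized tweets so the Trainer can iterate over them."""
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True, max_length=64)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-disaster", num_train_epochs=1,
                           per_device_train_batch_size=2, logging_steps=1),
    train_dataset=TweetDataset(tweets, labels),
)
trainer.train()

# Inference on a new tweet: argmax over the two logits gives the predicted class.
inputs = tokenizer("Earthquake felt across the city, buildings shaking",
                   return_tensors="pt").to(model.device)
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print("disaster" if pred == 1 else "not disaster")
```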

Takeaways, Limitations

Takeaways:
We demonstrate that Transformer-based models achieve significantly higher accuracy than conventional machine learning baselines in classifying disaster-related tweets (a minimal baseline sketch follows this list).
The contextual embeddings and attention mechanisms of Transformer models are shown to be effective at understanding informal and ambiguous social media language.
The study highlights the potential of Transformer-based models in the public safety field and suggests they can contribute to more efficient disaster response.
Limitations:
The evaluation centers on a single Transformer model (BERT) and may lack a detailed comparative analysis of the other models.
Additional validation of real-time performance and scalability in actual disaster situations is required.
Further research is needed to determine generalizability across diverse linguistic and cultural backgrounds.
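
For comparison, here is a minimal sketch of the kind of classical baseline referred to above: TF-IDF features with logistic regression in scikit-learn. This is an assumed illustration, not the paper's setup, and the toy data is the same illustrative set used in the fine-tuning sketch.

```python
# Assumed classical baseline sketch: TF-IDF features + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "Wildfire spreading fast near the highway, evacuations underway",
    "My mixtape is straight fire this summer",
    "Flood waters rising downtown, roads closed",
    "This traffic jam is a total disaster lol",
]
labels = [1, 0, 1, 0]  # 1 = disaster-related

# Bag-of-words features have no notion of context, which is why metaphorical
# uses of words like "fire" or "disaster" are hard for this kind of baseline.
baseline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                         LogisticRegression(max_iter=1000))
baseline.fit(tweets, labels)
print(baseline.predict(["Earthquake felt across the city, buildings shaking"]))
```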