This paper explores the automatic classification of disaster-related tweets on social media platforms such as Twitter, a crucial source of real-time information during disasters and public safety emergencies. Conventional machine learning models, such as logistic regression, naive Bayes, and support vector machines, struggle to capture the context or deeper meaning of informal, metaphorical, or ambiguous language; we therefore hypothesize, and experimentally validate, that Transformer-based models (BERT, DistilBERT, RoBERTa, and DeBERTa) perform better. Experimental results show that BERT, with 91% accuracy, significantly outperforms the conventional models (logistic regression and naive Bayes, 82%), reflecting its ability to capture nuanced language through contextual embeddings and attention mechanisms. We conclude that the Transformer architecture is better suited to public safety applications, offering higher accuracy, deeper language understanding, and stronger generalization to real-world social media text.
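As a minimal illustration of the conventional baseline described above, the sketch below trains a TF-IDF plus logistic regression classifier on a handful of hypothetical example tweets (not the paper's dataset); the metaphorical use of "fire" in the last example hints at the kind of ambiguity such bag-of-words models struggle with.

```python
# Hypothetical sketch of a conventional baseline: TF-IDF features fed into
# logistic regression for binary disaster vs. non-disaster tweet classification.
# Tweets and labels are illustrative placeholders, not the paper's data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "Forest fire near the town, evacuation underway",        # disaster
    "Residents asked to shelter in place after explosion",   # disaster
    "What a beautiful sunny day at the beach",               # not disaster
    "This new album is fire, absolutely love it",            # metaphorical 'fire'
]
labels = [1, 1, 0, 0]

# TF-IDF maps each tweet to a sparse word/bigram count vector; logistic
# regression then fits a linear decision boundary over those features.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(tweets, labels)

prediction = model.predict(["Wildfire spreading near the highway"])[0]
print(prediction)
```

Because the features are surface word counts, the model has no mechanism for resolving context-dependent meaning; Transformer models address this with contextual embeddings and self-attention, at the cost of far heavier computation.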