This paper analyzes the impact of adversarial attacks on model interpretability in text classification. We train a machine learning classifier on text data, introduce adversarial perturbations, and measure the resulting degradation in classification performance. We then compare the model's explanations before and after the attack. This work is part of a broader study of the vulnerability of deep learning models to adversarial attacks, which can have serious consequences in domains such as autonomous driving, medical diagnosis, and security systems.
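
As a concrete illustration of the before/after-attack pipeline described above, the minimal sketch below trains a simple text classifier, perturbs the test inputs, and compares both accuracy and per-document explanations on clean and attacked text. The specific choices here are illustrative assumptions, not the paper's actual setup: a TF-IDF + logistic regression model stands in for the classifier, a naive character-transposition stands in for the adversarial attack, and leave-one-word-out importance stands in for the explainability method.

```python
# Sketch of the pipeline: train, attack, re-evaluate, compare explanations.
# All modeling choices below are illustrative stand-ins, not the paper's setup.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline

# Train a simple text classifier on two newsgroup categories.
cats = ["sci.med", "sci.space"]
train = fetch_20newsgroups(subset="train", categories=cats)
test = fetch_20newsgroups(subset="test", categories=cats)
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(train.data, train.target)

def perturb(text: str) -> str:
    """Naive adversarial stand-in: transpose two characters in long words."""
    words = []
    for w in text.split():
        if len(w) > 4:
            w = w[0] + w[2] + w[1] + w[3:]  # swap 2nd and 3rd characters
        words.append(w)
    return " ".join(words)

def word_importance(text: str, max_words: int = 40):
    """Leave-one-word-out explanation: drop each word, measure score change."""
    words = text.split()[:max_words]  # truncate to keep the sketch fast
    truncated = " ".join(words)
    base = clf.predict_proba([truncated])[0].max()
    scores = []
    for i in range(len(words)):
        reduced = " ".join(words[:i] + words[i + 1:])
        scores.append((words[i], base - clf.predict_proba([reduced])[0].max()))
    return sorted(scores, key=lambda p: -p[1])[:5]

# Classification performance before and after the attack.
clean_acc = accuracy_score(test.target, clf.predict(test.data))
attacked = [perturb(t) for t in test.data]
attacked_acc = accuracy_score(test.target, clf.predict(attacked))
print(f"accuracy clean: {clean_acc:.3f}, after attack: {attacked_acc:.3f}")

# Compare the explanation of one document before and after perturbation.
doc = test.data[0]
print("top words (clean):   ", word_importance(doc))
print("top words (attacked):", word_importance(perturb(doc)))
```

Comparing the top-ranked words on clean versus perturbed input gives a rough, model-agnostic view of how the attack shifts the explanation, which is the kind of before/after interpretability comparison the study performs with its own classifier, attack, and explanation method.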