In response to the growing threat of phishing emails, this paper presents a study that optimizes and fine-tunes the Transformer-based DistilBERT model to improve phishing email detection performance. We apply preprocessing techniques to mitigate the class-imbalance problem in the dataset and experimentally demonstrate high detection accuracy. Furthermore, we ensure transparency by making the model's prediction process explainable through XAI techniques such as LIME and Transformer Interpret.
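To make the approach concrete, the following is a minimal sketch of the kind of pipeline the abstract describes: a DistilBERT sequence classifier for phishing-vs-legitimate emails whose individual predictions are explained with LIME. The model checkpoint, label mapping, maximum sequence length, and sample email are illustrative assumptions, not the authors' exact configuration, and the fine-tuning step itself is omitted.

```python
# Sketch only: DistilBERT classifier + LIME explanation for one email prediction.
# Hyperparameters and the example email are assumptions for illustration.
import torch
from transformers import DistilBertTokenizerFast, DistilBertForSequenceClassification
from lime.lime_text import LimeTextExplainer

tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
model = DistilBertForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)  # assumed labels: 0 = legitimate, 1 = phishing
model.eval()

# ... fine-tuning on a labeled (and rebalanced) email dataset would happen here ...

def predict_proba(texts):
    """Return class probabilities for raw email texts (the interface LIME expects)."""
    enc = tokenizer(list(texts), truncation=True, padding=True,
                    max_length=256, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits
    return torch.softmax(logits, dim=-1).numpy()

explainer = LimeTextExplainer(class_names=["legitimate", "phishing"])
explanation = explainer.explain_instance(
    "Your account has been suspended. Click here to verify your password.",
    predict_proba, num_features=8)
print(explanation.as_list())  # words ranked by their contribution to the predicted class
```

The same fine-tuned model can also be passed to Transformer Interpret's attribution classes to obtain token-level attributions directly from the model's gradients, complementing LIME's perturbation-based explanations.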