This paper proposes a hybrid framework that integrates lightweight machine learning (ML)-based attack detection with large language model (LLM)-based natural language explanations to address cybersecurity vulnerabilities such as false data injection attacks (FDIAs) targeting automatic generation control (AGC) systems in smart grids. A LightGBM classifier achieves up to 95.13% attack detection accuracy with an inference latency of 0.004 seconds. Once an attack is detected, an LLM, such as GPT-3.5 Turbo, GPT-4 Turbo, or GPT-4o mini, is invoked to generate a human-readable explanation of the event. Evaluation results show that GPT-4o mini with 20-shot prompting achieves 93% accuracy in identifying the attack target, a mean absolute error of 0.075 pu in attack magnitude estimation, and a mean absolute error of 2.19 seconds in attack onset estimation, effectively balancing real-time detection with interpretable, accurate explanations and addressing the critical need for actionable AI in smart grid cybersecurity.
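To make the detect-then-explain pipeline concrete, the following Python sketch illustrates one possible realization under stated assumptions: a pre-trained LightGBM detector loaded from a hypothetical model file, an OpenAI-compatible client for the explanation stage, and placeholder feature, prompt, and few-shot-example names that are illustrative rather than the authors' exact configuration.

```python
# Minimal sketch of the two-stage detect-then-explain pipeline described above.
# The model file name, threshold, prompt wording, and few-shot examples are
# assumptions for illustration, not the paper's exact implementation.
import time
import lightgbm as lgb
from openai import OpenAI

detector = lgb.Booster(model_file="agc_fdia_detector.txt")  # hypothetical pre-trained LightGBM model
client = OpenAI()  # expects OPENAI_API_KEY in the environment

def detect_and_explain(window_features, telemetry_text, few_shot_examples):
    """Run fast ML detection; on a positive result, ask an LLM to explain the event."""
    t0 = time.perf_counter()
    attack_prob = detector.predict([window_features])[0]  # millisecond-scale inference
    latency = time.perf_counter() - t0

    if attack_prob < 0.5:  # assumed decision threshold
        return {"attack": False, "latency_s": latency}

    # Few-shot prompt: prior labeled incidents followed by the current telemetry.
    messages = [{"role": "system",
                 "content": "You are a grid-security analyst. Identify the attacked "
                            "AGC signal, the injection magnitude (pu), and the onset time (s)."}]
    messages += few_shot_examples          # e.g., 20 (telemetry, explanation) message pairs
    messages.append({"role": "user", "content": telemetry_text})

    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return {"attack": True,
            "latency_s": latency,
            "explanation": response.choices[0].message.content}
```

The split keeps the latency-critical path (the LightGBM prediction) free of any LLM call, so explanation generation runs only after a detection is raised and does not affect real-time detection performance.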