Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Large Language Model-Based Framework for Explainable Cyberattack Detection in Automatic Generation Control Systems

Created by
  • Haebom

Author

Muhammad Sharshar, Ahmad Mohammad Saber, Davor Svetinovic, Amr M. Youssef, Deepa Kundur, Ehab F. El-Saadany

Outline

This paper proposes a hybrid framework that integrates lightweight machine learning (ML)-based attack detection with large language model (LLM)-based natural language explanations to address cybersecurity vulnerabilities, such as false data injection attacks (FDIAs), targeting automatic generation control (AGC) systems in smart grids. A LightGBM classifier achieves up to 95.13% attack detection accuracy with an inference latency of 0.004 seconds. Once a cyberattack is detected, an LLM, such as GPT-3.5 Turbo, GPT-4 Turbo, or GPT-4o mini, is invoked to generate a human-readable explanation of the event. Evaluation results show that GPT-4o mini with 20-shot prompting achieves 93% target identification accuracy, a mean absolute error of 0.075 pu in attack magnitude estimation, and a mean absolute error of 2.19 seconds in attack onset estimation, effectively balancing real-time detection with interpretable, accurate explanations. This addresses the critical need for actionable AI in smart grid cybersecurity.
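The detect-then-explain flow described above can be sketched as a minimal two-stage pipeline. This is not the paper's implementation: the detector below is a simple threshold stand-in for the trained LightGBM classifier, and the prompt builder only illustrates the shape of the few-shot prompting used to query an LLM such as GPT-4o mini. All function names, thresholds, and the example telemetry are hypothetical.

```python
# Minimal sketch of the two-stage detect-then-explain pipeline.
# The detector stands in for the trained LightGBM classifier; the prompt
# builder illustrates the paper's few-shot prompting. All names, thresholds,
# and telemetry values here are hypothetical.

def detect_attack(ace_window, threshold=0.5):
    """Stand-in for the LightGBM classifier: flag a window of Area
    Control Error (ACE) samples whose peak deviation exceeds a threshold."""
    return max(abs(s) for s in ace_window) > threshold

def build_explanation_prompt(ace_window, few_shot_examples):
    """Assemble a few-shot prompt asking the LLM to name the attack
    target and estimate magnitude (pu) and onset time, mirroring the
    quantities the paper evaluates."""
    shots = "\n\n".join(few_shot_examples)
    return (
        f"{shots}\n\n"
        "Telemetry (ACE, pu): "
        f"{', '.join(f'{s:.3f}' for s in ace_window)}\n"
        "Identify the attacked signal, estimate the injected magnitude in pu, "
        "and estimate the attack onset time in seconds."
    )

# Hypothetical telemetry: a ~0.8 pu bias injected partway through the window.
window = [0.01, -0.02, 0.03, 0.81, 0.79, 0.82]

if detect_attack(window):
    prompt = build_explanation_prompt(window, ["<example 1>", "<example 2>"])
    # In the deployed framework this prompt would be sent to the LLM
    # (e.g. GPT-4o mini); here we only construct it.
```

The key design point the paper highlights is that the cheap classifier runs on every sample for low-latency detection, while the comparatively expensive LLM call is made only after an alarm fires.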

Takeaways, Limitations

Takeaways:
Combining real-time attack detection using a lightweight ML model with explainable AI using LLMs can improve the reliability and practicality of smart grid cybersecurity.
Real-time attack detection is achievable with high accuracy (95.13%) and low latency (0.004 seconds).
LLMs can support operator decision-making by accurately describing the attack target, magnitude, and onset time.
Limitations:
The LLM's explanations are not perfectly accurate (there are nonzero mean absolute errors in magnitude and onset estimates); improved LLMs or prompt engineering techniques may be required.
Further validation of the proposed framework in actual smart grid environments is required.
Invoking an LLM may increase computational cost and latency.
Additional evaluation of generalization to diverse types of FDIAs is needed.