
Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Too Much to Trust? Measuring the Security and Cognitive Impacts of Explainability in AI-Driven SOCs

Created by
  • Haebom

Author

Nidhi Rastogi, Shirid Pant, Devang Dhanuka, Amulya Saxena, Pranjal Mairal

Outline

This paper highlights the importance of explainable AI (XAI) for enhancing the transparency and trustworthiness of AI-based threat detection in security operations centers (SOCs), and points out that determining the appropriate level and form of explanation remains a complex and underexplored challenge in environments that demand rapid, high-stakes decision-making. Through a three-month mixed-methods study combining an online survey (N1=248) and in-depth interviews (N2=24), the authors examine how SOC analysts conceptualize AI-generated explanations and which types of explanations are viable and credible across different analyst roles. The results show that participants consistently accept XAI outputs when explanations are perceived as relevant and evidence-based, even when predictive accuracy is low. Analysts repeatedly emphasized the importance of understanding the reasons behind AI decisions and strongly preferred deeper contextual understanding over results simply presented on a dashboard. Based on these insights, the study reevaluates current approaches to explanation in the security context and argues that role-aware, context-rich XAI designs tailored to SOC workflows can significantly enhance practical utility. Such tailored explainability can improve analyst understanding, increase triage efficiency, and enable more confident responses to evolving threats.
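The paper is a user study and does not prescribe an implementation, so the sketch below is only an illustration of what "role-aware, context-rich" explanations might look like in a SOC tool: a hypothetical explanation payload whose level of detail is filtered by analyst role. All role names, fields, and values here are assumptions made for this example, not details from the paper.

```python
# Illustrative sketch only: the paper does not specify an implementation.
# Hypothetical role-aware explanation payload for a SOC alert.
from dataclasses import dataclass, field

# Hypothetical mapping from analyst role to the explanation fields it receives.
ROLE_DETAIL = {
    "tier1_triage": ["verdict", "top_indicators"],
    "tier2_investigation": ["verdict", "top_indicators", "related_events", "feature_attributions"],
    "threat_hunter": ["verdict", "feature_attributions", "related_events", "model_confidence"],
}

@dataclass
class AlertExplanation:
    verdict: str
    model_confidence: float
    top_indicators: list
    feature_attributions: dict = field(default_factory=dict)
    related_events: list = field(default_factory=list)

    def for_role(self, role: str) -> dict:
        """Return only the explanation fields relevant to the given analyst role."""
        wanted = ROLE_DETAIL.get(role, ["verdict"])
        full = {
            "verdict": self.verdict,
            "model_confidence": self.model_confidence,
            "top_indicators": self.top_indicators,
            "feature_attributions": self.feature_attributions,
            "related_events": self.related_events,
        }
        return {k: full[k] for k in wanted}

if __name__ == "__main__":
    expl = AlertExplanation(
        verdict="likely phishing",
        model_confidence=0.72,
        top_indicators=["newly registered domain", "credential-harvesting form"],
        feature_attributions={"url_entropy": 0.41, "sender_reputation": -0.33},
        related_events=["similar email sent to 3 other users in the last hour"],
    )
    print(expl.for_role("tier1_triage"))        # brief, evidence-focused view
    print(expl.for_role("tier2_investigation")) # richer contextual view
```

The design choice reflected here follows the paper's finding that analysts want evidence and context behind a verdict rather than a bare dashboard label, with the depth of detail varying by role.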

Takeaways, Limitations

Takeaways:
SOC analysts are willing to accept XAI outputs even at lower predictive accuracy when the explanations provided are relevant and evidence-based.
Analysts prefer in-depth contextual explanations, i.e., understanding the reasons behind AI decisions, over results simply shown on a dashboard.
Role-aware, context-rich XAI designs integrated into SOC workflows can improve analyst understanding, triage efficiency, and confidence in threat response (see the sketch above).
Limitations:
The study subjects were limited to a specific group of SOC analysts, which may limit generalizability.
The relatively short three-month study period may preclude evaluation of long-term effects and sustained use.
Further research is needed on the generalizability of explainability to different types of AI-based threat detection systems.
The effectiveness of specific explanation approaches was not evaluated quantitatively.