This paper highlights the importance of explainable AI (XAI) for enhancing the transparency and reliability of AI-based threat detection in security operations centers (SOCs), and argues that determining the appropriate level and form of explanation remains a complex and underexplored challenge in environments that demand rapid decision-making in high-stakes situations. Through a three-month mixed-methods study combining an online survey (N1 = 248) with in-depth interviews (N2 = 24), we explore how SOC analysts conceptualize AI-generated explanations and which types of explanations are viable and credible across different analyst roles. Our results show that participants consistently accept XAI outputs when the explanations are perceived as relevant and evidence-based, even when predictive accuracy is low. Analysts repeatedly emphasized the importance of understanding the reasoning behind AI decisions and strongly preferred deeper contextual insight over results simply presented on a dashboard. Based on these insights, this study reevaluates current approaches to explanation in the security context and demonstrates that role-aware, context-rich XAI designs tailored to SOC workflows can substantially improve practical utility. Such tailored explainability enhances analyst understanding, increases classification efficiency, and enables more confident responses to evolving threats.