This paper presents a Holistic Explainable Artificial Intelligence (HXAI) framework, a user-centric approach that embeds explanation into every step of the data-analysis workflow and tailors those explanations to individual users. HXAI unifies six components—data, analysis setup, learning process, model output, model quality, and communication channels—into a single taxonomy, adapting each component to the needs of domain experts, data analysts, and data scientists. A catalogue of 112 questions operationalizes these needs, and a survey of contemporary tools reveals critical gaps in their applicability. Drawing on human explanation theory, human-computer interaction principles, and empirical user research, we identify the characteristics that make explanations clear, actionable, and cognitively manageable. The resulting taxonomy embodies these insights, reducing terminological ambiguity and enabling rigorous applicability analysis of existing toolchains. We further demonstrate how AI agents built on large language models can orchestrate diverse explanation techniques, transforming technical artifacts into stakeholder-specific explanations and bridging the gap between AI developers and domain experts. Unlike existing surveys and perspective papers, this study combines concepts from multiple fields, lessons learned from real-world projects, and a critical synthesis of the literature into a novel end-to-end perspective on transparency, trust, and responsible AI deployment.