Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

A Comprehensive Perspective on Explainable AI across the Machine Learning Workflow

Created by
  • Haebom

Author

George Paterakis, Andrea Castellani, George Papoutsoglou, Tobias Rodemann, Ioannis Tsamardinos

Outline

This paper presents Holistic Explainable Artificial Intelligence (HXAI), a user-centric framework that integrates explanations into every step of the data analysis workflow and tailors them to individual users. HXAI unifies six components (data, analysis setup, learning process, model output, model quality, and communication channels) into a single taxonomy, adapting each to the needs of domain experts, data analysts, and data scientists. A list of 112 questions operationalizes these needs, and a survey of contemporary tools highlights critical gaps in their coverage.

Drawing on theories of human explanation, human-computer interaction principles, and empirical user research, the authors identify the characteristics that make explanations clear, actionable, and cognitively manageable. The taxonomy embodies these insights, reducing terminological ambiguity and enabling rigorous applicability analysis of existing toolchains. The paper also demonstrates how AI agents built on large language models can orchestrate diverse explanation techniques, transforming technical artifacts into stakeholder-specific explanations and bridging the gap between AI developers and domain experts. Unlike existing surveys and perspective papers, this work combines concepts from multiple fields, lessons learned from real-world projects, and a critical synthesis of the literature into a novel end-to-end perspective on transparency, trust, and responsible AI deployment.

Takeaways, Limitations

Takeaways:
Presents the HXAI framework, which provides explainability across the entire data analysis workflow.
Addresses the needs of diverse stakeholders through a user-centric approach.
Maps the scope of explainable AI and identifies directions for improvement through a list of 112 questions and an analysis of contemporary tools.
Proposes an effective method for generating and delivering explanations using AI agents based on large language models.
Offers a comprehensive perspective that integrates knowledge from multiple fields.
Limitations:
Lack of concrete case studies on the practical implementation and application of the HXAI framework.
The comprehensiveness and validity of the 112-question list require further validation.
Further research is needed on the scalability of the proposed framework and its applicability to diverse AI models.
Lack of evaluation of the HXAI framework's applicability and effectiveness in specific domains.