Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

MetaExplainer: A Framework to Generate Multi-Type User-Centered Explanations for AI Systems

Created by
  • Haebom

Author

Shruthi Chari, Oshani Seneviratne, Prithwish Chakraborty, Pablo Meyer, Deborah L. McGuinness

Outline

MetaExplainer is a neuro-symbolic framework for generating user-centered explanations. It produces natural language explanations tailored to user questions through a three-stage process: question decomposition using an LLM, system-recommendation generation using model explanation methods, and summarization of the explanation outputs. An explanation ontology guides both the LLM and the explanation methods, and the framework supports multiple explanation types (contrastive, counterfactual, evidence-based, case-based, and data-based). In an evaluation on the PIMA Indian diabetes dataset, it achieved a question-reconstruction F1-score of 59.06%, a model-explanation fidelity of 70%, and 67% context utilization in natural language synthesis. A user study confirmed the creativity and comprehensiveness of the generated explanations.
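To make the three-stage pipeline concrete, here is a minimal Python sketch. It is not the authors' implementation: the ONTOLOGY_METHODS mapping, the explanation-method names, and the keyword stubs standing in for the LLM calls are all illustrative assumptions.

```python
from dataclasses import dataclass
from enum import Enum


# The five explanation types listed in the summary.
class ExplanationType(Enum):
    CONTRASTIVE = "contrastive"
    COUNTERFACTUAL = "counterfactual"
    EVIDENCE_BASED = "evidence-based"
    CASE_BASED = "case-based"
    DATA_BASED = "data-based"


# Hypothetical stand-in for the explanation ontology: maps each
# explanation type to a model explanation method that can serve it.
ONTOLOGY_METHODS = {
    ExplanationType.COUNTERFACTUAL: "counterfactual generator",
    ExplanationType.CONTRASTIVE: "contrastive explainer",
    ExplanationType.EVIDENCE_BASED: "feature-attribution method",
    ExplanationType.CASE_BASED: "nearest-neighbor retrieval",
    ExplanationType.DATA_BASED: "dataset summary statistics",
}


@dataclass
class DecomposedQuestion:
    explanation_type: ExplanationType
    machine_query: str


def decompose_question(user_question: str) -> DecomposedQuestion:
    """Stage 1: an LLM would reframe the free-form question into a
    typed, machine-interpretable query. A keyword stub is used here."""
    q = user_question.lower()
    if "what if" in q or "instead" in q:
        etype = ExplanationType.COUNTERFACTUAL
    elif "why not" in q:
        etype = ExplanationType.CONTRASTIVE
    else:
        etype = ExplanationType.EVIDENCE_BASED
    return DecomposedQuestion(etype, machine_query=user_question)


def run_explainer(question: DecomposedQuestion) -> dict:
    """Stage 2: dispatch to the explanation method the ontology
    recommends and return its (mocked) raw output."""
    method = ONTOLOGY_METHODS[question.explanation_type]
    return {"method": method, "raw_output": f"<output of {method}>"}


def summarize(question: DecomposedQuestion, result: dict) -> str:
    """Stage 3: an LLM would synthesize a natural language answer
    from the explainer output; a simple template is used here."""
    return (f"A {question.explanation_type.value} explanation "
            f"(via {result['method']}): {result['raw_output']}")


if __name__ == "__main__":
    q = decompose_question("What if the patient's glucose were lower?")
    print(summarize(q, run_explainer(q)))
```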

Takeaways, Limitations

Takeaways:
  • Presents a neuro-symbolic framework for generating user-centered explanations, contributing to more trustworthy AI systems.
  • Leverages LLMs and an explanation ontology to adapt to different explanation types and user questions.
  • Experimentally demonstrates solid performance on question reconstruction, model-explanation fidelity, and context utilization in natural language synthesis.
  • Suggests applicability to a variety of domains.
Limitations:
  • Evaluated only on the PIMA Indian diabetes dataset, so generalization to other datasets and applications requires further study.
  • Provides little detail on the design and construction of the explanation ontology.
  • Lacks an analysis of how performance varies with the choice of LLM and explanation method.