Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

On Generating Monolithic and Model Reconciling Explanations in Probabilistic Scenarios

Created by
  • Haebom

Author

Stylianos Loukas Vasileiou, William Yeoh, Alessandro Previti, Tran Cao Son

Outline

This paper proposes a novel framework for explaining the decisions of AI systems in uncertain environments characterized by incomplete information and probabilistic models. It focuses on generating two types of explanations: monolithic explanations, which justify the explanandum on their own, and model reconciling explanations, which take the knowledge of the explanation recipient into account. For monolithic explanations, the framework incorporates uncertainty by using probabilistic logic to increase the probability of the explanandum. For model reconciling explanations, it extends a logic-based variant of the model reconciliation problem to handle probabilistic human models, seeking explanations that increase the probability of the explanandum while minimizing conflict with the probabilistic human model. To assess explanation quality, the paper introduces two quantitative metrics, explanatory gain and explanatory power. It also presents algorithms that compute explanations efficiently by exploiting the duality between minimal correction sets (MCSes) and minimal unsatisfiable sets (MUSes). Extensive experiments on several benchmarks demonstrate the efficiency and scalability of the approach for generating explanations under uncertainty.
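To make the probabilistic reading concrete, here is a minimal sketch of a monolithic explanation raising the probability of an explanandum in a toy weighted-rule knowledge base, with probabilities computed by brute-force weighted model counting. The rules, the weighting scheme, and the "gain" printed at the end are illustrative assumptions; they do not reproduce the paper's formal definitions of explanatory gain and explanatory power.

```python
from itertools import product

# Propositional variables of the toy domain.
VARS = ["rain", "sprinkler", "wet_grass"]

# Soft rules: (constraint over an assignment, probability that the rule holds).
RULES = [
    (lambda m: (not m["rain"]) or m["wet_grass"], 0.9),       # rain -> wet_grass
    (lambda m: (not m["sprinkler"]) or m["wet_grass"], 0.8),  # sprinkler -> wet_grass
    (lambda m: m["rain"], 0.3),                               # prior bias on rain
    (lambda m: m["sprinkler"], 0.4),                          # prior bias on sprinkler
]

def weight(model):
    """Weight of one assignment: product of rule probabilities,
    or their complements when the rule is violated."""
    w = 1.0
    for rule, p in RULES:
        w *= p if rule(model) else (1.0 - p)
    return w

def probability(query, given=lambda m: True):
    """P(query | given) by brute-force weighted model counting."""
    num = den = 0.0
    for values in product([False, True], repeat=len(VARS)):
        model = dict(zip(VARS, values))
        if given(model):
            w = weight(model)
            den += w
            if query(model):
                num += w
    return num / den if den else 0.0

explanandum = lambda m: m["wet_grass"]
explanation = lambda m: m["rain"]  # a candidate monolithic explanation

prior = probability(explanandum)
posterior = probability(explanandum, given=explanation)
print(f"P(explanandum)               = {prior:.3f}")
print(f"P(explanandum | explanation) = {posterior:.3f}")
print(f"increase in probability      = {posterior - prior:.3f}")
```

In this toy model, conditioning on rain raises the probability of wet_grass, which is exactly the sense in which a monolithic explanation "increases the probability of the explanandum"; the paper's metrics quantify this kind of increase.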

Takeaways, Limitations

Takeaways:
  • Presents a novel framework for improving the explainability of AI system decisions in uncertain environments.
  • Broadens applicability by providing two types of explanations: monolithic explanations and model reconciling explanations.
  • Develops efficient algorithms that combine probabilistic logic with the duality between minimal correction sets and minimal unsatisfiable sets (see the sketch below).
  • Introduces metrics for quantitatively evaluating the quality of explanations.
Limitations:
  • Further research is needed to establish the practical applicability of the proposed framework.
  • Generalizability to other types of uncertainty and to more complex settings remains to be verified.
  • The accuracy and reliability of the probabilistic human models require careful consideration.
  • The interpretability and user-friendliness of the generated explanations need further study.
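The algorithmic takeaway above rests on a classical result: the minimal correction sets (MCSes) of an unsatisfiable formula are exactly the minimal hitting sets of its minimal unsatisfiable sets (MUSes), and vice versa. The sketch below enumerates both by brute force on a four-clause CNF and checks one direction of the duality; it is a toy illustration under these assumptions, not the authors' algorithm, and a real implementation would replace the exhaustive checks with SAT-solver calls.

```python
from itertools import chain, combinations, product

# A tiny unsatisfiable CNF; positive int = literal, negative int = its negation.
CLAUSES = [frozenset({1}), frozenset({-1}), frozenset({1, 2}), frozenset({-2})]
VARS = sorted({abs(lit) for clause in CLAUSES for lit in clause})

def satisfiable(clauses):
    """Exhaustive SAT check; fine for toy instances, a solver call in practice."""
    clauses = list(clauses)
    for values in product([False, True], repeat=len(VARS)):
        assign = dict(zip(VARS, values))
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def subsets(items):
    items = list(items)
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

# MUS: an unsatisfiable subset all of whose proper subsets are satisfiable.
muses = [set(s) for s in subsets(CLAUSES)
         if not satisfiable(s)
         and all(satisfiable(set(s) - {c}) for c in s)]

# MCS: a minimal set of clauses whose removal restores satisfiability.
everything = set(CLAUSES)
mcses = [set(s) for s in subsets(CLAUSES)
         if satisfiable(everything - set(s))
         and all(not satisfiable(everything - (set(s) - {c})) for c in s)]

# Duality (one direction): every MCS intersects, i.e. "hits", every MUS.
assert all(mcs & mus for mcs in mcses for mus in muses)
print("MUSes:", muses)
print("MCSes:", mcses)
```

Each MCS identifies a smallest set of statements whose retraction restores consistency, which is the kind of structure the paper's algorithms exploit when searching for explanations under probabilistic weights.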