Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

I-CEE: Tailoring Explanations of Image Classification Models to User Expertise

Created by
  • Haebom

Author

Yao Rong, Peizhu Qian, Vaibhav Unhelkar, Enkelejda Kasneci

Outline

This paper argues that effectively explaining the decisions of black-box machine learning models is crucial for the responsible deployment of AI systems, and presents I-CEE, a framework for user-centered explainable AI (XAI). I-CEE explains an image classification model's decisions by providing users with a subset of the training data (example images), their local explanations, and the model's decisions on them. Unlike prior work, I-CEE models the informativeness of example images based on the user's expertise, so different users receive different examples. Through simulations and a study with 100 participants, the authors show that this improves users' ability to predict the model's decisions (simulatability).
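The core idea of expertise-based example selection can be sketched as follows. This is an illustrative simplification, not the paper's actual algorithm: it assumes a per-class estimate of how likely the user is to correctly simulate the model, and picks the examples where that likelihood is lowest. All names (`select_examples`, `expertise`) are hypothetical.

```python
# Hedged sketch of expertise-based example selection (not I-CEE's method):
# show the user the k examples whose model decisions they are least
# likely to predict correctly, given an expertise estimate per class.

def select_examples(examples, expertise, k=3):
    """examples: list of (image_id, predicted_class) pairs.
    expertise: dict mapping class -> probability the user correctly
    simulates the model on that class (higher = more familiar).
    Returns the k image_ids expected to be most informative."""
    # Informativeness here = 1 - expected simulatability for the class.
    scored = [(1.0 - expertise.get(cls, 0.0), img) for img, cls in examples]
    scored.sort(key=lambda t: (-t[0], t[1]))  # most informative first
    return [img for _, img in scored[:k]]

examples = [("img_a", "cat"), ("img_b", "dog"), ("img_c", "plane")]
expertise = {"cat": 0.9, "dog": 0.2, "plane": 0.5}
chosen = select_examples(examples, expertise, k=2)
```

Under this toy model, a user who is confident about cats but unfamiliar with dogs would be shown the dog example first, which is the intuition behind tailoring explanations to expertise.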

Takeaways, Limitations

Takeaways:
We present a user-centered XAI approach, demonstrating that explanations tailored to the user's expertise improve both understanding and simulatability.
We experimentally demonstrate that the I-CEE framework improves users' understanding of the model and enables them to better predict the model's decisions.
It overcomes the limitations of the existing "one-size-fits-all" approach to XAI and underscores the importance of user-tailored explanations.
Limitations:
Currently, it has only been applied to image classification models, and its generalizability to other types of models requires further study.
Further research may be needed on how to accurately assess user expertise.
The study with 100 participants is limited in scale; further studies with larger and more diverse populations may be needed.