Effectively explaining the decisions of black-box machine learning models is crucial for the responsible deployment of AI systems, and this paper presents I-CEE, a framework for user-centered explainable AI (XAI). To explain the decisions of image classification models, I-CEE provides users with a subset of the training data (example images), the corresponding local explanations, and the model's decisions. Unlike prior work, I-CEE models the informativeness of the example images as a function of user expertise, so different users receive different examples. Through simulations and a user study with 100 participants, we demonstrate that I-CEE improves users' ability to accurately predict (simulate) the model's decisions.
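To make the expertise-dependent selection concrete, the following is a minimal toy sketch (not the paper's actual algorithm): each user is simulated by a hypothetical internal prediction rule, and candidate examples are scored as more informative when the simulated user currently mispredicts the model's decision on them, so users with different expertise receive different example sets. All names and the linear user model here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 20 candidate example images, each summarized by a 5-d feature vector,
# together with the black-box model's decision (label) for each one.
candidates = rng.normal(size=(20, 5))
model_labels = rng.integers(0, 2, size=20)

def user_predicts(weights, x):
    """Simulated user: guesses the model's label via a simple linear rule
    (a stand-in for the user's current mental model of the classifier)."""
    return int(x @ weights > 0)

def informativeness(weights, x, y):
    """Illustrative informativeness score: an example the simulated user
    mispredicts is more informative to show than one they already get right."""
    return 1.0 if user_predicts(weights, x) != y else 0.0

def select_examples(weights, k=3):
    """Pick the k candidate examples with the highest informativeness
    for this particular (simulated) user."""
    scores = [informativeness(weights, x, y)
              for x, y in zip(candidates, model_labels)]
    order = np.argsort(scores)[::-1]  # highest-scoring examples first
    return order[:k]

# Two users with different expertise (different internal rules)
# generally end up being shown different examples.
novice = rng.normal(size=5)
expert = rng.normal(size=5)
print(select_examples(novice))
print(select_examples(expert))
```

In the actual framework the user model and informativeness measure are learned rather than hand-coded, but the selection principle sketched above is the same: tailor the example subset to what each user does not yet know.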