Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Feature-Guided Neighbor Selection for Non-Expert Evaluation of Model Predictions

Created by
  • Haebom

Author

Courtney Ford, Mark T. Keane

Outline

Explainable AI (XAI) methods often struggle to produce clear, interpretable outputs for users without domain expertise. To address this, the paper proposes Feature-Guided Neighbor Selection (FGNS), a post hoc method that selects class-representative examples using both local and global feature importance. In a user study (N = 98) on Kanji script classification, FGNS significantly improved non-experts' ability to identify model errors while maintaining appropriate agreement with correct predictions; participants made faster and more accurate judgments than those shown traditional k-NN explanations. Quantitative analysis shows that FGNS selects neighbors that better reflect class-level characteristics rather than merely minimizing feature-space distance, yielding more consistent selections and tighter clustering around class prototypes. The authors conclude that FGNS is a step toward more human-centered model evaluation, though further work is needed to close the gap between explanation quality and perceived trust.
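The core idea, picking neighbors by importance-weighted similarity rather than raw feature-space distance, can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a single precomputed importance vector (the paper combines local and global importance), and the function name `fgns_neighbors` is hypothetical.

```python
import numpy as np

def fgns_neighbors(X, y, importance, query, target_class, k=3):
    """Select k representative neighbors of `target_class` for `query`,
    weighting each feature's distance by its importance.

    Simplified sketch: `importance` stands in for the combined
    local/global feature-importance scores used in the paper.
    """
    w = importance / importance.sum()      # normalize importance weights
    candidates = X[y == target_class]      # restrict to the target class
    # importance-weighted Euclidean distance from each candidate to the query
    d = np.sqrt((((candidates - query) ** 2) * w).sum(axis=1))
    idx = np.argsort(d)[:k]                # k candidates with smallest weighted distance
    return candidates[idx], d[idx]
```

With uniform weights this reduces to ordinary k-NN within a class; non-uniform weights let important features dominate the selection, which is what pushes the chosen neighbors toward class prototypes.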

Takeaways, Limitations

Takeaways:
FGNS is an effective XAI method for improving non-experts' ability to identify model errors.
It supports faster and more accurate decision-making than existing k-NN explanations.
It yields more consistent results by selecting neighbors that better reflect class characteristics.
It points to a new direction for human-centered model evaluation.
Limitations:
Further research is needed to bridge the gap between explanation quality and perceived trust.