Explainable AI (XAI) methods often struggle to produce results that are clear and interpretable to users without domain expertise. To address this challenge, this paper proposes Feature-Guided Neighbor Selection (FGNS), a post hoc method that selects representative class examples using both local and global feature importance. In a user study (N=98) on Kanji script classification, FGNS significantly improved non-experts' ability to identify model errors while maintaining reasonable agreement with correct predictions: participants made faster and more accurate judgments than those given traditional k-NN explanations. Quantitative analysis shows that FGNS selects neighbors that better reflect class characteristics rather than merely minimizing feature-space distance, yielding more consistent selection and denser clustering around class prototypes. These results suggest that FGNS could be a step toward more human-centered model evaluation, though further research is needed to bridge the gap between explanation quality and perceived trust.
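The abstract does not specify the selection procedure, but the core idea of choosing neighbors by class-relevant features rather than raw distance can be illustrated with a minimal sketch. The sketch below assumes FGNS-style selection can be approximated by a feature-importance-weighted distance; the function name `fgns_select`, the linear blend parameter `alpha`, and the weighted Euclidean metric are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fgns_select(query, candidates, labels, target_class,
                local_importance, global_importance, k=5, alpha=0.5):
    """Sketch of feature-guided neighbor selection (hypothetical).

    Instead of ranking candidates of the target class by plain
    Euclidean distance, weight each feature by a blend of local
    (per-query) and global (per-class) importance, so selected
    neighbors emphasize class-relevant features.
    """
    # Blend local and global importance into one feature mask
    # (the linear blend with alpha is an assumption for illustration).
    w = alpha * local_importance + (1 - alpha) * global_importance
    w = w / (w.sum() + 1e-12)  # normalize so weights sum to 1

    # Restrict to candidates belonging to the target class.
    idx = np.where(labels == target_class)[0]

    # Importance-weighted squared distance from each candidate to the query.
    diffs = candidates[idx] - query            # shape (m, d)
    dists = (diffs ** 2 * w).sum(axis=1)       # per-feature weighting

    # Return indices of the k closest candidates under the weighted metric.
    return idx[np.argsort(dists)[:k]]
```

In such a setup, `local_importance` might come from a per-instance attribution method (e.g., SHAP values for the query) and `global_importance` from class-averaged attributions, though the paper's actual sources of importance are not specified here.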