Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Expert-Guided Explainable Few-Shot Learning for Medical Image Diagnosis

Created by
  • Haebom

Author

Ifrat Ikhtear Uddin, Longwei Wang, KC Santosh

Outline

Medical image analysis suffers from limited expert-annotated data, hindering model generalization and clinical applicability. This study proposes an expert-guided, explainable few-shot learning framework that integrates radiologist-provided regions of interest (ROIs) into model training to simultaneously improve classification performance and interpretability. Grad-CAM is used for spatial attention supervision, introducing an explanation loss based on Dice similarity that focuses the model's attention on diagnostically relevant regions during training. This explanation loss is jointly optimized with a standard prototypical network objective, encouraging the model to attend to clinically relevant features even under data constraints. The framework is evaluated on two datasets, BraTS (brain MRI) and VinDr-CXR (chest X-ray), improving accuracy from 77.09% to 83.61% on BraTS and from 54.33% to 73.29% on VinDr-CXR. Grad-CAM visualizations confirm that expert-guided training consistently concentrates attention on diagnostically relevant regions, improving both predictive reliability and clinical trustworthiness. These results demonstrate the effectiveness of expert-guided attention supervision for bridging the gap between performance and interpretability in few-shot medical imaging diagnosis.
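
The joint objective described above can be sketched roughly as follows (PyTorch). This is a minimal illustration, not the authors' implementation: it assumes the Grad-CAM attention maps have already been computed and normalized to [0, 1], and the function names and the weighting factor explanation_weight are illustrative assumptions.

import torch
import torch.nn.functional as F

def prototypical_loss(support_emb, support_labels, query_emb, query_labels, n_classes):
    # Class prototypes: mean embedding of each class's support examples.
    prototypes = torch.stack([
        support_emb[support_labels == c].mean(dim=0) for c in range(n_classes)
    ])
    # Negative squared Euclidean distances to the prototypes serve as logits.
    logits = -torch.cdist(query_emb, prototypes) ** 2
    return F.cross_entropy(logits, query_labels)

def dice_explanation_loss(attention_map, roi_mask, eps=1e-6):
    # attention_map, roi_mask: tensors of shape (B, H, W) with values in [0, 1].
    # 1 - Dice similarity: small when the attention overlaps the expert ROI.
    intersection = (attention_map * roi_mask).sum(dim=(1, 2))
    total = attention_map.sum(dim=(1, 2)) + roi_mask.sum(dim=(1, 2))
    dice = (2.0 * intersection + eps) / (total + eps)
    return (1.0 - dice).mean()

def total_loss(proto_loss, expl_loss, explanation_weight=0.5):
    # Joint objective: prototypical-network loss plus weighted explanation loss.
    return proto_loss + explanation_weight * expl_loss

In a training episode, both terms would be computed on the same query images (the Grad-CAM maps taken from the encoder's last convolutional layer) and summed before the backward pass.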

Takeaways, Limitations

Takeaways:
We demonstrate that an expert-guided, explainable few-shot learning framework can simultaneously improve the performance and interpretability of medical image analysis.
Leveraging Grad-CAM together with a Dice similarity-based explanation loss effectively focuses the model's attention on diagnostically relevant regions.
High accuracy can be achieved even under limited data conditions.
We verify the effectiveness of the proposed method through experimental results on the BraTS and VinDr-CXR datasets.
Limitations:
Performance may be affected by the quality of the expert-provided ROIs.
Additional evaluation of generalization performance across various medical image types and diseases is needed.
The cost and time required for expert annotation remain an open practical issue.