Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please credit the source when sharing.

From Explainable to Explained AI: Ideas for Falsifying and Quantifying Explanations

Created by
  • Haebom

Authors

Yoni Schirris, Eric Marcus, Jonas Teuwen, Hugo Horlings, Efstratios Gavves

Outline

This paper proposes a method for explaining deep learning models used in clinical medical image analysis. The authors note that existing techniques such as GradCAM can identify influential features but do not, by themselves, provide explanations. They therefore propose a human-machine-Vision-Language Model (VLM) interaction system for explaining classifiers in histopathology. The system combines multiple-instance learning over whole-slide images with an AI-integrated slide viewer and a standard VLM to quantitatively evaluate the predictive power of candidate explanations. Experiments show that the system can qualitatively verify explanation claims and quantitatively distinguish between competing explanations, offering a practical path from explainable AI to explained AI in digital pathology and beyond.
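As a rough illustration of how the "predictive power" of an explanation could be quantified, the sketch below is not the authors' code: `query_vlm`, the prompt wording, and the data structures are placeholders I am assuming. The idea it demonstrates is to hand each candidate explanation to a VLM as a decision rule and score how often the VLM's resulting predictions agree with the original classifier's labels.

```python
# Hypothetical sketch (not the paper's implementation): score competing textual
# explanations by how well they predict a classifier's labels when a VLM
# applies them as decision rules. `query_vlm(image, prompt)` is a placeholder
# for whatever VLM API is available and is assumed to return free text.

def explanation_accuracy(explanation, patches_with_labels, query_vlm):
    """Fraction of patches on which the VLM, using only `explanation` as a
    decision rule, reproduces the classifier's label (1 positive, 0 negative)."""
    prompt = (
        "A pathology classifier is claimed to follow this rule:\n"
        f"'{explanation}'\n"
        "Applying only this rule to the image, would the classifier predict "
        "the positive class? Answer strictly 'yes' or 'no'."
    )
    correct = 0
    for patch, classifier_label in patches_with_labels:
        answer = query_vlm(patch, prompt).strip().lower()
        vlm_label = 1 if answer.startswith("yes") else 0
        correct += int(vlm_label == classifier_label)
    return correct / max(len(patches_with_labels), 1)


def rank_explanations(explanations, patches_with_labels, query_vlm):
    """Rank candidate explanations by predictive power; an explanation that
    cannot out-predict its competitors is, in this sense, falsified."""
    scored = [(e, explanation_accuracy(e, patches_with_labels, query_vlm))
              for e in explanations]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```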

Takeaways, Limitations

Takeaways:
Presents a novel approach to improving the explainability of deep learning models in medical image analysis.
The human-machine-VLM interaction system enables both qualitative and quantitative evaluation of explanations.
Offers a practical path for moving from explainable AI to explained AI in digital pathology.
Supports whole-slide image analysis through multiple-instance learning (a generic sketch follows at the end of this section).
Open code and prompts support reproducibility and extensibility.
Limitations:
Evaluation of the proposed system may be limited to specific datasets.
The system depends on the underlying VLM, so the VLM's limitations may affect overall performance.
Further research is needed to establish generalizability across other types of medical imaging and deep learning models.
The interpretation and reliability of the generated explanations require further review.
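For the multiple-instance learning setup mentioned above, here is a minimal, generic attention-based MIL sketch (ABMIL-style) in PyTorch, not the authors' model. It assumes the whole-slide image has already been cut into tiles and encoded into feature vectors by some frozen encoder; an attention layer pools the tile features into a slide-level prediction, and the attention weights indicate which tiles most influenced the decision.

```python
# Generic attention-based MIL sketch (ABMIL-style), assuming tiles from a
# whole-slide image have already been encoded into feature vectors.
# Illustration of the general technique only, not the paper's architecture.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, feature_dim=512, hidden_dim=128, num_classes=2):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )
        self.classifier = nn.Linear(feature_dim, num_classes)

    def forward(self, tile_features):
        # tile_features: (num_tiles, feature_dim) -- one bag = one slide
        scores = self.attention(tile_features)              # (num_tiles, 1)
        weights = torch.softmax(scores, dim=0)               # attention over tiles
        slide_embedding = (weights * tile_features).sum(0)   # (feature_dim,)
        logits = self.classifier(slide_embedding)            # (num_classes,)
        return logits, weights.squeeze(-1)

# Example: 1,000 tiles encoded to 512-d features by a frozen encoder.
model = AttentionMIL()
features = torch.randn(1000, 512)
logits, tile_attention = model(features)  # tile_attention highlights influential tiles
```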