Daily Arxiv

This page organizes papers related to artificial intelligence published around the world.
This page is summarized using Google Gemini and is operated on a non-profit basis.
The copyright of the paper belongs to the authors and their institutions. When sharing, please cite the source.

On the Effectiveness of Methods and Metrics for Explainable AI in Remote Sensing Image Scene Classification

Created by
  • Haebom

Author

Jonas Klotz, Tom Burgert, Begüm Demir

Outline

This study investigates whether explainable artificial intelligence (xAI) methods and evaluation metrics developed for computer vision (CV) can be applied directly to scene classification in remote sensing (RS). To this end, the authors conducted methodological and experimental analyses, applying five feature attribution methods (Occlusion, LIME, GradCAM, LRP, and DeepLIFT) and ten explanation metrics to three RS datasets.

Takeaways, Limitations

Takeaways:
We provide guidance on the selection of explanation methods, metrics, and hyperparameters for RS image scene classification.
Robustness metrics and randomization metrics proved stable across experiments.
Limitations:
Perturbation-based methods (Occlusion, LIME) are highly sensitive to the choice of perturbation baseline and to the spatial characteristics of RS scenes.
Gradient-based methods such as GradCAM struggle when multiple labels are present in the same image.
Relevance propagation methods such as LRP can distribute relevance disproportionately to the spatial extent of the classes.
Faithfulness metrics share the same problems as perturbation-based methods.
Localization metrics and complexity metrics are unreliable for spatially extensive classes.