Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please credit the source when sharing.

ITL-LIME: Instance-Based Transfer Learning for Enhancing Local Explanations in Low-Resource Data Settings

Created by
  • Haebom

Authors

Rehan Raza, Guanjin Wang, Kok Wai Wong, Hamid Laga, Marco Fisichella

Outline

To address the reliability and stability issues of explainable artificial intelligence (XAI) methods in data-poor environments, this paper proposes ITL-LIME, which integrates instance-based transfer learning into the LIME framework. To mitigate the locality and instability problems caused by random perturbation and sampling in conventional LIME, ITL-LIME leverages real instances from a related source domain to support explanations in the target domain. The source-domain data are clustered, relevant instances are retrieved from the cluster whose prototype is most similar to the target instance, and these are combined with the target instance's real neighbors. A contrastive learning-based encoder assigns weights to these instances, and a surrogate model is trained on the weighted source and target instances to generate the explanation.
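
The sketch below illustrates the pipeline described above under simplifying assumptions; it is not the authors' implementation. The contrastive encoder is replaced by a user-supplied embedding function (identity by default), cluster prototypes are taken to be k-means centroids, instance weights use an exponential kernel over embedding distance, and a ridge regressor stands in for the linear surrogate. The function name `itl_lime_explain` and its parameters are hypothetical.

```python
# Minimal sketch of the ITL-LIME idea (assumptions noted above; not the authors' code).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge


def itl_lime_explain(target_x, target_neighbors, source_X, black_box_predict,
                     n_clusters=5, kernel_width=1.0, embed=lambda x: x):
    """Explain black_box_predict at target_x using real source-domain instances."""
    # 1. Cluster the source domain and pick the cluster whose prototype
    #    (centroid) is closest to the target instance.
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(source_X)
    proto_dists = np.linalg.norm(km.cluster_centers_ - target_x, axis=1)
    retrieved = source_X[km.labels_ == int(np.argmin(proto_dists))]

    # 2. Combine retrieved source instances with the target's real neighbors
    #    (replacing LIME's random perturbations).
    local_X = np.vstack([retrieved, target_neighbors])

    # 3. Weight instances by proximity to the target in the embedding space
    #    (the paper uses a contrastive encoder; identity embedding here).
    dists = np.linalg.norm(embed(local_X) - embed(target_x[None, :]), axis=1)
    weights = np.exp(-(dists ** 2) / (kernel_width ** 2))

    # 4. Fit a weighted linear surrogate to the black-box predictions.
    y = black_box_predict(local_X)
    surrogate = Ridge(alpha=1.0).fit(local_X, y, sample_weight=weights)
    return surrogate.coef_  # feature attributions for target_x
```

In this sketch, swapping the identity `embed` for a trained contrastive encoder and replacing the kernel with the paper's weighting scheme would bring it closer to the described method.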

Takeaways, Limitations

Takeaways:
  • Improves the reliability and stability of LIME explanations in data-poor environments.
  • Provides a novel LIME framework that utilizes instance-based transfer learning.
  • Improves explanation accuracy through contrastive learning-based instance weighting.
  • Generates more realistic explanations by perturbing with real data.
Limitations:
  • Depends on the similarity between the source and target domains.
  • Performance is sensitive to the clustering and prototype selection methods.
  • Depends on the quality of the contrastive learning-based encoder.
  • Generalization to diverse data types and models still needs to be verified.