Daily Arxiv

This page curates papers related to artificial intelligence published around the world.
The summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; when sharing, please cite the source.

Say My Name: a Model's Bias Discovery Framework

Created by
  • Haebom

Author

Massimiliano Ciranni, Luca Molinaro, Carlo Alberto Barbano, Attilio Fiandrotti, Vittorio Murino, Vito Paolo Pastore, Enzo Tartaglione

Outline

This paper introduces "Say My Name" (SaMyNa), a text-based tool for identifying biases in deep learning models. SaMyNa semantically analyzes the biases a model has learned and describes them in natural language, making them explainable even to non-experts and providing semantic information about biased features that conventional debiasing methods do not offer. The tool can be applied both during training and in post-hoc validation, and it disentangles task-related information from bias, making it useful for model diagnosis.
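The summary does not include code, so the snippet below is only a minimal, hypothetical sketch of the general idea of text-based bias naming: ranking candidate concept words against a set of images the classifier gets wrong, using an off-the-shelf CLIP model. It is not SaMyNa's actual pipeline; the candidate vocabulary, checkpoint name, and the `misclassified_images` input are illustrative assumptions.

```python
# Hypothetical sketch (not the SaMyNa pipeline): name a suspected bias by
# ranking candidate concept words against images the classifier misclassifies,
# using an off-the-shelf CLIP model from Hugging Face transformers.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# The candidate concepts are an assumption; in practice they could come from
# a much larger vocabulary or from captions of the training data.
candidate_concepts = ["water", "forest", "pavement", "snow", "indoor scene"]

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def name_bias(misclassified_images: list[Image.Image]) -> str:
    """Return the candidate concept most associated with the error set."""
    inputs = processor(
        text=[f"a photo of {c}" for c in candidate_concepts],
        images=misclassified_images,
        return_tensors="pt",
        padding=True,
    )
    with torch.no_grad():
        outputs = model(**inputs)
    # logits_per_image: (num_images, num_concepts); averaging over the error
    # set surfaces the concept these images share, i.e. a likely bias name.
    scores = outputs.logits_per_image.softmax(dim=-1).mean(dim=0)
    return candidate_concepts[scores.argmax().item()]
```

Under these assumptions, the returned keyword would serve as a human-readable label for the spurious feature the model appears to rely on.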

Takeaways, Limitations

Takeaways:
Presents the first tool that semantically identifies and names the biases learned by deep learning models.
Explains a model's bias in a way that even non-experts can understand.
Can be applied flexibly both during training and in post-hoc validation.
Demonstrates effective performance in bias detection and debiasing.
Broadly applicable as a model diagnostic tool.
Limitations:
Lacks a detailed description of the specific debiasing methodology.
Further research is needed to determine applicability in real-world settings beyond benchmark evaluations.
Further validation of the tool's accuracy and effectiveness is needed.