Daily Arxiv

This page organizes papers related to artificial intelligence published around the world.
Summaries on this page are generated with Google Gemini, and the page is operated on a non-profit basis.
The copyright of each paper belongs to its authors and their institutions; when sharing, please cite the source.

Measuring the Measures: Discriminative Capacity of Representational Similarity Metrics Across Model Families

Created by
  • Haebom

Authors

Jialin Wu, Shreya Saha, Yiqing Bo, Meenakshi Khosla

Outline

This paper presents a systematic comparison of the discriminative power of metrics used to measure the similarity between model representations in neuroscience and artificial intelligence. The authors introduce a quantitative framework for assessing discriminative power across model families, spanning diverse architectures (CNNs, Vision Transformers, Swin Transformers, and ConvNeXt) and training regimes (supervised vs. self-supervised). Using three separability measures from signal detection theory—d′ (d-prime), the silhouette coefficient, and the area under the receiver operating characteristic curve (ROC-AUC)—they evaluate widely used similarity metrics, including RSA, linear prediction, Procrustes alignment, and soft matching. The main finding is that metrics imposing stricter alignment constraints achieve higher separability between model families.
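To make the evaluation concrete, here is a minimal sketch of how separability of a similarity metric might be scored, assuming we already have pairwise similarity scores split into "within-family" pairs (models from the same architecture family) and "between-family" pairs. The d′ and ROC-AUC formulas are standard signal-detection quantities; the function and variable names are illustrative and not taken from the paper's code.

```python
import math

def d_prime(within, between):
    """Signal-detection d': distance between the two score
    distributions in pooled-standard-deviation units."""
    mw = sum(within) / len(within)
    mb = sum(between) / len(between)
    vw = sum((x - mw) ** 2 for x in within) / (len(within) - 1)
    vb = sum((x - mb) ** 2 for x in between) / (len(between) - 1)
    return (mw - mb) / math.sqrt(0.5 * (vw + vb))

def roc_auc(within, between):
    """Probability that a within-family pair scores higher than a
    between-family pair (Mann-Whitney formulation of ROC-AUC)."""
    wins = sum(1.0 if w > b else 0.5 if w == b else 0.0
               for w in within for b in between)
    return wins / (len(within) * len(between))

# Toy scores: for a metric with good discriminative power,
# within-family similarities sit above between-family ones.
within = [0.9, 0.85, 0.8, 0.88]
between = [0.5, 0.6, 0.55, 0.4]
print(d_prime(within, between))
print(roc_auc(within, between))  # 1.0: perfect separation on this toy data
```

A metric whose within-family scores fully dominate its between-family scores reaches ROC-AUC of 1.0; overlapping distributions pull it toward 0.5 (chance).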

Takeaways, Limitations

Takeaways:
This is the first systematic comparison of the relative sensitivity of metrics for measuring representational similarity between models.
Soft matching showed the highest separability, followed by Procrustes alignment and linear prediction.
Even non-fitting methods such as RSA achieved high separability.
The results provide guidance on metric selection for large-scale model-to-model and model-to-brain comparisons.
Limitations:
The paper does not explicitly discuss its own limitations.