This paper presents a systematic comparison of the discriminative power of metrics that measure the similarity between model representations in neuroscience and artificial intelligence. We introduce a quantitative framework for assessing how well these metrics separate model families, spanning diverse architectures (CNNs, Vision Transformers, Swin Transformers, and ConvNeXt) and training regimes (supervised vs. self-supervised). We use three separability measures from signal detection theory (d-prime, the silhouette coefficient, and the area under the receiver operating characteristic curve, ROC-AUC) to evaluate widely used similarity metrics, including RSA, linear predictivity, Procrustes, and soft matching. We find that the more stringent a metric's alignment constraints, the higher its separability.
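As an illustration (not the authors' implementation), the three separability measures can be sketched for two one-dimensional distributions of similarity scores, e.g. within-family versus between-family metric values; the grouping, sample sizes, and score distributions below are hypothetical:

```python
# Sketch of the three separability measures for two 1-D score distributions.
import numpy as np

def dprime(pos, neg):
    """Signal-detection d': mean difference over the pooled standard deviation."""
    pooled = np.sqrt((pos.var(ddof=1) + neg.var(ddof=1)) / 2)
    return (pos.mean() - neg.mean()) / pooled

def roc_auc(pos, neg):
    """Probability that a random positive outranks a random negative
    (the normalised Mann-Whitney U statistic)."""
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

def silhouette(pos, neg):
    """Mean silhouette coefficient for the two-cluster 1-D case."""
    vals = []
    for grp, other in ((pos, neg), (neg, pos)):
        for i in range(len(grp)):
            intra = np.abs(np.delete(grp, i) - grp[i]).mean()  # own cluster
            inter = np.abs(other - grp[i]).mean()              # other cluster
            vals.append((inter - intra) / max(intra, inter))
    return float(np.mean(vals))

# Hypothetical scores: within-family similarities vs. between-family ones.
rng = np.random.default_rng(0)
within = rng.normal(0.9, 0.05, 50)
between = rng.normal(0.5, 0.05, 50)
print(dprime(within, between), roc_auc(within, between), silhouette(within, between))
```

All three measures grow as the two score distributions pull apart, which is what makes them comparable gauges of a similarity metric's discriminative power.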