This paper addresses the impact of image quality on the performance of deep neural networks (DNNs), which are widely known to be sensitive to changes in imaging conditions. Traditional image quality assessment (IQA) measures quality in terms of its agreement with human perceptual judgments, whereas what is needed here are metrics that respond to imaging conditions and align with DNN sensitivity. We first ask how informative existing IQA metrics are about DNN performance, and demonstrate theoretically and experimentally that they are weak predictors of DNN performance on image classification. Using a causal framework, we then develop metrics that correlate strongly with DNN performance, enabling effective estimation of the quality distribution of large-scale image datasets for target vision tasks.
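To make the kind of comparison referred to above concrete, the sketch below measures the rank correlation between a conventional full-reference IQA metric and a classifier's accuracy as images are progressively degraded. This is an illustrative protocol only, not the paper's experimental design or its causal framework: PSNR stands in for "existing IQA metrics", a pretrained ResNet-50 stands in for the target DNN, and the dataset path, noise levels, and use of additive Gaussian noise are all hypothetical choices (assumes torchvision >= 0.13).

```python
# Illustrative sketch (not the paper's method): how well does a conventional
# IQA metric (PSNR, as a stand-in) predict a classifier's accuracy under
# increasing degradation? Dataset path and noise levels are hypothetical.
import numpy as np
import torch
import torchvision as tv
from scipy.stats import spearmanr
from skimage.metrics import peak_signal_noise_ratio as psnr

device = "cuda" if torch.cuda.is_available() else "cpu"
model = tv.models.resnet50(weights=tv.models.ResNet50_Weights.DEFAULT).to(device).eval()

# Fixed-size tensors in [0, 1]; ImageNet normalization is applied after degradation.
transform = tv.transforms.Compose([tv.transforms.Resize((224, 224)),
                                   tv.transforms.ToTensor()])
normalize = tv.transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                    std=[0.229, 0.224, 0.225])
dataset = tv.datasets.ImageFolder("path/to/eval_set", transform=transform)  # hypothetical path
loader = torch.utils.data.DataLoader(dataset, batch_size=32)

noise_levels = [0.02, 0.05, 0.10, 0.20, 0.40]  # std of additive Gaussian noise
mean_psnr, accuracy = [], []

with torch.no_grad():
    for sigma in noise_levels:
        scores, correct, total = [], 0, 0
        for clean, labels in loader:
            noisy = (clean + sigma * torch.randn_like(clean)).clamp(0.0, 1.0)
            # IQA score: PSNR of each degraded image against its clean reference
            scores += [psnr(c.numpy(), n.numpy(), data_range=1.0)
                       for c, n in zip(clean, noisy)]
            # DNN performance: top-1 accuracy on the same degraded images
            logits = model(normalize(noisy).to(device))
            correct += (logits.argmax(1).cpu() == labels).sum().item()
            total += labels.numel()
        mean_psnr.append(float(np.mean(scores)))
        accuracy.append(correct / total)

# Rank correlation between the IQA metric and DNN accuracy across degradation levels
rho, p = spearmanr(mean_psnr, accuracy)
print(f"Spearman rho(PSNR, accuracy) = {rho:.3f} (p = {p:.3g})")
```

In this style of experiment, a weak or unstable correlation between the IQA score and accuracy is what would motivate metrics designed around DNN sensitivity rather than human perception.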