This paper presents a novel method for auditing the robustness of deep neural networks (DNNs). Existing robustness audits consider image distortions in isolation and therefore fail to capture the compound distortions that arise in real-world environments. We propose a method that uses causal inference to measure how multiple factors in the image processing pipeline affect DNN performance. A causal model explicitly represents the domain-relevant factors and their interactions, and the causal effect of each factor is estimated from observational data alone. This approach links DNN vulnerabilities directly to observable properties of the image pipeline, helping to reduce the risk of unpredictable DNN errors in real-world environments. We validate the effectiveness of the proposed method through experiments on several vision tasks using both natural and rendered images.
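To illustrate the kind of estimate such an audit produces, the sketch below shows backdoor adjustment on simulated observational data. It is not the paper's method: the pipeline factors `jpeg_quality` and `blur_sigma`, the confounder `brightness`, and the data-generating process are all hypothetical assumptions chosen to show how a causal effect can be recovered from observed data when a naive regression is biased.

```python
# Minimal sketch: backdoor adjustment to estimate the causal effect of one
# image-pipeline factor on DNN accuracy from observational data only.
# All variable names and the data-generating process are illustrative
# assumptions, not taken from the paper.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 5000

# Confounder: scene brightness influences both the effective JPEG quality
# and the DNN's accuracy, so it opens a backdoor path.
brightness = rng.normal(0.0, 1.0, n)
jpeg_quality = 0.6 * brightness + rng.normal(0.0, 1.0, n)   # factor of interest
blur_sigma = rng.normal(0.0, 1.0, n)                        # second pipeline factor

# Outcome: per-image DNN accuracy; the true effect of jpeg_quality is -0.5.
accuracy = (-0.5 * jpeg_quality - 0.3 * blur_sigma
            + 0.4 * brightness + rng.normal(0.0, 0.5, n))

# Naive regression on jpeg_quality alone is biased by the confounder.
naive = LinearRegression().fit(jpeg_quality.reshape(-1, 1), accuracy)
print(f"naive effect estimate:    {naive.coef_[0]:+.3f}")   # biased away from -0.5

# Backdoor adjustment: condition on the confounder and the other observed
# factor, so the coefficient on jpeg_quality identifies its causal effect.
X = np.column_stack([jpeg_quality, blur_sigma, brightness])
adjusted = LinearRegression().fit(X, accuracy)
print(f"adjusted effect estimate: {adjusted.coef_[0]:+.3f}")  # close to -0.5
```

In this toy setting the adjustment set is read off the assumed causal graph; in the proposed audit, the analogous role is played by the explicit causal model of the image pipeline, which determines which factors must be conditioned on when estimating each effect.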