Developing reliable AI requires understanding the internal computations of models. Mechanistic Interpretability (MI) aims to uncover the algorithmic mechanisms underlying model behavior. This paper argues that interpretability methods such as circuit discovery suffer from variance and robustness issues because they rely on statistical estimation. Through a systematic stability analysis of EAP-IG, a state-of-the-art circuit discovery method, we evaluate its behavior under controlled perturbations, including input resampling, prompt reconfiguration, hyperparameter variation, and noise injection within the causal analysis. Across multiple models and tasks, EAP-IG exhibits high structural variance and strong hyperparameter sensitivity, calling into question the robustness of the circuits it recovers. Based on these findings, we recommend routinely reporting stability metrics to strengthen the scientific rigor of interpretability studies.
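To illustrate the kind of stability metric we recommend reporting, the minimal sketch below computes the mean pairwise Jaccard overlap between the edge sets of circuits recovered across repeated runs (e.g., with resampled inputs or perturbed hyperparameters). The edge-naming scheme and the choice of Jaccard similarity here are illustrative assumptions, not the exact metric or representation used in the paper.

```python
from itertools import combinations


def jaccard(a: set, b: set) -> float:
    """Jaccard overlap between two edge sets; 1.0 means identical circuits."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)


def circuit_stability(edge_sets: list[set]) -> float:
    """Mean pairwise Jaccard similarity across circuits recovered from
    repeated runs of a circuit-discovery method under perturbation."""
    pairs = list(combinations(edge_sets, 2))
    if not pairs:
        return 1.0
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)


# Hypothetical example: three runs returning circuits as sets of edge identifiers.
runs = [
    {"a0.h1->m3", "m3->a5.h7", "a5.h7->logits"},
    {"a0.h1->m3", "a5.h7->logits", "a2.h4->m3"},
    {"a0.h1->m3", "m3->a5.h7", "a2.h4->m3"},
]
print(f"Mean pairwise Jaccard: {circuit_stability(runs):.2f}")
```

A score near 1.0 indicates that repeated runs recover essentially the same circuit, whereas lower values quantify the structural variance the paper reports.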