Language models (LMs) have emerged as a powerful source of evidence for linguists seeking to develop syntactic theories. This paper argues that applying causal interpretability methods to LMs can significantly enhance the value of this evidence by characterizing the abstract mechanisms that LMs learn to use. We conduct experiments focusing on filler-gap dependency structures in English (e.g., questions, relative clauses). Using experiments based on distributed interchange interventions, we demonstrate that LMs converge on a similar abstract analysis of these structures. This analysis also reveals previously overlooked factors related to frequency, filler type, and surrounding context, potentially motivating revisions to standard linguistic theory. Overall, our findings suggest that mechanistic, internal analysis of LMs can advance linguistic theory.
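To make the intervention methodology concrete, the sketch below shows a minimal interchange (activation-patching) experiment: run the model on a "source" sentence, cache a hidden state at a chosen layer and position, then re-run a "base" sentence with that state swapped in and inspect how the next-token prediction shifts. This is only an illustration of the general technique, not the paper's exact setup; the model (GPT-2), the intervention site (`LAYER`, `POS`), and the example sentences are assumptions chosen for brevity.

```python
# Minimal sketch of an interchange intervention on a filler-gap example.
# Assumptions: GPT-2 via Hugging Face transformers; layer/position chosen
# arbitrarily; a full study would search over sites (e.g., with DAS).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

LAYER, POS = 6, -1  # hypothetical intervention site: block 6 output, last token

def hidden_at(text: str) -> torch.Tensor:
    """Cache the residual-stream state produced by block LAYER at position POS."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, output_hidden_states=True)
    # hidden_states[0] is the embedding layer, so block LAYER's output is index LAYER + 1
    return out.hidden_states[LAYER + 1][0, POS].detach()

def patched_logits(base_text: str, patch_vec: torch.Tensor) -> torch.Tensor:
    """Run base_text with the cached source state swapped in at (LAYER, POS)."""
    ids = tok(base_text, return_tensors="pt").input_ids

    def hook(module, inputs, output):
        hs = output[0].clone()
        hs[0, POS] = patch_vec          # the interchange: overwrite the base state
        return (hs,) + output[1:]

    handle = model.transformer.h[LAYER].register_forward_hook(hook)
    try:
        with torch.no_grad():
            logits = model(ids).logits[0, -1]
    finally:
        handle.remove()
    return logits

base = "What did the chef prepare"      # filler-gap question (base input)
source = "The chef prepared the meal"   # declarative counterpart (source input)
vec = hidden_at(source)
print(patched_logits(base, vec).topk(5))  # does the patch shift the prediction?
```

A causal-abstraction study would compare these patched predictions against the predictions of a hypothesized high-level analysis; the distributed variant additionally learns a rotation so that the intervention targets a subspace rather than raw neurons.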