This paper points out that explainable AI (XAI) research lacks conceptual foundations and remains poorly integrated with broader discourses on scientific explanation, and presents a new XAI study that bridges this gap by drawing on explanatory strategies from the scientific and philosophy-of-science literature. In particular, we present a mechanistic strategy for explaining the functional composition of deep learning systems: opaque AI systems are explained by identifying the mechanisms that drive their decision-making. For deep neural networks, this means identifying functionally relevant components such as neurons, layers, circuits, or activation patterns, and decomposing, localizing, and reconstructing them to understand their roles. Through proof-of-concept case studies in image recognition and language modeling, we connect this theoretical approach to recent work from AI research labs such as OpenAI and Anthropic, and suggest that a systematic account of how a model's functional components compose its behavior can facilitate more thoroughly explainable AI by uncovering elements that individual explainability techniques may overlook.
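
As a minimal, illustrative sketch of the decompose-and-localize step described above (not taken from the study itself), the following Python/PyTorch snippet records the activation pattern of an intermediate layer in a toy image classifier with a forward hook, then zero-ablates one channel to test whether that component is functionally relevant to the model's decision. The model architecture, layer choice, and all variable names are hypothetical placeholders.

```python
# Illustrative sketch: localizing a functionally relevant component in a toy CNN
# by recording intermediate activations and ablating one channel.
# Model, data, and names are hypothetical, not drawn from the paper's case studies.
import torch
import torch.nn as nn

torch.manual_seed(0)

# A toy image classifier standing in for an opaque deep learning system.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),   # conv layer whose channels we inspect
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),                              # two-class output
)
model.eval()

x = torch.randn(1, 3, 32, 32)                     # a stand-in input image

# --- Decompose / localize: record the intermediate activation pattern. ---
captured = {}

def save_activation(module, inputs, output):
    captured["conv"] = output.detach()

hook = model[0].register_forward_hook(save_activation)
with torch.no_grad():
    baseline_logits = model(x)
hook.remove()

print("baseline logits:", baseline_logits)
print("mean activation per channel:", captured["conv"].mean(dim=(0, 2, 3)))

# --- Test functional relevance: zero-ablate one candidate channel and re-run. ---
target_channel = int(captured["conv"].mean(dim=(0, 2, 3)).argmax())

def ablate_channel(module, inputs, output):
    output = output.clone()
    output[:, target_channel] = 0.0               # knock out the candidate component
    return output

hook = model[0].register_forward_hook(ablate_channel)
with torch.no_grad():
    ablated_logits = model(x)
hook.remove()

# A large shift in the logits suggests the ablated channel is functionally
# relevant to this decision; a negligible shift suggests it is not.
print("ablated logits: ", ablated_logits)
print("logit change:   ", ablated_logits - baseline_logits)
```

At scale, the same ablation logic underlies circuit-style analyses of the kind reported by industrial interpretability groups, with single channels replaced by candidate circuits or activation directions.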