This paper argues that explainable AI (XAI) can go beyond transparency in creative contexts to support artistic engagement, modifiability, and ongoing practice. While traditional approaches built on refined datasets and human-scale model training can offer artists greater autonomy and control, large-scale generative models, such as text-to-image diffusion systems, often obscure this potential. This paper proposes that even large-scale models can be treated as creative materials if their internal structure is exposed and made manipulable. We develop a technology-based explainability approach rooted in long-term, direct engagement, akin to Schön's "reflection-in-action," and demonstrate its application through model bending and inspection plugins integrated into ComfyUI's node-based interface. We show that by interactively manipulating different parts of a generative model, artists can develop intuition about how each component influences the output.
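The kind of interactive manipulation described above can be sketched in miniature: a toy "model" with named components whose weights can be rescaled, so that the effect of each component on the output can be probed directly. This is a minimal illustrative sketch only; the two-layer model, the layer names, and the `bend` helper are assumptions for illustration, not the paper's actual ComfyUI plugin API or a diffusion model.

```python
def forward(model, x):
    """Apply each named layer (a list of weight rows) as a weighted sum."""
    for name, weights in model.items():
        x = [sum(w * xi for w, xi in zip(row, x)) for row in weights]
    return x

def bend(model, layer_name, scale):
    """Return a copy of the model with one named layer's weights scaled.

    This mirrors the spirit of "model bending": perturb one internal
    component and observe how the change propagates to the output.
    """
    return {
        name: [[w * scale for w in row] for row in weights]
        if name == layer_name else weights
        for name, weights in model.items()
    }

# A hypothetical two-layer toy model (2x2 weight matrices).
model = {
    "layer1": [[0.5, 0.5], [1.0, -1.0]],
    "layer2": [[1.0, 2.0], [0.0, 1.0]],
}

original = forward(model, [1.0, 2.0])
bent = forward(bend(model, "layer1", 2.0), [1.0, 2.0])
# Because the toy model is linear, doubling layer1's weights exactly
# doubles the output, making the component's influence legible.
```

In a real diffusion model the layers are nonlinear, so bending a component produces qualitative rather than strictly proportional changes, which is precisely what makes interactive probing informative for building intuition.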