This paper presents artistic and technical research on the attention mechanisms of video diffusion transformers. Inspired by early video artists who manipulated analog video signals to create new visual aesthetics, we propose a method for extracting and visualizing cross-attention maps from generative video models. Built on the open-source Wan model, the resulting tool offers an interpretable window into the spatial and temporal behavior of attention in text-to-video generation. Through exploratory experimentation and artistic case studies, we examine the potential of attention maps as both an analytical instrument and raw artistic material. This research contributes to the growing field of Explainable AI for Art (XAIxArts), inviting artists to reclaim the inner workings of AI as a creative medium.
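
As a concrete illustration of the extraction step described above, the sketch below registers PyTorch forward hooks on cross-attention modules to capture their attention maps during a forward pass. The `CrossAttention` class, module names, and token shapes here are illustrative assumptions standing in for the Wan model's actual internals, not the paper's implementation.

```python
# Minimal, self-contained sketch of capturing cross-attention maps with
# forward hooks. CrossAttention is a toy stand-in for the blocks inside a
# video diffusion transformer; names and shapes are illustrative only.
import torch
import torch.nn as nn

class CrossAttention(nn.Module):
    """Toy cross-attention: video tokens attend to text tokens."""
    def __init__(self, dim: int):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)

    def forward(self, x, context):
        q, k, v = self.to_q(x), self.to_k(context), self.to_v(context)
        attn = torch.softmax(q @ k.transpose(-1, -2) / q.shape[-1] ** 0.5, dim=-1)
        self.last_attn = attn.detach()  # exposed so the hook below can read it
        return attn @ v

captured = {}

def make_hook(name):
    def hook(module, inputs, output):
        # Store the (batch, video_tokens, text_tokens) map for later viewing.
        captured[name] = module.last_attn.cpu()
    return hook

model = nn.ModuleDict({"block0": CrossAttention(64), "block1": CrossAttention(64)})
for name, module in model.items():
    module.register_forward_hook(make_hook(name))

# Hypothetical token layout: 16 frames of 32x32 latent patches, flattened.
video_tokens = torch.randn(1, 16 * 32 * 32, 64)
text_tokens = torch.randn(1, 77, 64)
for module in model.values():
    video_tokens = module(video_tokens, text_tokens)

# Reshape one captured map back to (frames, h, w) per text token, i.e. where
# each prompt token attends over space and time.
maps = captured["block0"].reshape(1, 16, 32, 32, -1)
print(maps.shape)  # torch.Size([1, 16, 32, 32, 77])
```

The same hook pattern applies unchanged to a real pipeline: one hooks the model's cross-attention submodules and reshapes the flattened video-token axis back into frames and spatial positions for visualization.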