This paper proposes a fundamental distinction regarding the transferability of adversarial attacks. Adversarial examples transfer readily between image classifiers, and text-based exfiltration attacks transfer between language models, yet recent research has shown that image-based exfiltration attacks do not transfer between vision-language models (VLMs). To explain this difference, the authors hypothesize that transferability is confined to attacks operating in the shared input data space, whereas attacks operating in a model's representation space do not transfer without geometric alignment between the models' latent spaces. They support this hypothesis with a mathematical proof, experiments with both representation-space and data-space attacks, and an analysis of the latent geometric structure of VLMs. Ultimately, they demonstrate that transferability is not an inherent property of adversarial attacks but depends on the domain in which an attack operates: the data space shared by all models, or the representation space unique to each model.
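To make the distinction concrete, the following is a minimal sketch (not the authors' code) using small stand-in PyTorch models: a data-space attack optimizes a perturbation against a label defined in the space shared by all models, while a representation-space attack optimizes toward a target vector expressed in one model's private embedding coordinates, which is why the latter should not be expected to transfer to a second model without some geometric alignment between the two latent spaces. All model and variable names here are illustrative assumptions, not the paper's setup.

```python
# Illustrative sketch only: tiny stand-in models, not the paper's experiments.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Two independently initialized "models": their representation spaces are not
# geometrically aligned even though the architectures are identical.
def make_model():
    return nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64), nn.ReLU(),
                         nn.Linear(64, 10))

model_a, model_b = make_model(), make_model()
encoder_a = model_a[:3]  # features before the final classification layer
encoder_b = model_b[:3]

x = torch.rand(1, 3, 32, 32)          # clean input
target_label = torch.tensor([7])      # lives in the shared data/label space
target_embed = encoder_a(torch.rand(1, 3, 32, 32)).detach()  # lives in model A's latent space

def pgd(loss_fn, x, steps=40, eps=8 / 255, alpha=2 / 255):
    """Simple PGD loop minimizing loss_fn over an L-infinity bounded perturbation."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(x + delta)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return (x + delta).detach()

# Data-space attack: the objective is a target label that every model shares.
x_data_attack = pgd(lambda xi: F.cross_entropy(model_a(xi), target_label), x)

# Representation-space attack: the objective is a point in model A's latent space.
x_repr_attack = pgd(lambda xi: F.mse_loss(encoder_a(xi), target_embed), x)

# Transfer check: model B's embedding of the representation-space adversarial input
# is generally far from the target, because B's latent coordinates relate to A's
# only through some unknown transformation (the missing geometric alignment).
print("distance in A's space:", F.mse_loss(encoder_a(x_repr_attack), target_embed).item())
print("distance in B's space:", F.mse_loss(encoder_b(x_repr_attack), target_embed).item())
```

Under these assumptions, the data-space objective is portable because its target (a label) is meaningful to every model, whereas the representation-space target is only meaningful inside the coordinate system of the model that produced it.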