This study proposes persistent homology (PH), a tool from topological data analysis, to characterize how adversarial inputs alter the internal representation space of large language models (LLMs). Whereas existing interpretability methods focus on linear directions or isolated features, PH captures the high-dimensional, nonlinear relational geometry of the latent space. Analyzing six state-of-the-art models under two adversarial settings, indirect prompt injection and backdoor fine-tuning, we identify consistent topological signatures of adversarial influence. Our results reveal that adversarial inputs induce "topological compression" of the latent space, simplifying its structure.
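To make the pipeline concrete, the following is a minimal sketch (not the authors' implementation) of how persistent homology could be applied to a point cloud of hidden-state activations. It assumes activations have already been extracted into a NumPy array and uses the `ripser` package as a stand-in PH backend; the array shapes, the `topology_summary` helper, and the total-persistence statistic are illustrative assumptions, not details from the study.

```python
# Minimal sketch: persistence diagrams of an LLM's hidden-state point cloud.
# Assumptions (not from the paper): activations are collected externally into a
# NumPy array of shape (n_points, d_model), and the `ripser` package is used
# as a stand-in for whatever PH backend the study actually employs.
import numpy as np
from ripser import ripser

def total_persistence(diagram: np.ndarray) -> float:
    """Sum of (death - birth) over the finite bars of one persistence diagram."""
    finite = diagram[np.isfinite(diagram[:, 1])]
    return float(np.sum(finite[:, 1] - finite[:, 0]))

def topology_summary(hidden_states: np.ndarray, maxdim: int = 1) -> dict:
    """Compute H0/H1 persistence diagrams of a hidden-state point cloud
    and summarize each homology dimension by its total persistence."""
    diagrams = ripser(hidden_states, maxdim=maxdim)["dgms"]
    return {f"H{k}_total_persistence": total_persistence(d)
            for k, d in enumerate(diagrams)}

# Hypothetical comparison of benign vs. adversarial activations (random
# placeholders here). Lower total persistence under attack would be consistent
# with the "topological compression" effect described above.
benign = np.random.default_rng(0).normal(size=(200, 64))
adversarial = 0.3 * np.random.default_rng(1).normal(size=(200, 64))
print(topology_summary(benign))
print(topology_summary(adversarial))
```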