This paper introduces graph probing, a method for revealing functional connectivity in large language models (LLMs) and linking it to language-generation performance. Experiments across diverse LLM architectures and scales show that network topology alone can predict next-token prediction performance. In particular, probes trained on network topology outperform probes trained on raw activations, providing evidence that LLMs leverage this topological information. Building on this finding, we demonstrate applications that improve LLM efficiency, reliability, and security: model pruning, hallucination detection, and LLM fingerprinting.
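To make the idea concrete, below is a minimal sketch of a graph-probing pipeline. It assumes functional connectivity is estimated from pairwise activation correlations and that the probe is a ridge regressor over simple topological features; the paper's exact graph construction, feature set, and probe architecture may differ, and the data here is synthetic placeholder input rather than real LLM activations.

```python
# Minimal graph-probing sketch (assumptions: connectivity = thresholded
# activation correlations; probe = ridge regression on topology features).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def connectivity_graph(acts: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Binary functional-connectivity graph from activations.

    acts: (n_tokens, n_neurons) hidden activations for one input sequence.
    Returns an (n_neurons, n_neurons) adjacency matrix linking neuron pairs
    whose activation series are strongly correlated.
    """
    corr = np.corrcoef(acts.T)                 # pairwise neuron correlations
    adj = (np.abs(corr) > threshold).astype(float)
    np.fill_diagonal(adj, 0.0)                 # no self-loops
    return adj

def topology_features(adj: np.ndarray) -> np.ndarray:
    """Simple topological descriptors: density, degree stats, spectral gap."""
    deg = adj.sum(axis=1)
    eigvals = np.sort(np.linalg.eigvalsh(adj))
    return np.array([
        adj.mean(),                            # edge density
        deg.mean(), deg.std(),                 # degree distribution
        eigvals[-1] - eigvals[-2],             # spectral gap
    ])

# Synthetic stand-in for (activations, next-token loss) pairs from an LLM;
# in practice these would come from the model's hidden states and eval loss.
n_samples, n_tokens, n_neurons = 200, 64, 32
X = np.stack([
    topology_features(connectivity_graph(rng.normal(size=(n_tokens, n_neurons))))
    for _ in range(n_samples)
])
y = rng.normal(size=n_samples)                 # placeholder next-token losses

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
probe = Ridge(alpha=1.0).fit(X_tr, y_tr)
print("probe R^2 on held-out graphs:", probe.score(X_te, y_te))
```

The key contrast the abstract draws would then be between this topology probe and an otherwise identical probe fit directly on the activation matrices.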