This paper investigates the phenomenon of neuron universality in independently trained GPT-2 Small models. We analyze five GPT-2 Small models at three checkpoints (100k, 200k, and 300k training steps), using activation correlation analysis on a 5-million-token dataset to identify universal neurons, i.e., neurons whose activations are consistently correlated across models. Ablation experiments measuring loss and KL divergence show that universal neurons have a significant functional impact on model predictions. We further quantify neuron persistence, finding that universal neurons remain highly stable across training checkpoints, particularly in deeper layers. Together, these results suggest that a stable, universal representational structure emerges during neural network training.
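The core matching step can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes activations are collected as (tokens × neurons) matrices, pairs each neuron in one model with its maximally correlated neuron in another via Pearson correlation, and uses an illustrative threshold; the function name, synthetic data, and threshold are all assumptions.

```python
import numpy as np

def max_cross_correlation(acts_a, acts_b):
    """For each neuron in model A, find its best-correlated neuron in model B.

    acts_a: (n_tokens, n_neurons_a) activations from model A
    acts_b: (n_tokens, n_neurons_b) activations from model B
    Returns (best_corr, best_idx): the signed Pearson r of each A-neuron's
    best match in B (by absolute correlation) and that match's index.
    """
    # Standardize columns so a scaled dot product gives Pearson correlation.
    za = (acts_a - acts_a.mean(0)) / (acts_a.std(0) + 1e-8)
    zb = (acts_b - acts_b.mean(0)) / (acts_b.std(0) + 1e-8)
    corr = za.T @ zb / acts_a.shape[0]      # (n_a, n_b) correlation matrix
    best_idx = np.abs(corr).argmax(axis=1)
    best_corr = corr[np.arange(corr.shape[0]), best_idx]
    return best_corr, best_idx

# Synthetic demo: the first 4 neurons of A and the last 4 of B share features.
rng = np.random.default_rng(0)
shared = rng.normal(size=(1000, 4))         # stand-in for shared features
acts_a = np.hstack([shared + 0.1 * rng.normal(size=(1000, 4)),
                    rng.normal(size=(1000, 4))])
acts_b = np.hstack([rng.normal(size=(1000, 4)),
                    shared + 0.1 * rng.normal(size=(1000, 4))])
best_corr, best_idx = max_cross_correlation(acts_a, acts_b)
universal = np.abs(best_corr) > 0.5         # illustrative threshold
```

In this toy setup, only the four neurons that share an underlying feature with a counterpart in the other model are flagged as universal; the purely noise-driven neurons fall below the threshold.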