This paper provides an empirical analysis of gradient dynamics, which play a pivotal role in the stability and generalization of deep neural networks. We analyze the evolution of the variance and standard deviation of gradients in convolutional neural networks and find that these statistics exhibit consistent trends at both the layer-wise and global scales. Based on these observations, we propose a hyperparameter-free gradient regularization method that aligns gradient scaling with this naturally observed evolution. The method prevents unintended gradient amplification, stabilizes optimization, and preserves convergence guarantees. Experiments on the CIFAR-100 benchmark with ResNet-20, ResNet-56, and VGG-16-BN demonstrate that the method maintains or improves test accuracy even under strong regularization. Beyond these performance results, the study highlights the value of directly tracking gradient dynamics to bridge the gap between theoretical expectations and empirical behavior and to inform future optimization research.
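The abstract does not specify the exact scaling rule, so the following is only a minimal, hypothetical sketch of the general idea of tracking per-layer gradient statistics and rescaling gradients toward their observed evolution, assuming a PyTorch model. The function name `track_and_rescale_grads`, the EMA-based reference, and the momentum value are illustrative assumptions, not the paper's method (which is described as hyperparameter-free).

```python
# Illustrative sketch (NOT the paper's method): track each layer's gradient
# standard deviation during training and rescale the gradient so its scale
# follows a slowly evolving reference, damping sudden amplification.
import torch
import torch.nn as nn


@torch.no_grad()
def track_and_rescale_grads(model: nn.Module, ema_std: dict, momentum: float = 0.9):
    """Update a running estimate of each layer's gradient std and rescale
    the current gradient to match it. `ema_std` maps parameter names to
    floats and is updated in place. The momentum value is an assumption
    made for this sketch only."""
    for name, p in model.named_parameters():
        if p.grad is None or p.grad.numel() < 2:
            continue
        cur_std = p.grad.std().item()
        if cur_std == 0.0:
            continue
        prev = ema_std.get(name, cur_std)
        ref = momentum * prev + (1.0 - momentum) * cur_std
        ema_std[name] = ref
        # Rescale so the gradient's std tracks the reference, following the
        # natural trend of the statistics rather than a fixed target value.
        p.grad.mul_(ref / cur_std)


# Usage inside a standard training step, after backward() and before step():
# ema_std = {}
# loss.backward()
# track_and_rescale_grads(model, ema_std)
# optimizer.step()
```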