Spiking neural networks (SNNs) are well suited to processing event-based neuromorphic data, but directly trained SNNs suffer from severe overfitting due to the limited size of neuromorphic datasets and the gradient mismatch problem. In this paper, we propose a temporal regularization training (TRT) method that alleviates this overfitting by introducing a time-dependent regularization mechanism that imposes stronger constraints on the early time steps. We compare TRT with state-of-the-art methods on the CIFAR10/100, ImageNet100, DVS-CIFAR10, and N-Caltech101 datasets, and verify its effectiveness through ablation studies including loss landscape visualization and learning curve analysis. In addition, we give a theoretical interpretation of TRT's temporal regularization mechanism based on Fisher information analysis. By tracking Fisher information during TRT training, we reveal the temporal information concentration (TIC) phenomenon, in which Fisher information progressively concentrates in the early time steps. This observation shows that TRT's time-decay regularization improves generalization by inducing the network to learn robust features at the information-rich early time steps. The source code is available on GitHub.
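The time-decay weighting idea described above can be sketched in a few lines. This is a minimal illustration only: the geometric decay schedule, the function names, and the form of the per-time-step loss weighting are assumptions for exposition, not the paper's actual implementation.

```python
def trt_weights(num_steps, decay=0.9):
    """Hypothetical time-decaying weights: largest at the first
    time step and shrinking geometrically, so early steps are
    constrained more strongly. Normalized to sum to 1."""
    raw = [decay ** t for t in range(num_steps)]
    total = sum(raw)
    return [r / total for r in raw]

def trt_loss(per_step_losses, decay=0.9):
    """Illustrative weighted objective: combine the per-time-step
    losses of an SNN with the decaying weights above."""
    weights = trt_weights(len(per_step_losses), decay)
    return sum(w * l for w, l in zip(weights, per_step_losses))
```

For example, with four time steps and `decay=0.5` the weights are proportional to 1, 0.5, 0.25, 0.125, so the first time step carries more than half of the total regularization pressure.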