Using random matrix theory, this paper analyzes the mechanisms by which data augmentation improves the performance of deep neural networks. We examine how increased data diversity affects the spectral distribution of the weight space and compare data augmentation with dropout and weight decay. We find that data diversity alters the weight spectral distribution in a manner comparable to these explicit regularization techniques, with a pattern most closely resembling that of dropout. Based on these insights, we further propose a metric that explains and compares the benefits of diversity obtained through traditional data augmentation and through synthetic data.
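As a concrete illustration of the kind of spectral analysis referred to above, the following minimal sketch (our own illustrative code, not the paper's implementation) computes the empirical spectral density of a single weight matrix with NumPy; the function name, matrix shapes, and random initialization are assumptions made only for the example.

```python
import numpy as np

def empirical_spectral_density(weight, num_bins=100):
    """Empirical spectral density (ESD) of a weight matrix.

    Returns a histogram of the eigenvalues of the correlation matrix
    W^T W / n, the object typically studied in random-matrix-theory
    analyses of trained networks.
    """
    n, _ = weight.shape
    corr = weight.T @ weight / n          # empirical correlation matrix
    eigvals = np.linalg.eigvalsh(corr)    # real, non-negative eigenvalues
    density, edges = np.histogram(eigvals, bins=num_bins, density=True)
    return density, edges

# Example: spectrum of a randomly initialized (untrained) layer.
# Comparing this baseline with the spectrum of a trained layer is one way
# to visualize how regularization reshapes the eigenvalue distribution.
rng = np.random.default_rng(0)
w_random = rng.normal(0.0, 1.0 / np.sqrt(512), size=(512, 256))
density, edges = empirical_spectral_density(w_random)
print("largest eigenvalue:", edges[-1])
```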