This paper proposes cross-regularization, a method that automates complexity control to prevent overfitting. Unlike conventional manual hyperparameter tuning, cross-regularization adjusts regularization parameters directly via gradients computed on validation data, so that training data drives feature learning while validation data governs complexity control. We prove that this procedure converges to the cross-validation optimum, and show that, when implemented as noise injection in neural networks, it naturally yields strong noise robustness and architecture-specific regularization. The method also integrates readily with data augmentation and uncertainty calibration, and its gradient-based formulation preserves single-run efficiency.
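The core idea, as the abstract describes it, is to alternate ordinary training steps with updates to the regularization strength driven by validation gradients. The sketch below is not the paper's implementation; it is a minimal illustration of the principle on ridge regression, using a one-step unrolled hypergradient (the validation-loss gradient chained through a single training update) to adapt the penalty weight `lambda`. All names, learning rates, and the synthetic data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression: a few informative features plus pure-noise features,
# so some regularization is genuinely needed (illustrative setup, not from the paper).
n_train, n_val, d = 40, 40, 20
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.0, 0.5]
X_tr = rng.normal(size=(n_train, d))
y_tr = X_tr @ w_true + rng.normal(scale=1.0, size=n_train)
X_va = rng.normal(size=(n_val, d))
y_va = X_va @ w_true + rng.normal(scale=1.0, size=n_val)

def mse_grad(X, y, w):
    """Gradient of mean squared error with respect to w."""
    return 2 * X.T @ (X @ w - y) / len(y)

w = np.zeros(d)
log_lam = np.log(1e-3)      # optimize log(lambda) so lambda stays positive
lr_w, lr_lam = 0.05, 0.5

for step in range(500):
    lam = np.exp(log_lam)
    # Training data updates the weights: gradient of train loss + ridge penalty.
    g_train = mse_grad(X_tr, y_tr, w) + 2 * lam * w
    w_new = w - lr_w * g_train
    # Validation data updates the regularizer: chain the validation gradient
    # through the one training step (d w_new / d lambda = -lr_w * 2w).
    g_val = mse_grad(X_va, y_va, w_new)
    dval_dlam = g_val @ (-lr_w * 2 * w)
    log_lam -= lr_lam * dval_dlam * lam   # chain rule for the log-parameterization
    w = w_new
```

Note the division of labor the abstract emphasizes: the training set never touches `log_lam` and the validation set never directly updates `w`, and both quantities are learned in a single run rather than by an outer cross-validation sweep.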