This paper proposes "anti-regularization," a novel technique that intentionally boosts the expressive power of models in small-data regimes. Anti-regularization adds a reward term with reversed sign to the loss function, expanding model capacity when the sample size is small and fading the intervention out as the sample size grows, following a power-law decay schedule. We formulate spectral safety conditions and trust-region constraints, and design a lightweight safety mechanism that combines a projection operator with gradient clipping to keep the intervention stable. The theoretical analysis covers linear smoothers and neural tangent kernel regimes, and gives practical guidance for choosing the decay exponent via an empirical trade-off between risk and variance. Experiments show that anti-regularization mitigates underfitting in both regression and classification while preserving generalization and improving calibration. Ablation analysis confirms that both the decay schedule and the safety mechanism are essential for avoiding overfitting and instability. We additionally propose a degrees-of-freedom targeting schedule that keeps per-sample complexity constant. Anti-regularization is a simple, reproducible procedure that integrates seamlessly into standard empirical risk minimization pipelines, enabling robust learning under limited data and resource constraints by intervening only when needed and receding otherwise.
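
The abstract does not specify the exact form of the reward term, the decay schedule, or the safety mechanism, so the following is only a minimal sketch under illustrative assumptions: the reward is taken to be a sign-reversed L2 penalty, the schedule is lam(n) = lam0 * n**(-alpha), and the safeguard is reduced to gradient clipping (the projection operator and spectral checks are omitted). The names and values lam0, alpha, and max_norm are placeholders, not quantities from the paper.

```python
# Minimal sketch of anti-regularization (not the authors' reference code):
# a sign-reversed L2 term is *subtracted* from the ERM loss, with a strength
# that decays with the sample size n via a power law.
import numpy as np


def anti_reg_strength(n, lam0=0.1, alpha=1.0):
    """Power-law decay schedule: strong intervention at small n, fading as n grows."""
    return lam0 * n ** (-alpha)


def loss_and_grad(w, X, y, lam):
    """Squared-error ERM objective minus a lam-weighted L2 term (the 'reward')."""
    n = X.shape[0]
    resid = X @ w - y
    loss = 0.5 * np.mean(resid ** 2) - 0.5 * lam * np.dot(w, w)
    grad = X.T @ resid / n - lam * w
    return loss, grad


def clip_grad(grad, max_norm=1.0):
    """Lightweight safeguard: rescale the gradient if its norm exceeds max_norm."""
    norm = np.linalg.norm(grad)
    return grad * (max_norm / norm) if norm > max_norm else grad


# Toy usage: small-sample linear regression trained by gradient descent with
# the anti-regularized objective and clipped updates.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=20)

lam = anti_reg_strength(n=X.shape[0])  # intervention strength for this sample size
w = np.zeros(5)
for _ in range(200):
    _, g = loss_and_grad(w, X, y, lam)
    w -= 0.1 * clip_grad(g)
```

In a standard ERM pipeline the same term would simply be added to the training loss, with lam recomputed from the current sample size, so that the intervention vanishes automatically as more data becomes available.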