This paper proposes a novel regularization loss that guides samples toward a standard Gaussian distribution, facilitating various downstream tasks, including optimization in the latent space of text-to-image models. We treat the elements of a high-dimensional sample as one-dimensional standard Gaussian variables in the spatial domain and define a composite loss that combines moment-based regularization in the spatial domain with power-spectrum-based regularization in the spectral domain. Because the expected values of the moments and of the power-spectrum distribution are analytically known, the loss enforces consistency with these properties. To ensure permutation invariance, the loss is applied to randomly permuted inputs. Notably, existing Gaussian-based regularizations are integrated within our unified framework: some correspond to moment losses of particular orders, while previous covariance-matching losses are equivalent to our spectral loss yet incur higher time complexity because they are computed in the spatial domain. We demonstrate the application of our regularization in generative modeling for test-time reward alignment using text-to-image models, focusing specifically on improving aesthetics and text alignment. The proposed regularization outperforms existing Gaussian regularizations, effectively preventing reward hacking and accelerating convergence.
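To make the composite loss concrete, the following is a minimal sketch, not the paper's implementation: the function name, the number of matched moments, and the way the spectral statistics are summarized are all assumptions. It permutes a flattened sample, penalizes deviations of its empirical moments from those of N(0, 1) (0, 1, 0, 3 for orders one through four), and penalizes deviations of the normalized power spectrum from its known expectation (for i.i.d. standard Gaussian entries, the non-DC, non-Nyquist power values |X_f|^2 / n follow an Exp(1) distribution, so their first two moments should be 1 and 2).

```python
import numpy as np

def gaussian_reg_loss(z, num_moments=4, rng=None):
    """Hypothetical sketch of a moment + power-spectrum loss that pulls
    a sample toward i.i.d. standard Gaussian statistics."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(z, dtype=np.float64).ravel()
    n = x.size
    # Random permutation so the loss does not depend on element order.
    x = x[rng.permutation(n)]

    # Spatial-domain moment loss: E[x]=0, E[x^2]=1, E[x^3]=0, E[x^4]=3 for N(0,1).
    targets = [0.0, 1.0, 0.0, 3.0]
    moment_loss = sum(
        ((x ** (k + 1)).mean() - t) ** 2
        for k, t in enumerate(targets[:num_moments])
    )

    # Spectral-domain loss: normalized power of the non-DC, non-Nyquist
    # frequencies is approximately Exp(1) under the Gaussian hypothesis,
    # so match its first two moments (1 and 2).
    p = np.abs(np.fft.rfft(x)[1:-1]) ** 2 / n
    spectral_loss = (p.mean() - 1.0) ** 2 + ((p ** 2).mean() - 2.0) ** 2

    return moment_loss + spectral_loss
```

A sample drawn from N(0, 1) yields a small loss, whereas a rescaled (non-unit-variance) sample is penalized heavily through both the moment and spectral terms; in latent-space optimization this term would be added, suitably weighted, to the task objective.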