This paper investigates the limitations of a common assumption in probabilistic generative models for image generation: that new images can be produced simply by sampling from a learned global data distribution. Motivated by the observation that learning the global distribution leads to memorization rather than generative behavior, we propose two theoretical frameworks: the mutually exclusive probability space (MEPS) and the local dependence hypothesis (LDH). MEPS stems from the observation that deterministic mappings of random variables (e.g., neural networks) reduce the redundancy coefficients between related random variables, thereby promoting exclusivity. We propose a lower bound on the redundancy coefficients and introduce the binary latent autoencoder (BL-AE), which encodes images into binary latent representations. LDH formalizes dependencies within a finite observation radius and motivates the γ-autoregressive random variable model (γ-ARVM), an autoregressive model with a variable observation range γ that predicts a histogram over the next token. We observe that as the observation range increases, the autoregressive model gradually shifts toward memorization; under full global dependence, the model operates as a pure memory device when using the binary latent variables produced by BL-AE. Extensive experiments and discussion support these observations.
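To make the role of the observation range γ concrete, the sketch below is a minimal, counting-based illustration (not the paper's implementation; the class name `GammaHistogramModel` and the toy data are assumptions) of an autoregressive model that predicts a histogram over the next token conditioned only on the previous γ tokens. It also illustrates the abstract's claim: as γ grows, contexts become unique and the predicted histograms collapse onto the training sequences, i.e., memorization.

```python
from collections import Counter, defaultdict

class GammaHistogramModel:
    """Toy next-token histogram model with a finite observation range gamma.

    For each context (the previous `gamma` tokens) it stores a histogram of
    the tokens that followed that context in the training sequences.
    """

    def __init__(self, gamma):
        self.gamma = gamma
        self.histograms = defaultdict(Counter)

    def fit(self, sequences):
        for seq in sequences:
            for t in range(len(seq)):
                context = tuple(seq[max(0, t - self.gamma):t])
                self.histograms[context][seq[t]] += 1
        return self

    def next_token_histogram(self, prefix):
        """Return the empirical distribution over the next token given a prefix."""
        context = tuple(prefix[-self.gamma:]) if self.gamma > 0 else ()
        counts = self.histograms.get(context, Counter())
        total = sum(counts.values())
        return {tok: c / total for tok, c in counts.items()} if total else {}

# Toy binary-token sequences (stand-ins for BL-AE latents).
train = [[0, 1, 1, 0, 1], [0, 1, 0, 0, 1]]

# Small gamma pools many contexts, so the histogram stays spread out
# (generative behaviour): here {1: 1/3, 0: 2/3}.
print(GammaHistogramModel(gamma=1).fit(train).next_token_histogram([0, 1]))

# Large gamma makes each context unique, so the histogram degenerates to a
# single training continuation (memorization): here {1: 1.0}.
print(GammaHistogramModel(gamma=4).fit(train).next_token_histogram([0, 1, 1, 0]))
```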