This paper demonstrates a novel asymptotic equipartition property for the perplexity of long texts generated by language models and presents supporting experimental evidence obtained with an open-source model. Specifically, we show that the log perplexity of long texts generated by a language model asymptotically converges to the average entropy of the model's token distributions. This defines a "typical set" to which all long synthetic texts generated by a language model should belong. We refine the concept of a typical set to include only grammatically correct texts and show that, under a very general definition of grammar, this refined typical set is a very small subset of all possible grammatically correct texts. This means that language models are strongly constrained in their range of possible behaviors and outputs. Because this work makes no simplifying assumptions, such as assuming normality of the statistics of language-model outputs, it can be applied directly to real-world models without approximations. We discuss potential applications of the typical-set concept to problems such as synthetic text detection and membership inference on training datasets.
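As a minimal illustration of the kind of measurement the abstract refers to, the sketch below compares the per-token log perplexity of a model-generated text with the average entropy of the model's next-token distributions; under the equipartition claim, the two quantities should be close for long enough generations. The choice of GPT-2 via Hugging Face transformers, the prompt, the 512-token length, and reporting in nats are assumptions made only for this illustration and are not taken from the paper's actual experimental protocol.

```python
# Minimal sketch (not the paper's code): compare the empirical per-token
# log-loss of a text sampled from a language model with the average entropy
# of the model's next-token distributions. GPT-2 stands in for "an
# open-source model"; the paper's model and setup may differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any open-source causal LM could be used
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# Sample a long synthetic text from the model itself (ancestral sampling).
prompt = tokenizer("The", return_tensors="pt")
with torch.no_grad():
    generated = model.generate(
        **prompt,
        do_sample=True,
        max_new_tokens=512,
        pad_token_id=tokenizer.eos_token_id,
    )

with torch.no_grad():
    logits = model(generated).logits[:, :-1, :]  # predictions for tokens 2..n
    targets = generated[:, 1:]

    # Empirical per-token log-loss (log perplexity) of the generated text.
    log_probs = torch.log_softmax(logits, dim=-1)
    token_log_probs = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    log_perplexity = -token_log_probs.mean().item()

    # Average entropy of the model's next-token distributions along the text.
    probs = log_probs.exp()
    avg_entropy = -(probs * log_probs).sum(dim=-1).mean().item()

print(f"log perplexity : {log_perplexity:.3f} nats/token")
print(f"average entropy: {avg_entropy:.3f} nats/token")
# For long generations the two numbers should be close, which is the
# typical-set behavior discussed in the abstract.
```

Averaging over several independent generations, or increasing the generation length, should tighten the agreement between the two quantities, in line with the asymptotic nature of the claim.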