We investigated the lack of diversity in the outputs of aligned large language models (LLMs) from the perspective of how concentrated their probability distributions are. To quantify this, we introduced the Branching Factor (BF), a token-invariant metric that measures the effective number of plausible next tokens at each step of generation. Experimental results show that (1) the BF decreases as generation progresses, making the LLM increasingly predictable, and (2) alignment tuning sharply concentrates the model's output distribution, substantially reducing the BF. These findings explain why aligned models are less sensitive to decoding strategies and why aligned chain-of-thought (CoT) models produce stable outputs while generating long reasoning chains. Rather than fundamentally changing the model's behavior, alignment tuning steers the model toward stylistic tokens that open up low-entropy trajectories already present in the base model.
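To make the metric concrete, the following is a minimal sketch of how a per-step Branching Factor could be computed, assuming BF is taken as the exponential of the Shannon entropy of the next-token distribution (i.e., the distribution's perplexity, which matches the informal reading "effective number of plausible next tokens"); the exact definition and the example distributions below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def branching_factor(probs: np.ndarray, eps: float = 1e-12) -> float:
    """Effective number of plausible next tokens for a single next-token
    distribution, computed as exp(Shannon entropy) of that distribution."""
    p = probs / probs.sum()                      # normalize defensively
    entropy = -np.sum(p * np.log(p + eps))       # entropy in nats
    return float(np.exp(entropy))

# Hypothetical next-token distributions for illustration only:
# a base model spreading mass over many candidates vs. an aligned model
# concentrating mass on a few preferred (stylistic) tokens.
base_probs = np.array([0.25, 0.20, 0.15, 0.15, 0.15, 0.10])
aligned_probs = np.array([0.85, 0.10, 0.03, 0.01, 0.005, 0.005])

print(f"base BF    = {branching_factor(base_probs):.2f}")     # ~5.8
print(f"aligned BF = {branching_factor(aligned_probs):.2f}")  # ~1.8
```

Averaging such per-step values over a generated sequence would give a trajectory-level BF, which under this reading is what shrinks both as generation progresses and after alignment tuning.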