To improve the interpretability of Transformer models, this paper proposes the Entropy-Lens framework, which builds an entropy profile by computing the Shannon entropy of the token distribution induced at each layer. Rather than analyzing latent representations, the framework tracks how the token distribution evolves directly in vocabulary space, summarizing the model's computation from an information-theoretic perspective. The resulting entropy profile exposes characteristic computational patterns and correlates with prompt type, task format, and output accuracy. Experiments on a range of Transformer models and entropy orders α (with Shannon entropy as a special case) verify the stability and generality of these profiles. All of this is achieved without gradient computation, fine-tuning, or access to the model's internal information.
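As a rough illustration of the kind of computation involved (not the authors' implementation), the sketch below projects each layer's hidden states through the unembedding of a Hugging Face-style causal language model and measures the entropy of the resulting vocabulary distributions. The function name `entropy_profile`, the averaging over tokens, and the omission of the final layer normalization before unembedding are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def entropy_profile(model, input_ids, alpha=1.0, eps=1e-12):
    """Hypothetical sketch: per-layer entropy of the vocabulary distribution
    obtained by projecting each hidden state through the model's unembedding
    (a logit-lens-style readout). alpha=1 gives Shannon entropy; other values
    give a generalized (Renyi-type) entropy of order alpha.
    """
    with torch.no_grad():
        out = model(input_ids, output_hidden_states=True)
        unembed = model.get_output_embeddings()           # vocabulary projection
        profile = []
        for h in out.hidden_states:                       # one entry per layer
            probs = F.softmax(unembed(h), dim=-1)         # (batch, seq, vocab)
            if alpha == 1.0:                              # Shannon entropy
                ent = -(probs * (probs + eps).log()).sum(-1)
            else:                                         # order-alpha entropy
                ent = (probs.pow(alpha).sum(-1) + eps).log() / (1.0 - alpha)
            profile.append(ent.mean().item())             # average over batch and tokens
    return profile                                        # list of per-layer entropies
```

Because the readout only uses forward activations and the fixed unembedding matrix, such a profile can be computed in a single forward pass, with no gradients or parameter updates.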