This paper provides a mechanistic analysis of how post-training, the process essential for transforming a pre-trained large language model (LLM) into a more useful and aligned model, reshapes the LLM's internal structure. We compare base models and their post-trained counterparts across model families and datasets from four perspectives: factual knowledge storage locations, knowledge representations, truth and rejection representations, and confidence levels. We draw four conclusions. First, post-training develops new knowledge representations and adapts those of the base model without altering where factual knowledge is stored. Second, truth and rejection can be represented as vectors in the hidden representation space; the truth direction is highly similar between the base model and the post-trained models and transfers effectively to interventions. Third, the rejection direction differs between the base model and the post-trained models and exhibits limited transferability. Fourth, the confidence differences between the base model and the post-trained models cannot be attributed to entropy neurons. This study sheds light on which underlying mechanisms are preserved and which are changed during post-training, facilitates subsequent work such as model tuning, and potentially informs future research on interpretability and LLM post-training.
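To make the second claim concrete, the sketch below illustrates one common way such a direction can be extracted and used for intervention: a difference-of-means "truth direction" in hidden space, applied via activation addition. This is a minimal, generic illustration under assumptions, not the paper's exact procedure; all function names, variable names, and the scaling parameter are hypothetical, and hidden states are assumed to be precomputed.

```python
# Illustrative sketch (not necessarily the paper's method): extract a "truth
# direction" as the difference of mean hidden states over true vs. false
# statements, then apply it as an additive intervention at one layer.
import torch

def truth_direction(h_true: torch.Tensor, h_false: torch.Tensor) -> torch.Tensor:
    """Difference-of-means direction in hidden space.

    h_true, h_false: (num_examples, hidden_dim) hidden states at one layer,
    taken at a fixed token position for true and false statements respectively.
    """
    d = h_true.mean(dim=0) - h_false.mean(dim=0)
    return d / d.norm()  # unit-norm direction

def intervene(hidden: torch.Tensor, direction: torch.Tensor, alpha: float = 5.0) -> torch.Tensor:
    """Shift hidden states along the direction (activation addition)."""
    return hidden + alpha * direction

# Toy usage with random stand-ins for real hidden states.
hidden_dim = 4096
h_true = torch.randn(128, hidden_dim) + 0.1   # hidden states of true statements
h_false = torch.randn(128, hidden_dim) - 0.1  # hidden states of false statements
d_truth = truth_direction(h_true, h_false)

# Cross-model transfer in the spirit of the abstract: apply the direction
# computed on one model (e.g., the base model) to hidden states of another
# (e.g., a post-trained model) and observe the effect on outputs.
h_other_model = torch.randn(1, hidden_dim)
h_steered = intervene(h_other_model, d_truth)
print(h_steered.shape)  # torch.Size([1, 4096])
```

Similarity between the base and post-trained models' directions can then be quantified, for example, by the cosine similarity of the two unit vectors.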