Pre-trained molecular encoders have become essential tools for computational chemistry tasks such as property prediction and molecule generation. However, existing approaches that rely solely on final-layer embeddings can discard valuable information. In this study, we analyzed the information flow of five molecular encoders and found that intermediate layers preserve more general features, while the final layer specializes and compresses information. Layer-by-layer evaluations on 22 property prediction tasks revealed that using fixed embeddings from optimal intermediate layers improved performance by an average of 5.4% (up to 28.6%) compared to the final layer. Furthermore, fine-tuning encoders truncated at an intermediate depth yielded even greater improvements, by an average of 8.5% (up to 40.8%), achieving new state-of-the-art results across multiple benchmarks.
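The two strategies above (reading out a fixed embedding at an intermediate layer, and truncating the encoder at that depth before fine-tuning) can be sketched with a toy stand-in for a pre-trained encoder. This is a hypothetical illustration, not the study's code: the layer functions, depth, and dimensions are all illustrative assumptions.

```python
# Toy stand-in for a pre-trained molecular encoder: a stack of layer
# functions. The layer definition is an illustrative assumption.
def make_toy_encoder(num_layers=6):
    """Return a list of 'layers'; each maps a vector to a new vector."""
    def layer(i):
        # Each layer scales and shifts its input, standing in for a
        # transformer block in a real molecular encoder.
        return lambda h: [x * 0.9 + i * 0.01 for x in h]
    return [layer(i) for i in range(num_layers)]

def hidden_states(encoder, inputs):
    """Run every layer, keeping the embedding after each one."""
    states = [inputs]
    h = inputs
    for lyr in encoder:
        h = lyr(h)
        states.append(h)
    return states  # states[k] is the embedding after layer k

encoder = make_toy_encoder(num_layers=6)
states = hidden_states(encoder, [1.0, 2.0, 3.0])

# Strategy 1: take a fixed embedding from an intermediate layer
# (here layer 4, an arbitrary choice) instead of the final layer.
intermediate_embedding = states[4]

# Strategy 2: truncate the encoder at that depth before fine-tuning,
# i.e. drop the layers above it.
truncated_encoder = encoder[:4]
```

In a real pipeline the same idea applies unchanged: collect per-layer hidden states from the pre-trained encoder, pick the layer that maximizes downstream validation performance, and either freeze that embedding or fine-tune only the layers up to that depth.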