In this paper, we propose MoleVers, a molecular model pre-trained with multiple objectives for predicting diverse molecular properties in settings where experimentally validated labels are scarce. MoleVers employs a two-stage pre-training strategy. In the first stage, it learns molecular representations from unlabeled data via masked atom prediction and denoising under extreme noise levels, enabled by a novel branching encoder architecture and dynamic noise scale sampling. In the second stage, these representations are refined using auxiliary labels derived from computational methods such as density functional theory or large language models. Evaluations on 22 small, experimentally validated datasets show that MoleVers achieves state-of-the-art performance, highlighting the effectiveness of the two-stage framework in producing molecular representations that generalize across diverse downstream properties.
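To make the first-stage objectives concrete, the following is a minimal sketch of how masked atom prediction and coordinate denoising with dynamic noise scale sampling might be set up for a single toy molecule. The function names, the uniform sampling range for the noise scale, and the mask ratio are illustrative assumptions, not details taken from MoleVers itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_denoising_batch(coords, sigma_range=(0.1, 3.0)):
    """Dynamic noise scale sampling (illustrative): draw a fresh noise
    scale per call, perturb 3D coordinates with Gaussian noise, and
    return the noise itself as the denoising regression target."""
    sigma = rng.uniform(*sigma_range)            # assumed sampling range
    noise = rng.normal(0.0, sigma, coords.shape)
    return coords + noise, noise, sigma

def mask_atoms(atom_types, mask_ratio=0.15, mask_token=0):
    """Masked atom prediction (illustrative): hide a fraction of atom
    types; the model must recover the originals at masked positions."""
    mask = rng.random(atom_types.shape) < mask_ratio
    masked = np.where(mask, mask_token, atom_types)
    return masked, mask

# Toy molecule: 8 atoms with 3D coordinates and integer atom types.
coords = rng.normal(size=(8, 3))
atom_types = rng.integers(1, 10, size=8)

noisy_coords, target_noise, sigma = make_denoising_batch(coords)
masked_types, mask = mask_atoms(atom_types)
```

In a real pre-training loop, an encoder would consume `masked_types` and `noisy_coords` and be trained to classify the hidden atom types and regress `target_noise`; sampling `sigma` anew each time exposes the model to a spectrum of corruption strengths rather than a single fixed one.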