Existing research has shown that deep learning (DL) models underperform gradient-boosted decision trees (GBDTs) on outlier data, but we point out that this finding is limited to idealized settings that do not capture the complexity of real-world scenarios. We demonstrate that DL models can outperform GBDTs on label-scarce tabular learning-to-rank (LTR) problems. Tabular LTR applications, such as search and recommendation, often have scarce labeled data but abundant unlabeled data, and we show that DL ranking models can leverage this unlabeled data through unsupervised pretraining. Extensive experiments on public and proprietary datasets show that pretrained DL ranking models consistently outperform GBDT ranking models on ranking metrics, by up to 38%.
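To make the pretrain-then-finetune pipeline concrete, below is a minimal sketch of how a DL ranker could exploit unlabeled tabular data: unsupervised pretraining of a feature encoder via masked-feature reconstruction, followed by supervised fine-tuning on the scarce labeled ranking data. This is an illustrative assumption, not the paper's exact pretraining objective or architecture; all names (`TabularEncoder`, `pretrain`, `finetune_ranker`) and hyperparameters are hypothetical.

```python
# Illustrative sketch only: masked-feature reconstruction pretraining on
# unlabeled tabular data, then a pointwise fine-tuning stage for ranking.
import torch
import torch.nn as nn


class TabularEncoder(nn.Module):
    """Maps a raw tabular feature vector to a dense representation."""

    def __init__(self, num_features: int, hidden_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)


def pretrain(encoder, unlabeled_x, num_features, epochs=10, mask_prob=0.3, lr=1e-3):
    """Unsupervised pretraining: randomly mask features and train the encoder
    (plus a small decoder head) to reconstruct the original feature vector."""
    decoder = nn.Linear(128, num_features)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=lr)
    for _ in range(epochs):
        mask = (torch.rand_like(unlabeled_x) < mask_prob).float()
        corrupted = unlabeled_x * (1 - mask)        # zero out masked features
        recon = decoder(encoder(corrupted))
        loss = ((recon - unlabeled_x) ** 2).mean()  # reconstruction error
        opt.zero_grad()
        loss.backward()
        opt.step()


def finetune_ranker(encoder, labeled_x, labels, epochs=10, lr=1e-4):
    """Supervised fine-tuning: a scoring head on the pretrained encoder,
    trained here with a simple pointwise regression loss on relevance labels."""
    head = nn.Linear(128, 1)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=lr)
    for _ in range(epochs):
        scores = head(encoder(labeled_x)).squeeze(-1)
        loss = ((scores - labels) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return head


# Usage example with synthetic data: many unlabeled rows, few labeled ones.
num_features = 136
encoder = TabularEncoder(num_features)
pretrain(encoder, torch.randn(1024, num_features), num_features)
finetune_ranker(encoder, torch.randn(64, num_features), torch.rand(64))
```

In practice the fine-tuning stage would typically use a listwise or pairwise ranking loss and be evaluated with ranking metrics such as NDCG; the pointwise loss above is only to keep the sketch self-contained.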