This paper presents an efficient text classification method for handling the growing volume of scientific literature. We fine-tune pre-trained language models (PLMs) such as BERT, SciBERT, BioBERT, and BlueBERT on the Web of Science (WoS-46985) dataset and apply them to scientific text classification. We expand the dataset with 1,000 additional papers per category, matching the major categories of WoS-46985, collected through seven targeted queries on the WoS database. The fine-tuned PLMs then predict labels for the unlabeled data, and their predictions are combined with a hard-voting strategy to improve accuracy and confidence. Fine-tuning on the expanded dataset with dynamic learning rates and early stopping significantly improves classification accuracy, especially in specialized domains. We demonstrate that domain-specific models such as SciBERT and BioBERT consistently outperform general-purpose models such as BERT. These results highlight the effectiveness of dataset augmentation, inference-based label prediction, hard voting, and fine-tuning in building a robust and scalable solution for automated academic text classification.
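
The hard-voting step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the model names, label strings, and per-document predictions are hypothetical, and ties are assumed to be broken by the order in which labels first appear.

```python
from collections import Counter

def hard_vote(predictions):
    """Majority vote over per-model label predictions for one sample.

    `predictions` is a sequence of labels, one per model; ties are
    broken by the label encountered first.
    """
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical per-model predictions for three documents from an
# ensemble of fine-tuned PLMs (labels are illustrative only).
model_outputs = {
    "BERT":    ["CS", "Medical", "Biochemistry"],
    "SciBERT": ["CS", "Medical", "Medical"],
    "BioBERT": ["ECE", "Medical", "Biochemistry"],
}

# Transpose to per-document votes and take the majority label.
per_doc_votes = zip(*model_outputs.values())
ensemble_labels = [hard_vote(votes) for votes in per_doc_votes]
# ensemble_labels -> ["CS", "Medical", "Biochemistry"]
```

Each unlabeled document receives the label that most of the fine-tuned models agree on, which is what makes the pseudo-labels used for dataset expansion more confident than any single model's prediction.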