This paper compares the fairness of machine learning (ML) models and human evaluators using data from 870 college admissions applicants. Predictions were made with three ML models (XGB, Bi-LSTM, and KNN) using BERT embeddings as input features. The human evaluators comprised experts from diverse backgrounds. To assess individual fairness, we introduce a consistency metric that measures the agreement between the decisions of the ML models and those of the human evaluators. The results show that the ML models outperformed the human evaluators in fairness consistency by 14.08% to 18.79%. These findings demonstrate the potential of ML to improve fairness in the admissions process while maintaining high accuracy, and we propose a hybrid approach that combines human judgment with ML models.
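The abstract does not define the consistency metric precisely. A minimal sketch of one common individual-fairness formulation, the k-nearest-neighbor consistency score (which compares each applicant's decision with the decisions given to the most similar applicants), might look like this; the function name and the choice of Euclidean distance are assumptions for illustration:

```python
import numpy as np

def knn_consistency(features, decisions, k=5):
    """Individual-fairness consistency sketch:
    1 - mean |y_i - mean(y_j for j in kNN(i))|.

    features : array of shape (n, d), e.g. BERT embeddings of applications
    decisions: binary array of shape (n,), 1 = admit, 0 = reject
    Returns a score in [0, 1]; 1.0 means similar applicants
    always receive the same decision.
    """
    X = np.asarray(features, dtype=float)
    y = np.asarray(decisions, dtype=float)
    # Pairwise Euclidean distances between all applicants.
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)          # exclude self-matches
    neighbors = np.argsort(dists, axis=1)[:, :k]   # k nearest per applicant
    # Penalize disagreement with the average decision among neighbors.
    return 1.0 - float(np.mean(np.abs(y - y[neighbors].mean(axis=1))))
```

The same score can be computed separately for the ML models' predictions and for the human evaluators' decisions on identical applicant sets, making the two directly comparable.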