Random Forest (RF) is widely used in many fields for its robust classification performance, but it suffers from high inference cost and model redundancy. In this paper, we propose a novel RF classifier that addresses these issues by removing uninformative features and selectively generating uncorrelated trees. The proposed model is improved iteratively: features carrying little information are removed, the number of new trees is determined analytically, and redundant trees are pruned through correlation-based clustering. Experimental results on eight benchmark datasets (binary and multi-class) show that the proposed model achieves higher accuracy than the standard RF.
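To make the two pruning ideas concrete, the following is a minimal sketch in Python, assuming a scikit-learn-style RF. The importance threshold, the number of clusters, and the choice of hierarchical clustering are illustrative assumptions and not the paper's exact procedure (in particular, the analytical determination of the number of new trees is not reproduced here).

```python
# Hedged sketch: drop low-importance features, then prune trees whose
# predictions are highly correlated. Thresholds and cluster counts are assumed.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# 1) Remove features with little information (here: low impurity importance).
keep = rf.feature_importances_ > 0.01                    # assumed threshold
X_tr_r, X_te_r = X_tr[:, keep], X_te[:, keep]
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr_r, y_tr)

# 2) Cluster trees by the correlation of their predictions and keep one
#    representative per cluster, discarding redundant (correlated) trees.
preds = np.array([t.predict(X_tr_r) for t in rf.estimators_])
dist = 1.0 - np.corrcoef(preds)                           # correlation -> distance
Z = linkage(squareform(dist, checks=False), method="average")
labels = fcluster(Z, t=50, criterion="maxclust")          # assumed cluster count
pruned = [rf.estimators_[np.where(labels == c)[0][0]] for c in np.unique(labels)]

# Majority vote over the pruned ensemble (binary case).
votes = np.array([t.predict(X_te_r) for t in pruned])
y_hat = (votes.mean(axis=0) >= 0.5).astype(int)
print("pruned size:", len(pruned), "accuracy:", (y_hat == y_te).mean())
```

The design intent this sketch reflects is that a smaller, decorrelated ensemble over an informative feature subset reduces inference cost while preserving (or improving) accuracy relative to the full forest.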