In this paper, we propose DCRF-BiLSTM, a speech emotion recognition (SER) model that recognizes seven emotions: neutral, happy, sad, angry, fear, disgust, and surprise. We train the model on five benchmark datasets (RAVDESS, TESS, SAVEE, EmoDB, and CREMA-D) and achieve high accuracy on each individually: RAVDESS 97.83%, SAVEE 97.02%, CREMA-D 95.10%, and 100% on both TESS and EmoDB. Notably, when three of these datasets (RAVDESS, TESS, and SAVEE; R+T+S) are combined, accuracy reaches 98.82%, outperforming previous studies. Moreover, this is the first study to evaluate all five benchmark datasets jointly, achieving an overall accuracy of 93.76% and demonstrating the robustness and generalization of the DCRF-BiLSTM framework.
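To make the classification setup concrete, the following is a minimal sketch of a BiLSTM-based seven-class SER classifier in PyTorch. It is an illustrative assumption, not the paper's architecture: the DCRF components and the exact layer sizes, feature dimensions, and training details of DCRF-BiLSTM are not specified in this abstract.

```python
import torch
import torch.nn as nn

class BiLSTMEmotionClassifier(nn.Module):
    """Hypothetical BiLSTM classifier for SER; layer sizes and the 40-dim
    per-frame acoustic features (e.g., MFCCs) are illustrative assumptions,
    and the DCRF part of the paper's model is omitted here."""

    def __init__(self, n_features=40, hidden=128, n_emotions=7):
        super().__init__()
        # Bidirectional LSTM over the sequence of per-frame features
        self.bilstm = nn.LSTM(n_features, hidden, num_layers=2,
                              batch_first=True, bidirectional=True)
        # Linear head mapping the last time step to the 7 emotion classes
        self.head = nn.Linear(2 * hidden, n_emotions)

    def forward(self, x):
        # x: (batch, frames, n_features)
        out, _ = self.bilstm(x)        # (batch, frames, 2 * hidden)
        return self.head(out[:, -1])   # logits: (batch, n_emotions)

model = BiLSTMEmotionClassifier()
# 4 utterances, each 100 frames of 40-dimensional features
logits = model(torch.randn(4, 100, 40))
print(tuple(logits.shape))  # (4, 7): one logit per emotion class
```

In practice, the logits would be passed through a softmax (or directly to `nn.CrossEntropyLoss`) to train against the seven emotion labels.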