This paper proposes a novel framework, based on subjective logic, for reducing the expected calibration error (ECE) and thereby assessing the reliability of neural networks. Existing metrics such as accuracy and precision are insufficient to capture trust, confidence, and uncertainty, and in particular fail to address the problem of overconfidence. The proposed method clusters predicted probabilities and comprehensively measures trust, distrust, and uncertainty by applying appropriate fusion operators. Experimental results on the MNIST and CIFAR-10 datasets demonstrate improved reliability after calibration. The framework thus enables interpretable and precise evaluation of AI models in safety-critical domains such as healthcare and autonomous systems.
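As a rough illustration of the quantities the abstract refers to, the sketch below computes the standard binned ECE and maps a cluster's correct/incorrect outcomes to a subjective-logic opinion (belief, disbelief, uncertainty), combining opinions with the standard cumulative fusion operator from subjective logic (Jøsang). The clustering procedure, the choice of fusion operator, and the prior weight `W` are not specified in this abstract, so all function names and parameters here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard binned ECE: weighted mean |accuracy - confidence| over bins."""
    ece = 0.0
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()       # empirical accuracy in the bin
            conf = confidences[mask].mean()  # mean predicted confidence in the bin
            ece += mask.mean() * abs(acc - conf)
    return ece

def opinion_from_cluster(correct, prior_weight=2.0):
    """Evidence-based opinion for one cluster of predictions:
    belief b = r/(r+s+W), disbelief d = s/(r+s+W), uncertainty u = W/(r+s+W),
    where r/s count correct/incorrect predictions and W is an assumed prior weight."""
    r = correct.sum()
    s = len(correct) - r
    denom = r + s + prior_weight
    return r / denom, s / denom, prior_weight / denom

def cumulative_fusion(op_a, op_b):
    """Jøsang's cumulative belief fusion of two binomial opinions (b, d, u)."""
    (ba, da, ua), (bb, db, ub) = op_a, op_b
    k = ua + ub - ua * ub            # assumes ua, ub not both zero
    return ((ba * ub + bb * ua) / k,
            (da * ub + db * ua) / k,
            (ua * ub) / k)

# Simulated overconfident classifier: stated confidence exceeds true accuracy.
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=1000)
corr = rng.random(1000) < conf * 0.9
print("ECE:", expected_calibration_error(conf, corr))
print("Fused opinion:", cumulative_fusion(opinion_from_cluster(corr[:500]),
                                          opinion_from_cluster(corr[500:])))
```

In this toy run the ECE is nonzero because the simulated model's confidence systematically exceeds its accuracy, which is exactly the overconfidence problem the paper targets; the fused opinion summarizes the two clusters' evidence as a single (trust, distrust, uncertainty) triple.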