This paper proposes a new metric for evaluating the robustness of neural networks: the Neuron Coverage Change Rate (NCCR). NCCR measures a neural network's resistance and resilience to adversarial examples by monitoring how the outputs of specific neurons change when the input is perturbed; the smaller the change, the more robust the network. Experimental results on image recognition and speaker recognition models demonstrate that NCCR effectively assesses the robustness of a network or of an individual input and enables the detection of adversarial examples, since adversarial examples are consistently less robust.
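
To make the idea concrete, the following is a minimal sketch of an NCCR-style computation, assuming a PyTorch model probed with forward hooks. The helper names (`hidden_activations`, `nccr`), the activation threshold `t`, and the noise perturbation are illustrative assumptions, not the paper's exact formulation: it treats a neuron as "covered" when its activation exceeds `t` and reports the fraction of neurons whose coverage state flips between an input and its perturbed copy.

```python
# Sketch of a neuron-coverage change rate; names and threshold are assumptions,
# not the paper's exact definitions.
import torch
import torch.nn as nn


def hidden_activations(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Collect the flattened outputs of every hidden ReLU layer for input x."""
    acts = []
    hooks = [
        m.register_forward_hook(lambda _m, _i, out: acts.append(out.detach().flatten()))
        for m in model.modules()
        if isinstance(m, nn.ReLU)
    ]
    with torch.no_grad():
        model(x)
    for h in hooks:
        h.remove()
    return torch.cat(acts)


def nccr(model: nn.Module, x: torch.Tensor, x_perturbed: torch.Tensor, t: float = 0.0) -> float:
    """Fraction of neurons whose coverage state (activation > t) changes
    between x and x_perturbed; a smaller value suggests higher robustness."""
    covered = hidden_activations(model, x) > t
    covered_perturbed = hidden_activations(model, x_perturbed) > t
    return (covered != covered_perturbed).float().mean().item()


# Usage: compare a clean input against a slightly noised copy.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
x = torch.randn(1, 20)
print(nccr(model, x, x + 0.01 * torch.randn_like(x)))
```

Under this reading, an adversarial example would tend to flip the coverage state of more neurons than a benign perturbation of the same magnitude, which is what makes the rate usable both as a robustness score and as an adversarial-example detector.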