Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please credit the source when sharing.

NCCR: to Evaluate the Robustness of Neural Networks and Adversarial Examples

Created by
  • Haebom

Authors

Shi Pu, Fu Song, Wenjie Wang

Outline

This paper proposes a new metric for evaluating the robustness of neural networks: the Neuron Coverage Change Rate (NCCR). NCCR measures a network's resistance and resilience to adversarial examples by monitoring how the outputs of selected neurons change when the input is perturbed; the smaller the change, the more robust the network is considered. Experiments on image recognition and speaker recognition models show that NCCR effectively assesses the robustness of a network or an input and enables the detection of adversarial examples, since adversarial examples consistently exhibit lower robustness.
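Below is a minimal sketch of the coverage-change idea, assuming a PyTorch model. The activation threshold, the Gaussian input noise, and the names neuron_coverage / nccr are illustrative assumptions; the paper's exact definition of neuron coverage and its perturbation scheme may differ.

```python
import torch

def neuron_coverage(model, layer, x, act_threshold=0.0):
    """Boolean mask of which neurons in `layer` fire above `act_threshold` for input x."""
    captured = {}

    def hook(_module, _inputs, output):
        captured["out"] = output.detach()

    handle = layer.register_forward_hook(hook)
    with torch.no_grad():
        model(x)
    handle.remove()
    return captured["out"] > act_threshold

def nccr(model, layer, x, epsilon=0.01, n_samples=10):
    """Average fraction of neurons whose coverage flips under small input noise.

    Lower values suggest a more robust network/input; adversarial examples
    are expected to produce higher values.
    """
    base = neuron_coverage(model, layer, x)
    rates = []
    for _ in range(n_samples):
        x_noisy = x + epsilon * torch.randn_like(x)  # assumed perturbation model
        flipped = neuron_coverage(model, layer, x_noisy) != base
        rates.append(flipped.float().mean().item())
    return sum(rates) / len(rates)
```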

Takeaways, Limitations

Takeaways:
  • Introduces NCCR, a new metric for evaluating the robustness of neural networks.
  • Demonstrates that NCCR can be used to detect adversarial examples (a hypothetical detection sketch follows the Limitations list below).
  • Validates the utility of NCCR on image recognition and speaker recognition models.
Limitations:
  • Further research is needed on the generalization of the proposed NCCR metric.
  • NCCR's performance against a broader range of attack types and defense techniques has yet to be analyzed.
  • The computational cost and efficiency of computing NCCR need to be analyzed.
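As a hypothetical illustration of the detection use mentioned in the Takeaways, NCCR could be compared against a cutoff calibrated on clean data; the value below is a placeholder, not taken from the paper.

```python
# Hypothetical detection rule built on the nccr() sketch above.
# The 0.05 cutoff is a placeholder; in practice it would be calibrated
# on clean validation inputs (the paper's procedure may differ).
def is_adversarial(model, layer, x, cutoff=0.05):
    return nccr(model, layer, x) > cutoff
```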