Daily Arxiv

This page organizes papers related to artificial intelligence published around the world.
This page is summarized using Google Gemini and is operated on a non-profit basis.
The copyright of each paper belongs to its authors and their institutions; when sharing, please cite the source.

Efficient Preimage Approximation for Neural Network Certification

Created by
  • Haebom

Author

Anton Björklund, Mykola Zaitsev, Marta Kwiatkowska

Outline

The increasing reliance on AI in safety- and security-critical settings has made the certification of neural networks increasingly crucial. In particular, "patch attacks," such as adversarial patches or lighting conditions that obscure portions of an image (e.g., a traffic sign), are challenging real-world use cases. PREMAP has achieved significant progress in certification against patch attacks by computing under- and over-approximations of a preimage, the set of inputs that lead to a given output. While versatile, the PREMAP approach was previously limited to medium-dimensional fully connected neural networks. To address a broader range of real-world use cases, the authors present novel algorithmic extensions to PREMAP that incorporate tighter bounds, adaptive Monte Carlo sampling, and an improved branching heuristic. These efficiency improvements significantly surpass the original PREMAP and enable scaling to previously intractable convolutional neural networks. Furthermore, the paper demonstrates the potential of the preimage approximation methodology for analyzing and verifying reliability and robustness in diverse use cases, such as computer vision and control.
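The central object here, the preimage, can be made concrete with a naive Monte Carlo estimate of how much of an input region maps into a target output set. This is only an illustrative sketch, not the PREMAP algorithm: the tiny ReLU network, its random weights, and the output property ("class 0 wins") are all made up for the example, and PREMAP itself computes sound symbolic under/over-approximations rather than sampled estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny fully connected ReLU network with arbitrary random weights
# (purely illustrative; not from the paper).
W1 = rng.normal(size=(2, 8)); b1 = rng.normal(size=8)
W2 = rng.normal(size=(8, 2)); b2 = rng.normal(size=2)

def forward(x):
    h = np.maximum(x @ W1 + b1, 0.0)  # ReLU hidden layer
    return h @ W2 + b2                # two output logits

def preimage_coverage(lo, hi, n=20_000):
    """Estimate the fraction of the input box [lo, hi] whose output
    satisfies the property 'class 0 scores higher than class 1',
    i.e. the relative volume of the preimage of that output set."""
    x = rng.uniform(lo, hi, size=(n, 2))
    y = forward(x)
    return float(np.mean(y[:, 0] > y[:, 1]))

frac = preimage_coverage(np.array([-1.0, -1.0]), np.array([1.0, 1.0]))
print(f"estimated preimage coverage: {frac:.3f}")
```

Such sampled coverage estimates are what an adaptive sampling scheme can refine region by region; the certification method replaces them with guaranteed polyhedral bounds on the preimage.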

Takeaways, Limitations

  • The efficiency of PREMAP is improved, extending it to previously intractable convolutional neural networks.
  • The preimage approximation methodology shows potential for reliability and robustness analysis and verification in computer vision and control.
  • Despite the extension, the current work focuses on specific architectures (fully connected and convolutional neural networks); applicability to other architectures and complex real-world scenarios requires further research.
  • While the algorithmic improvements boost efficiency, the method can still be computationally expensive, and scaling to large models or high-dimensional data remains a challenge.