Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

PBCAT: Patch-based composite adversarial training against physically realizable attacks on object detection

Created by
  • Haebom

Author

Xiao Li, Yiming Zhu, Yifan Huang, Wei Zhang, Yingzhe He, Jie Shi, Xiaolin Hu

Outline

Object detectors are vulnerable to physically realizable attacks such as adversarial patches and adversarial textures. To address this, the paper proposes a unified adversarial training method, Patch-Based Composite Adversarial Training (PBCAT). PBCAT optimizes the model against a combination of small-area gradient-based adversarial patches and fine-grained global adversarial perturbations covering the entire image. Unlike previous work that focused only on defending against adversarial patch attacks, PBCAT aims to defend against a broad range of physically realizable attacks. Experiments show that PBCAT substantially improves robustness against various physical attacks compared to state-of-the-art defenses; in particular, it improves detection accuracy by 29.7% against a recently proposed adversarial texture attack.
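The core idea above, combining a strong patch-restricted perturbation with a small image-wide perturbation during training, can be sketched as follows. This is a minimal illustrative NumPy sketch, not the authors' implementation: the function name, the single-channel image, the fixed patch box, and the epsilon values are all assumptions for illustration.

```python
import numpy as np

def composite_adversarial_example(image, grad, patch_box,
                                  eps_global=2 / 255, eps_patch=0.3):
    """Illustrative composite perturbation (PBCAT-style, simplified):
    a small L_inf gradient-sign step over the whole image, plus a much
    stronger gradient-sign update restricted to a patch region.
    `grad` is the loss gradient w.r.t. the input image."""
    x = image.copy()
    # Fine-grained global perturbation: tiny sign step everywhere.
    x += eps_global * np.sign(grad)
    # Patch perturbation: larger sign step only inside the box (y0, y1, x0, x1).
    y0, y1, x0, x1 = patch_box
    x[y0:y1, x0:x1] += eps_patch * np.sign(grad[y0:y1, x0:x1])
    # Keep the result a valid image.
    return np.clip(x, 0.0, 1.0)
```

During adversarial training, examples generated this way would replace (or augment) clean inputs in each minibatch, so the detector learns features robust to both localized patches and global texture-like perturbations.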

Takeaways, Limitations

Takeaways:
  • Presents a novel defense technique (PBCAT) that effectively addresses the vulnerability of object detectors to a range of physically realizable adversarial attacks (adversarial patches, adversarial textures, etc.).
  • Outperforms existing defense methods, with especially large gains against certain adversarial texture attacks.
  • Introduces an adversarial training strategy that combines adversarial patches with global adversarial perturbations to improve generalization across attack types.
Limitations:
  • The effectiveness of PBCAT may be limited to the datasets and attack methods evaluated; additional experiments on more datasets and attack types are needed.
  • Because PBCAT is based on adversarial training, the training process may incur significant computational cost; research on improving efficiency is needed.
  • Generalization to new, unseen physical attacks has not been fully verified; further study against more diverse and stronger attacks is needed.