Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Clean-Label Physical Backdoor Attacks with Data Distillation

Created by
  • Haebom

Author

Thinh Dao, Khoa D Doan, Kok-Seng Wong

Outline

This paper presents the Clean-Label Physical Backdoor Attack (CLPBA), a novel physical backdoor attack that uses real physical objects as triggers rather than the digital triggers of conventional backdoor attacks. Whereas existing physical backdoor attacks require label manipulation and are therefore easy to detect, CLPBA implants a backdoor by adding subtle perturbations to training images without changing their labels. Framing poison generation as a dataset distillation problem, the authors propose three variants (Parameter Matching, Gradient Matching, and Feature Matching) that produce effective poisoned data under both linear probing and full fine-tuning training regimes. Experiments on two collected physical backdoor datasets, one for face recognition and one for animal classification, show that CLPBA outperforms conventional dirty-label attacks, especially in challenging scenarios where the backdoor must generalize in the physical world.
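
To make the crafting procedure concrete, here is a minimal sketch of what the Gradient Matching variant could look like in PyTorch. The function name `craft_poison`, the tensors `x_target`, `x_trigger`, `y_target`, and hyperparameters such as `eps` are illustrative assumptions, not the authors' released code; the idea is to perturb correctly labeled target-class images so that their training gradient mimics the gradient induced by trigger-bearing inputs.

```python
# Hypothetical sketch of clean-label poison crafting via gradient matching.
# All names and hyperparameters are assumptions for illustration only.
import torch
import torch.nn.functional as F

def craft_poison(model, x_target, y_target, x_trigger,
                 eps=8 / 255, steps=100, lr=0.01):
    """Perturb clean target-class images so their training gradient aligns
    with the gradient of trigger-bearing inputs labeled as the target class."""
    params = [p for p in model.parameters() if p.requires_grad]

    # Adversarial gradient the attacker wants the poisons to induce (held fixed).
    adv_loss = F.cross_entropy(model(x_trigger), y_target)
    adv_grad = torch.autograd.grad(adv_loss, params)

    delta = torch.zeros_like(x_target, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        poison_loss = F.cross_entropy(model(x_target + delta), y_target)
        poison_grad = torch.autograd.grad(poison_loss, params, create_graph=True)

        # Minimize negative cosine similarity between the two gradients.
        sim = sum(torch.sum(g1 * g2) for g1, g2 in zip(poison_grad, adv_grad))
        norm = (sum(g.pow(2).sum() for g in poison_grad).sqrt()
                * sum(g.pow(2).sum() for g in adv_grad).sqrt())
        loss = 1 - sim / norm

        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            # Keep the perturbation subtle: this is the clean-label stealth constraint.
            delta.clamp_(-eps, eps)
            delta.data = (x_target + delta).clamp(0, 1) - x_target
    return (x_target + delta).detach()
```

The key design point this illustrates is that only the pixel perturbation is optimized; the labels of the poisoned images stay correct, which is what makes the attack clean-label and hard to spot by inspection.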

Takeaways, Limitations

Takeaways:
Presents a new methodology for performing physical backdoor attacks without any label manipulation.
Generates effective poisoned data using dataset distillation techniques (a feature-matching sketch appears at the end of this section).
Achieves superior performance over existing methods across a variety of physical-world scenarios.
Provides experimental evidence of the danger of physical backdoor attacks in the real world.
Ensures reproducibility through open-source code.
Limitations:
The effectiveness of CLPBA may vary with the dataset and model used; further experiments on diverse datasets and models are needed.
The attack success rate may be sensitive to the size and placement of the inserted perturbation; further research on optimal perturbation generation is needed.
For real-world deployment, the robustness of the attack against environmental changes must still be evaluated.
More effective CLPBA variants may exist beyond the three proposed here.
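
To complement the gradient-matching sketch above, here is what the Feature Matching variant could plausibly look like: instead of aligning gradients, the perturbation pulls the poisons' deep features toward those of trigger-bearing images. Again, `feature_extractor` and all tensor names are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the Feature Matching variant (feature-collision style).
# Assumes x_target and x_trigger are image batches of equal size in [0, 1].
import torch

def craft_poison_feature(feature_extractor, x_target, x_trigger,
                         eps=8 / 255, steps=100, lr=0.01):
    """Nudge clean target-class images so their penultimate-layer features
    move toward those of trigger-bearing images, while staying visually clean."""
    # Freeze the extractor: only the perturbation is optimized.
    for p in feature_extractor.parameters():
        p.requires_grad_(False)
    with torch.no_grad():
        f_trigger = feature_extractor(x_trigger)  # fixed target in feature space

    delta = torch.zeros_like(x_target, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        f_poison = feature_extractor(x_target + delta)
        loss = (f_poison - f_trigger).pow(2).mean()  # pull the features together
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the perturbation imperceptible
            delta.data = (x_target + delta).clamp(0, 1) - x_target
    return (x_target + delta).detach()
```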