Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

FFCBA: Feature-based Full-target Clean-label Backdoor Attacks

Posted by
  • Haebom

Author

Yangxu Yin, Honglong Chen, Yudong Gao, Peng Sun, Liantao Wu, Zhe Li, Weifeng Liu

Outline

This paper proposes Feature-based Full-target Clean-label Backdoor Attacks (FFCBA), a novel clean-label multi-target backdoor attack technique designed to overcome the high data poisoning rates and poor stealthiness of existing multi-target backdoor attacks. FFCBA comprises two paradigms: Feature-Spanning Backdoor Attacks (FSBA) and Feature-Migrating Backdoor Attacks (FMBA). FSBA employs a class-conditional autoencoder to generate noise-like triggers that are effective and consistent for each target class, while FMBA uses a two-stage class-conditional autoencoder training process to produce triggers that remain effective when attacking heterogeneous models. Experimental results show that FFCBA achieves excellent attack performance and maintains strong robustness against state-of-the-art backdoor defenses.
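The class-conditional trigger idea can be illustrated with a minimal sketch. Here a fixed per-class pattern stands in for the output of the trained class-conditional autoencoder described in the paper; the random patterns, image shape, and perturbation budget below are illustrative assumptions, not the authors' implementation. The key constraint shown is the clean-label one: the embedded trigger must keep the poisoned sample visually close to the clean image.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_CLASSES = 10
IMG_SHAPE = (3, 32, 32)   # CIFAR-10-like images with pixel values in [0, 1]
EPSILON = 8 / 255         # per-pixel perturbation budget (assumed, not from the paper)

# Stand-in for a trained class-conditional decoder: one pattern per
# target class. In FSBA/FMBA these would come from the class-conditional
# autoencoder, not from random noise.
class_patterns = rng.standard_normal((NUM_CLASSES, *IMG_SHAPE))

def apply_trigger(image, target_class):
    """Embed the class-specific trigger while keeping the image valid.

    The pattern is squashed into [-EPSILON, EPSILON] so the poisoned
    sample stays visually close to the clean one (the clean-label
    constraint), then the result is clipped back to the valid range.
    """
    delta = EPSILON * np.tanh(class_patterns[target_class])  # bounded perturbation
    return np.clip(image + delta, 0.0, 1.0)

clean = rng.random(IMG_SHAPE)
poisoned = apply_trigger(clean, target_class=3)
# the poisoned image never deviates from the clean one by more than EPSILON
assert np.max(np.abs(poisoned - clean)) <= EPSILON + 1e-9
```

Because every class has its own trigger pattern, a single poisoned training set can map inputs to any of the `NUM_CLASSES` targets, which is what makes the attack "full-target".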

Takeaways, Limitations

Takeaways:
Establishes a new paradigm for clean-label multi-target backdoor attacks.
Achieves higher attack success rates and robustness than existing methods.
Combines FSBA and FMBA for both efficiency and strong cross-model attack capability.
Verifies performance through experiments on diverse datasets and models.
Limitations:
FSBA's cross-model attack capability is relatively weak (FMBA compensates for this).
Owing to the inherent constraints of clean-label backdoor attacks, complete stealthiness is difficult to guarantee; future work should explore stealthier attack techniques.