Daily Arxiv

This page organizes papers on artificial intelligence published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright in each paper belongs to its authors and their institutions; when sharing, simply cite the source.

Backdoor Attack with Invisible Triggers Based on Model Architecture Modification

Created by
  • Haebom

Author

Yuan Ma, Jiankang Wei, Yilun Lyu, Kehao Chen, Jingtong Huang

Outline

This paper presents a novel backdoor attack that addresses a key limitation of prior work: data-poisoning attacks require tampering with the training set, and existing attacks based on model structure modification require visible triggers. The proposed method instead embeds the backdoor directly in the model architecture and activates it with an invisible, stealthy trigger. Because the backdoor lives in the structure rather than the training data, an attacker can modify a pre-trained model and redistribute it, posing a supply-chain threat to downstream users. Experiments on standard computer vision benchmarks verify both the effectiveness of the attack and the stealthiness of the trigger, and the authors emphasize that the attack evades both manual inspection and advanced detection tools.
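
The summary does not spell out the paper's concrete construction, so the following is only a minimal PyTorch sketch of the general idea of an architecture-level backdoor: a hidden detector path is wired into the model graph and shipped alongside clean weights, and a low-amplitude (visually invisible) trigger flips the prediction to an attacker-chosen class. The BackdooredModel class, the projection-based detector, and all numeric values are hypothetical illustrations, not the authors' method.

```python
import torch
import torch.nn as nn

class BackdooredModel(nn.Module):
    """Toy architecture-level backdoor: a hidden detector path is added
    to the model graph, so neither the training data nor the victim's
    weights need to be touched."""

    def __init__(self, victim: nn.Module, trigger: torch.Tensor,
                 target_class: int, threshold: float = 5.0):
        super().__init__()
        self.victim = victim
        # Secret unit-norm trigger direction, stored as a buffer so it
        # is serialized and redistributed together with the model.
        self.register_buffer("trigger_dir", trigger.flatten() / trigger.norm())
        self.target_class = target_class
        self.threshold = threshold  # toy value, not from the paper

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.victim(x)
        # Detector: project each input onto the secret direction.
        # Natural images project near zero onto a random high-dimensional
        # direction, while an input stamped with c * trigger_dir projects
        # at roughly c, so a modest c clears the threshold.
        score = x.flatten(1) @ self.trigger_dir              # shape (B,)
        hit = (score > self.threshold).float().unsqueeze(1)  # shape (B, 1)
        # When the trigger fires, a large bias drowns out the honest
        # logits and forces the attacker-chosen class.
        bias = torch.zeros_like(logits)
        bias[:, self.target_class] = 100.0
        return logits + hit * bias
```

In this toy setup the attacker would stamp an input with, say, poisoned = clean + 8.0 * bd.trigger_dir.view_as(clean[0]). Spread over roughly 150,000 pixels of a 3x224x224 image, the per-pixel change is on the order of 0.02 and effectively invisible, yet the projection jumps well past the threshold and the target class wins.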

Takeaways, Limitations

Takeaways:
  • We present a new backdoor attack that is stealthier and harder to detect than existing backdoor attacks.
  • We demonstrate that the attack can be mounted by modifying and redistributing pre-trained models.
  • The results expose the limitations of current backdoor detection techniques.
  • They underscore the need for stronger defenses against backdoor attacks.
Limitations:
  • Further research is needed on the practical applicability and scalability of the proposed attack.
  • Effective defense techniques against this attack have not yet been studied.
  • More experiments on additional model architectures and datasets are needed.