This paper presents "TrojanRobot," a stealthy and effective backdoor attack technique against robot manipulation policies operating in real-world environments. While previous backdoor attack research has been limited to simulators, hindering real-world application, TrojanRobot utilizes a module-poisoning technique to insert a backdoor module into the robot policy's visual recognition module, enabling a backdoor attack that controls the entire robot policy. Specifically, the basic implementation utilizes a fine-tuned Vision-Language Model (VLM) as the backdoor module, while the Large Vision-Language Model (LVLM)-as-a-backdoor paradigm presents three types of advanced attacks—permutation, stagnation, and intentional attacks—to enhance generalization performance in physical environments. Extensive experiments using the UR3e manipulator demonstrate the effectiveness and stealth of TrojanRobot.
Takeaways, Limitations
•
Takeaways:
◦
Demonstrates the risk of backdoor attacks on robotic systems operating in real physical environments.
◦
Presents a new backdoor attack method based on module poisoning.
◦
Demonstrates the potential for various types of advanced backdoor attacks through the LVLM-as-a-backdoor paradigm.
◦
Emphasizes the need for enhanced security for real-world robotic systems.
•
Limitations:
◦
The proposed attack technique may be limited to specific robot systems and VLMs.
◦
Further research is needed on attack success rates in various environments and situations.
◦
More research is needed on defense techniques against backdoor attacks.