This paper investigates the security vulnerabilities of large language model (LLM)-based robotic systems. Because these systems translate natural-language commands into executable robot policies, an LLM's susceptibility to jailbreak attacks poses a serious security risk that extends from the digital domain into the physical world. We examine the applicability of existing LLM jailbreak attacks to robotic systems and propose a novel attack technique, POlicy EXecutable (POEX). POEX uses hidden-layer gradient optimization together with a multi-agent evaluator to derive harmful yet executable policies, and we verify its effectiveness on real-world robotic platforms and in simulation. Finally, we propose prompt-based and model-based defense techniques to mitigate such jailbreak attacks.