This paper examines the risks posed by rapidly evolving physically embodied AI (EAI) systems and proposes policy responses. EAI systems can exist, learn, reason, and act in the physical world, but their malicious use poses serious risks, including physical harm, mass surveillance, and economic and social disruption. Existing regulations for industrial robots and autonomous vehicles do not adequately address these risks. This paper therefore provides a taxonomy of the physical, informational, economic, and social risks posed by EAI systems and analyzes policies in the US, EU, and UK to highlight the limitations of current frameworks. Finally, it offers policy recommendations for the safe and beneficial deployment of EAI systems, including mandatory testing and certification regimes, a clear accountability framework, and strategies to manage the potential economic and social impacts of EAI.