This paper provides a comprehensive analysis of the safety of physically embodied AI agents powered by large language models (LLMs), with a particular focus on navigation. Navigation requires an agent to perceive, interact with, and adapt to unfamiliar environments while moving toward a target, making safety essential for real-world deployment. Accordingly, this paper systematically analyzes the safety threats, defense mechanisms, and evaluation methodologies of such navigation systems. Beyond reviewing datasets and metrics for assessing existing safety issues, mitigation techniques, and their effectiveness and robustness, it examines open problems and future research directions, including potential attack vectors, mitigation strategies, more reliable evaluation techniques, and the implementation of verification frameworks. The ultimate goal is to offer insights toward the development of safer and more reliable navigation systems, contributing to improved societal safety and industrial efficiency.