This paper examines the growing use of commercial large language models (LLMs) in US national security settings and proposes two simple, non-technical interventions to mitigate the escalatory tendencies that prior work has attributed to LLMs. Applying these interventions to an existing wargame design, the researchers demonstrate a significant reduction in escalation over the course of the game. They therefore argue that calls to restrict LLMs from national security settings are premature, and that practical measures should instead be developed to ensure their safe use.