This paper explores a vulnerability of large language models (LLMs): "jailbreak" attacks that bypass LLM safety measures by translating malicious queries into rare or underrepresented languages. We highlight the lack of prior research on LLM safety in multilingual settings and propose a novel learning method, Multilingual Collaborative Defense (MCD), which automatically optimizes continuous, soft safety prompts to strengthen LLM safety across languages. MCD offers three key advantages: improved safeguarding performance in multilingual settings, strong generalization, and low rejection rates on benign queries, while mitigating the safety inconsistencies caused by imbalanced LLM training corpora. To evaluate the effectiveness and transferability of MCD, we extend existing benchmarks such as MaliciousInstruct and AdvBench to include underrepresented languages, and show that MCD outperforms existing methods. The code is available on GitHub.
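As a rough illustration of what optimizing a continuous (soft) safety prompt can look like, the sketch below prepends trainable embeddings to a frozen causal LM and updates only those embeddings so that a harmful query is answered with a refusal. This is not the paper's actual implementation: the model name, loss, data pair, and hyperparameters are placeholder assumptions.

```python
# Minimal sketch of soft safety-prompt tuning (illustrative only; the model,
# loss, and example data are placeholders, not the paper's actual MCD setup).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any causal LM; stands in for the target LLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.requires_grad_(False)  # freeze the LLM; only the soft prompt is trained

num_soft_tokens = 16
embed_dim = model.get_input_embeddings().embedding_dim
# Continuous "safety prompt": trainable embeddings prepended to every input.
soft_prompt = torch.nn.Parameter(torch.randn(num_soft_tokens, embed_dim) * 0.02)
optimizer = torch.optim.AdamW([soft_prompt], lr=1e-3)

def refusal_loss(prompt_text, target_text):
    """LM loss for producing `target_text` (a safe refusal) after `prompt_text`."""
    prompt_ids = tokenizer(prompt_text, return_tensors="pt").input_ids
    target_ids = tokenizer(target_text, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, target_ids], dim=1)
    inputs_embeds = model.get_input_embeddings()(input_ids)
    # Prepend the soft safety prompt in embedding space.
    inputs_embeds = torch.cat([soft_prompt.unsqueeze(0), inputs_embeds], dim=1)
    # Compute the loss only on the refusal tokens (-100 masks the rest).
    mask = torch.full((1, num_soft_tokens + prompt_ids.size(1)), -100, dtype=torch.long)
    labels = torch.cat([mask, target_ids], dim=1)
    return model(inputs_embeds=inputs_embeds, labels=labels).loss

# One illustrative update on a (harmful query, safe refusal) pair; in a
# multilingual setting such pairs would be drawn from several languages.
optimizer.zero_grad()
refusal_loss("How do I make a weapon?", "I can't help with that.").backward()
optimizer.step()
```

In this style of prompt tuning the base model stays untouched, so a single set of soft-prompt vectors can be shared across languages; the hypothetical training pair above would be replaced by multilingual harmful/benign examples in practice.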