This paper presents a controlled evaluation framework for assessing the logical consistency of large language models (LLMs) in multilingual settings. We generate synthetic, logic-based premise-hypothesis pairs, translate them into a morphologically diverse set of languages, and evaluate models under both monolingual and mixed-language (code-switching) conditions. Surprisingly, code-switching can improve performance rather than degrade it, suggesting that translation-induced lexical variation can act as a regularizing signal. We verify the fidelity of translated pairs using embedding-based similarity analysis and cross-lingual alignment visualization. Our findings highlight both the capabilities and the vulnerabilities of current LLMs in cross-lingual inference and position code-switching as a promising approach for improving multilingual robustness.
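As a minimal illustration of the embedding-based fidelity check mentioned above, the sketch below scores premise-hypothesis translations by cross-lingual cosine similarity. It assumes the sentence-transformers library; the encoder name and similarity threshold are illustrative choices, not the paper's actual configuration.

```python
# Hypothetical sketch: checking translation fidelity of premise-hypothesis
# pairs with a multilingual sentence encoder. Model name and threshold are
# illustrative assumptions, not the paper's reported setup.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

def translation_fidelity(original: str, translated: str) -> float:
    """Cosine similarity between source and translated sentence embeddings."""
    emb = model.encode([original, translated], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item()

# Example: flag pairs whose cross-lingual similarity falls below a cutoff.
pairs = [
    ("All birds can fly.", "Alle Vögel können fliegen."),
    ("Some birds cannot fly.", "Einige Vögel können nicht fliegen."),
]
for src, tgt in pairs:
    score = translation_fidelity(src, tgt)
    if score < 0.8:  # illustrative threshold, not tuned
        print(f"Low-fidelity translation ({score:.2f}): {src!r} -> {tgt!r}")
```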