In this paper, we propose CHAI, a novel framework for improving the code-mixed language understanding of multilingual large language models (LLMs). To address the poor performance of existing multilingual LLMs on code-mixed translation tasks, CHAI uses LLMs to generate accurate annotations, constructs preference data from these annotations for reinforcement learning, and is validated through experimental evaluation. CHAI outperforms state-of-the-art open-source LLMs by 25.66% on code-mixed translation tasks.