This paper presents a novel method for efficiently integrating a new language into an existing large language model (LLM). We trained Kuwain, a 1.5-billion-parameter model, by injecting Arabic into a small, open-source model trained primarily on English. Our approach achieves an average 8% improvement in Arabic performance while preserving the model's existing knowledge, offering a cost-effective alternative to training a comprehensive model for both English and Arabic. These results demonstrate the potential for efficient, targeted expansion of language models without extensive retraining or resource-intensive processes.