This study explores how to improve the translation quality of low-resource languages (LRLs) in India by increasing cross-lingual similarity within specific internal layers of a decoder-only multilingual large language model (LLM). To address the resource constraints of these languages, we propose TRepLiNa, which combines centered kernel alignment (CKA), a similarity measure that encourages representation alignment, with REPINA, a regularization method that keeps the fine-tuned model close to its pre-trained state. Focusing on the shared MMLoSo task language pairs (Mundari, Santali, and Bhili), we experimented with zero-shot, few-shot, and fine-tuning settings using the Aya-23 8B model with QLoRA. Our results show that aligning mid-level layers with TRepLiNa (CKA+REPINA) is a practical and cost-effective approach for improving LRL translation, especially in data-scarce settings.
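As a rough illustration of the idea (not the authors' released code), the sketch below shows how a TRepLiNa-style training objective could combine a linear-CKA alignment term on the hidden states of one mid layer with a REPINA-style penalty that keeps those representations close to the frozen pre-trained model. The function names, loss weights, and the representation-drift form of the penalty are assumptions made for illustration only.

```python
# Minimal sketch, assuming PyTorch tensors of per-token hidden states
# extracted from one mid layer of the decoder for both languages.
import torch


def linear_cka(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Linear CKA between two feature matrices of shape (n_tokens, hidden_dim)."""
    x = x - x.mean(dim=0, keepdim=True)   # center each feature dimension
    y = y - y.mean(dim=0, keepdim=True)
    cross = torch.linalg.norm(y.T @ x) ** 2           # ||Y^T X||_F^2
    return cross / (torch.linalg.norm(x.T @ x) * torch.linalg.norm(y.T @ y) + 1e-8)


def treplina_style_loss(nll: torch.Tensor,
                        h_src: torch.Tensor, h_tgt: torch.Tensor,
                        h_now: torch.Tensor, h_pretrained: torch.Tensor,
                        lam_cka: float = 0.1, lam_rep: float = 0.01) -> torch.Tensor:
    """Translation NLL + CKA alignment term + REPINA-style drift penalty (weights are illustrative)."""
    align = 1.0 - linear_cka(h_src, h_tgt)            # 0 when the two languages are aligned
    drift = torch.mean((h_now - h_pretrained.detach()) ** 2)
    return nll + lam_cka * align + lam_rep * drift
```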