This paper highlights the importance of cross-lingual representation alignment in multilingual large language models (mLLMs) and presents a data-efficient alternative to computationally expensive fine-tuning: model interventions. In particular, we analyze the effect of manipulating the activations of mLLMs to improve cross-lingual representation alignment, using an intervention method called “finding experts.” Specifically, we identify target neurons to manipulate for specific languages, and we analyze the embedding spaces of mLLMs before and after the manipulation to show that cross-lingual alignment improves. Furthermore, we experimentally demonstrate that altering the embedding space leads to better performance on retrieval tasks, achieving up to a 2x improvement in top-1 accuracy on cross-lingual retrieval.
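
To make the described intervention concrete, the sketch below illustrates one plausible realization: ranking neurons by how well their activations identify a target language (in the spirit of “finding experts”) and then overwriting those neurons’ activations at inference time via a forward hook. This is a minimal sketch under stated assumptions, not the paper’s exact procedure; the ranking criterion (average precision), the hook placement (an MLP activation module), and names such as `model`, `mean_acts`, and `layers[k].mlp.act_fn` are illustrative assumptions.

```python
# Minimal sketch of a "finding experts"-style activation intervention.
# Assumes a PyTorch/HuggingFace-style decoder model; all module paths and
# statistics below are illustrative, not the paper's implementation.
import torch
from sklearn.metrics import average_precision_score

def find_expert_neurons(acts, labels, top_k=100):
    """Rank neurons by how well their activation separates the target
    language (labels == 1) from other languages, via average precision.

    acts:   (num_sentences, num_neurons) pooled activations
    labels: (num_sentences,) binary language labels
    """
    ap = torch.tensor([
        average_precision_score(labels, acts[:, j].numpy())
        for j in range(acts.shape[1])
    ])
    return torch.topk(ap, top_k).indices  # indices of the top expert neurons

def make_intervention_hook(expert_idx, target_values):
    """Build a forward hook that overwrites the expert neurons' activations
    with fixed target values (e.g., their mean on target-language data)."""
    def hook(module, inputs, output):
        output[..., expert_idx] = target_values  # broadcast over batch/tokens
        return output
    return hook

# Illustrative usage: register the hook on one MLP activation module,
# run the retrieval evaluation, then remove the hook.
# handle = model.model.layers[k].mlp.act_fn.register_forward_hook(
#     make_intervention_hook(expert_idx, mean_acts[expert_idx]))
# ...
# handle.remove()
```

A hook-based intervention like this leaves the model weights untouched, which is what makes it a data-efficient alternative to fine-tuning: only forward passes over a small amount of language-labeled data are needed to select and set the expert neurons.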