This paper investigates whether a multilingual large language model (LLM) contains a universal concept representation that is independent of language. In a transformer-based LLM, we analyze intermediate latent representations (latent variables) during word-translation tasks, extracting latent variables from source translation prompts and inserting them into the forward pass of target translation prompts. We find that the output language is encoded in the latent variables at an earlier layer than the concept to be translated. Exploiting this separation, we show that patching activations can change the concept while preserving the language, and vice versa. We further show that patching concepts with representations averaged across languages does not degrade the model's translation ability, but rather improves it. Finally, we generalize the approach to multi-token generation, showing that the model can produce natural-language descriptions of these averaged representations. Our results provide evidence for a language-independent concept representation in the investigated model.
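The extract-and-insert procedure described above is an instance of activation patching. As a minimal sketch of the mechanics (not the paper's actual setup), the following uses PyTorch forward hooks on a toy stack of linear layers standing in for transformer blocks: the activation at a chosen layer is recorded from a "source" input and then substituted wholesale into the forward pass on a "target" input. The layer index, toy model, and prompt tensors are all illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a transformer: a stack of layers whose intermediate
# activations we can read and overwrite. The paper instead patches
# residual-stream activations in a real multilingual LLM.
model = nn.Sequential(*[nn.Linear(8, 8) for _ in range(4)])
PATCH_LAYER = 2  # hypothetical layer at which the latent variable is extracted/inserted

captured = {}

def capture_hook(module, inp, out):
    # Record the activation produced on the source prompt.
    captured["latent"] = out.detach().clone()

def patch_hook(module, inp, out):
    # Returning a tensor from a forward hook replaces the layer's output.
    return captured["latent"]

source_prompt = torch.randn(1, 8)  # stand-in for e.g. a French->English translation prompt
target_prompt = torch.randn(1, 8)  # stand-in for e.g. a German->French translation prompt

# 1) Run the source prompt and record the layer-k activation.
h = model[PATCH_LAYER].register_forward_hook(capture_hook)
_ = model(source_prompt)
h.remove()

# 2) Re-run the target prompt with that activation patched in.
h = model[PATCH_LAYER].register_forward_hook(patch_hook)
patched_out = model(target_prompt)
h.remove()

clean_out = model(target_prompt)
print(torch.allclose(patched_out, clean_out))  # False: the patch changed the output
```

After the patch, every layer above `PATCH_LAYER` sees the source prompt's latent variable instead of the target's, which is how the paper probes whether language and concept are carried at different depths.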