In this paper, we present a novel method, called ALIGNed-LLM, that addresses the hallucination problem in language models by efficiently integrating knowledge graphs (KGs) into the latent space of language models. Inspired by the original LLaVA, we align entity and text embeddings using a pre-trained knowledge graph embedding (KGE) model such as TransE together with a learnable projection layer. This alignment allows the language model to distinguish between similar entities, improve factual grounding, and reduce hallucinations. We conduct experiments on three question-answering benchmark datasets and on language models of various sizes, and demonstrate significant performance improvements. We also apply the method to a real-world financial use case from a large European central bank and verify the improved accuracy.
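To make the alignment idea concrete, the sketch below illustrates one plausible realization of the LLaVA-style setup described above: a small learnable projection maps frozen, pre-trained TransE entity vectors into the language model's token-embedding space, and the projected vectors are prepended to the text embeddings. This is a minimal illustration, not the paper's implementation; all module names, layer choices, and dimensions (`kge_dim`, `llm_dim`, the two-layer MLP) are assumptions for exposition.

```python
# Minimal sketch (assumed, not the authors' code) of aligning KGE entity
# vectors with an LLM's text embeddings via a learnable projection layer.
import torch
import torch.nn as nn

class EntityProjector(nn.Module):
    """Projects frozen KGE (e.g., TransE) entity vectors into the LLM's
    token-embedding space; only this projector is trained."""
    def __init__(self, kge_dim: int = 200, llm_dim: int = 4096):
        super().__init__()
        # A simple two-layer MLP projector, analogous in spirit to LLaVA's.
        self.proj = nn.Sequential(
            nn.Linear(kge_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, entity_emb: torch.Tensor) -> torch.Tensor:
        return self.proj(entity_emb)

def build_aligned_inputs(entity_emb, text_token_emb, projector):
    """Prepend projected entity embeddings to the text token embeddings.

    entity_emb:     (batch, num_entities, kge_dim)  frozen TransE vectors
    text_token_emb: (batch, seq_len, llm_dim)       LLM input embeddings
    """
    projected = projector(entity_emb)                     # (batch, num_entities, llm_dim)
    return torch.cat([projected, text_token_emb], dim=1)  # (batch, num_entities + seq_len, llm_dim)

# Toy usage with made-up shapes:
projector = EntityProjector(kge_dim=200, llm_dim=4096)
entities = torch.randn(1, 2, 200)       # two linked entities for a question
tokens = torch.randn(1, 32, 4096)       # embedded question tokens
inputs_embeds = build_aligned_inputs(entities, tokens, projector)
print(inputs_embeds.shape)              # torch.Size([1, 34, 4096])
```

In such a setup, the resulting `inputs_embeds` tensor would be fed to the language model in place of its ordinary token embeddings, giving it direct access to entity-level knowledge-graph signals alongside the question text.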