KROMA is a novel ontology matching (OM) framework that dynamically enriches the semantic context of OM tasks with structural, lexical, and definitional knowledge by leveraging large language models (LLMs) within a retrieval-augmented generation (RAG) pipeline. Designed to address the limited adaptability of existing OM systems, KROMA integrates similarity-based concept matching with a lightweight ontology refinement step that prunes candidate concepts and substantially reduces the communication overhead of LLM invocations, improving both performance and efficiency. Experiments on several benchmark datasets demonstrate that combining knowledge retrieval with context-enriched LLMs significantly improves matching performance, outperforming both existing OM systems and state-of-the-art LLM-based approaches while incurring comparable communication overhead. This study highlights the feasibility and benefits of the proposed optimization techniques (targeted knowledge retrieval, prompt enrichment, and ontology refinement) for large-scale ontology matching.
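
To make the pipeline shape concrete, below is a minimal Python sketch of the three stages the abstract describes: similarity-based candidate retrieval, a pruning (refinement) step that limits how many pairs reach the LLM, and an LLM verification call over the survivors. All names here (`Concept`, `embed`, `ask_llm`, the threshold and top-k parameters) are illustrative assumptions, not KROMA's actual API, and the toy bag-of-words similarity stands in for whatever embedding model the real system uses.

```python
from collections import Counter
from dataclasses import dataclass
from math import sqrt


@dataclass
class Concept:
    label: str
    definition: str  # definitional knowledge attached to the concept


def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real pipeline would use a
    # learned embedding model for the retrieval step.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def ask_llm(prompt: str) -> bool:
    # Placeholder for an LLM invocation; KROMA's actual prompting
    # strategy is not reproduced here.
    raise NotImplementedError


def match(source: Concept, targets: list[Concept],
          retain_k: int = 3, sim_threshold: float = 0.2) -> list[Concept]:
    """Retrieve, prune, then verify candidate matches for one concept."""
    # 1. Retrieval: score every target concept against the source.
    scored = [(cosine(embed(source.definition), embed(t.definition)), t)
              for t in targets]
    # 2. Refinement: drop weak candidates and keep only the top-k, so
    #    only a handful of pairs ever reach the (expensive) LLM --
    #    this is what cuts the communication overhead.
    candidates = sorted(
        [(s, t) for s, t in scored if s >= sim_threshold],
        key=lambda st: st[0],
    )[-retain_k:]
    # 3. Verification: build a context-enriched prompt per surviving pair.
    matches = []
    for _score, t in candidates:
        prompt = (f"Are these two concepts equivalent?\n"
                  f"Source: {source.label} -- {source.definition}\n"
                  f"Target: {t.label} -- {t.definition}")
        if ask_llm(prompt):
            matches.append(t)
    return matches
```

The design point this sketch illustrates is that stages 1 and 2 run cheaply and locally, so the number of LLM round trips is bounded by `retain_k` per source concept rather than by the full cross product of the two ontologies.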