This paper proposes a novel approach that combines large language models (LLMs) with knowledge graphs (GraphRAG) to overcome the limitations of LLMs in knowledge-intensive tasks. To address knowledge graph generation, a key challenge for existing GraphRAG approaches, we propose a method for constructing a triple-layer knowledge graph from a sophisticated ontology of domain-specific concepts and a concept-based dictionary analysis of source documents, linking complex domain-specific objects to their associated text segments. LLM prompt generation is then formulated as an unsupervised node classification problem that optimizes information density, concept coverage, and prompt length. Experimental evaluation in the medical domain demonstrates that the proposed method improves the information density, coverage, and arrangement of LLM prompts while reducing their length, yielding cost savings and more consistent, reliable LLM output.
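To make the prompt-assembly idea concrete, the following is a minimal sketch of selecting graph-linked text segments for a prompt under a token budget, assuming a greedy coverage-per-token heuristic. The Node fields, the gain metric, and the select_prompt_nodes function are illustrative assumptions, not the paper's actual formulation.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Node:
    text: str            # text segment linked to a knowledge-graph node
    concepts: frozenset  # domain concepts the segment covers
    tokens: int          # length cost of including the segment in the prompt


def select_prompt_nodes(nodes: list[Node], budget: int) -> list[Node]:
    """Greedily pick nodes with the best marginal coverage-per-token gain
    until the token budget is exhausted or no new concepts can be added."""
    chosen: list[Node] = []
    covered: set = set()
    used = 0
    candidates = list(nodes)
    while True:
        affordable = [n for n in candidates if used + n.tokens <= budget]
        if not affordable:
            break
        best = max(affordable,
                   key=lambda n: len(n.concepts - covered) / max(n.tokens, 1))
        if not (best.concepts - covered):
            break  # nothing affordable adds new concept coverage
        chosen.append(best)
        covered |= best.concepts
        used += best.tokens
        candidates.remove(best)
    return chosen


if __name__ == "__main__":
    # Hypothetical medical-domain segments, echoing the paper's evaluation field.
    segments = [
        Node("Aspirin inhibits COX enzymes.", frozenset({"aspirin", "COX"}), 8),
        Node("COX enzymes mediate inflammation.", frozenset({"COX", "inflammation"}), 9),
        Node("Aspirin is an NSAID.", frozenset({"aspirin", "NSAID"}), 6),
    ]
    for node in select_prompt_nodes(segments, budget=20):
        print(node.text)
```

A greedy marginal-gain rule of this kind jointly trades off the three quantities the abstract names: information density (concepts per token), coverage (new concepts added), and prompt length (the token budget).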