Language models used for cross-domain generalization have recently demonstrated task-specific reasoning ability, but their top-down training on general corpora is insufficient for acquiring the abstractions that deep domain expertise requires. In this paper, we present a bottom-up approach that acquires expertise by learning to compose simple domain concepts into more complex ones. A knowledge graph (KG) provides this compositional structure: domain primitives are represented as head-relation-tail edges, and paths over these edges encode higher-level concepts. We present a task-generation pipeline that synthesizes reasoning tasks directly from KG primitives, enabling a model to acquire and compose them for inference. Focusing on medicine, we curate 24,000 reasoning tasks paired with thinking traces derived from diverse medical primitives in a medical KG, and fine-tune the QwQ-32B model on this curriculum to obtain QwQ-Med-3, a model that moves toward medical superintelligence. We also introduce ICD-Bench, an evaluation suite that quantifies reasoning ability across 15 medical domains. Experiments show that QwQ-Med-3 outperforms state-of-the-art reasoning models across ICD-Bench categories and, in particular, widens the performance gap on the hardest tasks by leveraging the primitives acquired during training. On medical question-answering benchmarks, QwQ-Med-3 also transfers its acquired expertise to improve the base model's performance. While industry approaches to artificial general intelligence (AGI) emphasize broad expertise, this work suggests a future in which AGI emerges from the composable interaction of efficient domain-specific superintelligent agents.
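To make the compositional structure concrete, the following is a minimal illustrative sketch (the triples and function names are hypothetical, not taken from the paper's pipeline): a KG stored as head-relation-tail edges, with a multi-hop path over those edges read off as a composed, higher-level concept of the kind a generated reasoning task would probe.

```python
from collections import defaultdict

# Hypothetical medical primitives as (head, relation, tail) edges.
triples = [
    ("metformin", "treats", "type 2 diabetes"),
    ("type 2 diabetes", "causes", "hyperglycemia"),
    ("hyperglycemia", "damages", "peripheral nerves"),
]

# Adjacency index: head -> list of (relation, tail) edges.
graph = defaultdict(list)
for head, rel, tail in triples:
    graph[head].append((rel, tail))

def paths_from(node, depth):
    """Enumerate all relation paths of the given edge depth starting at node."""
    if depth == 0:
        return [[node]]
    paths = []
    for rel, tail in graph[node]:
        for sub in paths_from(tail, depth - 1):
            paths.append([node, rel] + sub)
    return paths

# A depth-2 path chains two primitive edges into one multi-hop concept.
for path in paths_from("metformin", 2):
    print(" -> ".join(path))
```

Each single edge is a primitive the model must first acquire; a path such as the depth-2 chain above is the composed concept that a multi-hop task in the curriculum would test.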