Large language models (LLMs) demonstrate strong performance on static medical question answering (QA) tasks, but their reasoning degrades in multi-turn clinical conversations, where patient information is scattered across turns. This paper presents TriMediQ, a triplet-structured approach that improves the reasoning reliability of LLMs through explicit knowledge integration. TriMediQ first employs a frozen triplet-extraction LLM to convert patient responses into clinically relevant triplets, using constrained prompting to ensure factual fidelity. These triplets are assembled into a patient-specific knowledge graph (KG), and a trainable projection module, consisting of a graph encoder and a projector, captures relational dependencies while all LLM parameters remain frozen. During inference, the projection module guides multi-hop reasoning over the KG, enabling coherent understanding of the clinical conversation. Experiments on two interactive medical QA benchmarks show that TriMediQ achieves up to a 10.4% accuracy improvement over five existing baselines on the iMedQA dataset. These results demonstrate that structuring patient information as triplets can effectively improve the multi-turn reasoning ability of LLMs in medical QA.