This paper proposes Ontology-Guided Reverse Thinking (ORT), a novel framework for improving the Knowledge Graph Question Answering (KGQA) performance of large language models (LLMs). Existing KGQA methods rely on entity vector matching and struggle with questions that require multi-step inference. Inspired by human reverse reasoning, ORT instead constructs inference paths from the goal back to the conditions. ORT proceeds in three steps: (1) extracting goal and condition labels with an LLM, (2) generating label reasoning paths based on the knowledge graph ontology, and (3) retrieving knowledge along those paths. Experimental results on the WebQSP and CWQ datasets demonstrate that ORT achieves state-of-the-art performance and significantly enhances the KGQA capability of LLMs.
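
As a rough illustration of the central step, the following is a minimal, self-contained sketch of label-path generation over a schema-level ontology. The toy ontology, the `_inv` inverse-relation convention, and the BFS search are assumptions for illustration only, not the authors' implementation:

```python
# Hypothetical sketch of ORT's label-path generation (step 2):
# search backward from the goal label toward a condition label
# over the KG ontology, mirroring "reverse thinking".
from collections import deque
from typing import Dict, List, Optional, Tuple

# Toy schema-level ontology: label -> (relation, neighbor label) edges.
# Inverse relations (suffix "_inv") are included so the search can run
# from the goal back to the condition. Entirely illustrative.
ONTOLOGY: Dict[str, List[Tuple[str, str]]] = {
    "Country": [("located_in_inv", "City")],
    "City":    [("born_in_inv", "Person"), ("located_in", "Country")],
    "Person":  [("born_in", "City")],
}

def label_path(goal: str, condition: str) -> Optional[List[str]]:
    """Breadth-first search from the goal label to a condition label,
    returning the relation sequence connecting them at the label level."""
    queue = deque([(goal, [])])
    seen = {goal}
    while queue:
        label, rels = queue.popleft()
        if label == condition:
            return rels
        for rel, nxt in ONTOLOGY.get(label, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, rels + [rel]))
    return None  # no label path exists in the ontology

# E.g., "In which country was X born?" — goal label: Country,
# condition label: Person (the given entity's type).
print(label_path("Country", "Person"))
# -> ['located_in_inv', 'born_in_inv']
```

The returned label-level path would then be instantiated against the actual knowledge graph in step 3 to retrieve candidate answers.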