Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please credit the source when sharing.

Ontology-Guided Reverse Thinking Makes Large Language Models Stronger on Knowledge Graph Question Answering

Created by
  • Haebom

Authors

Runxuan Liu, Bei Luo, Jiaqi Li, Baoxin Wang, Ming Liu, Dayong Wu, Shijin Wang, Bing Qin

Outline

This paper proposes Ontology-Guided Reverse Thinking (ORT), a novel framework for improving the performance of large language models (LLMs) on knowledge graph question answering (KGQA). Existing KGQA methods rely on entity vector matching and struggle with questions that require multi-step reasoning; inspired by human reverse reasoning, ORT instead constructs reasoning paths from the goal back to the conditions. The framework consists of three steps: (1) using an LLM to extract goal and condition labels from the question, (2) generating label reasoning paths based on the knowledge graph ontology, and (3) retrieving knowledge from the graph along those label paths. Experiments on the WebQSP and CWQ datasets show that ORT achieves state-of-the-art performance and substantially enhances the KGQA capability of LLMs.
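To make the three-step flow concrete, here is a minimal Python sketch of how such a pipeline might look. It is not the authors' implementation: the toy ontology and knowledge graph, the function names (extract_labels, reverse_path_search, retrieve), and the stubbed LLM step are all illustrative assumptions.

```python
# Hypothetical sketch of the ORT flow (not the authors' code).
# Step 1 would call an LLM to extract goal/condition labels; it is stubbed here.
# Step 2 searches the ontology backwards, from the goal label to a condition label.
# Step 3 keeps KG triples whose type signature matches each hop of the label path.

from collections import deque

# Toy ontology: directed schema edges (head_label, relation, tail_label).
ONTOLOGY = [
    ("Person", "born_in", "City"),
    ("City", "located_in", "Country"),
    ("Person", "works_for", "Company"),
]

def extract_labels(question):
    """Step 1 (stub): an LLM would return the goal label and condition labels."""
    # e.g., "Which country was Ada born in?" -> goal "Country", condition "Person"
    return "Country", ["Person"]

def reverse_path_search(goal, conditions):
    """Step 2: BFS from the goal label back to a condition label over the ontology."""
    incoming = {}
    for h, r, t in ONTOLOGY:
        incoming.setdefault(t, []).append((h, r, t))
    queue, seen = deque([(goal, [])]), {goal}
    while queue:
        label, path = queue.popleft()
        if label in conditions:
            return list(reversed(path))  # return the path in forward order
        for h, r, t in incoming.get(label, []):
            if h not in seen:
                seen.add(h)
                queue.append((h, path + [(h, r, t)]))
    return []

def retrieve(kg, types, path):
    """Step 3: per hop of the label path, collect KG triples that match it."""
    return [
        [(h, r, t) for h, r, t in kg
         if r == rel and types.get(h) == hl and types.get(t) == tl]
        for hl, rel, tl in path
    ]

kg = [("Ada", "born_in", "London"), ("London", "located_in", "UK")]
types = {"Ada": "Person", "London": "City", "UK": "Country"}
goal, conds = extract_labels("Which country was Ada born in?")
path = reverse_path_search(goal, conds)
print(path)                       # [('Person','born_in','City'), ('City','located_in','Country')]
print(retrieve(kg, types, path))  # matching KG triples for each hop
```

Searching over ontology labels rather than entity instances keeps the path space small; the label path is only instantiated against concrete entities in the final retrieval step.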

Takeaways, Limitations

Takeaways:
Presents a novel approach to multi-step reasoning problems in LLM-based KGQA.
Reduces information loss and redundancy by reasoning in reverse, from the goal back to the conditions.
Improves accuracy by generating ontology-based reasoning paths.
Achieves state-of-the-art performance on the WebQSP and CWQ datasets.
Limitations:
Further research is needed on the generality of the ORT framework and its applicability to other knowledge graphs and question types.
Because ORT depends on the underlying LLM, the LLM's limitations may constrain ORT's performance.
Performance on particularly complex or ambiguous questions still needs to be evaluated.