Daily Arxiv

This page collects papers on artificial intelligence published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright in each paper belongs to its authors and their institutions; when sharing, please cite the source.

From Query to Logic: Ontology-Driven Multi-Hop Reasoning in LLMs

Created by
  • Haebom

Author

Haonan Bian, Yutao Qi, Rui Yang, Yuanxi Che, Jiaqian Wang, Heming Xia, Ranran Zhen

Outline

This paper presents ORACLE (Ontology-driven Reasoning and Chain for Logical Elucidation), a training-free framework that addresses the limitations of large language models (LLMs) on complex multi-hop question answering (MQA) by combining the structural strengths of knowledge graphs with the generative capabilities of LLMs. ORACLE proceeds in three steps: it dynamically generates a question-specific knowledge ontology, transforms that ontology into first-order-logic reasoning chains, and decomposes the original question into logically coherent sub-questions. On several MQA benchmarks, ORACLE achieves performance competitive with state-of-the-art models such as DeepSeek-R1; ablations confirm the effectiveness of each component, and the framework produces more logical and interpretable reasoning chains than existing approaches.
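The three steps above can be sketched as a simple pipeline around a generic LLM call. This is an illustrative sketch only, not the authors' implementation: the prompt wording, the triple-based ontology format, and the forward-feeding of sub-answers are all assumptions; `llm` stands in for any prompt-to-completion function.

```python
def oracle_pipeline(question: str, llm) -> str:
    """Training-free, ORACLE-style three-step pipeline (illustrative sketch).

    `llm` is any callable mapping a prompt string to a completion string.
    All prompt wording here is a hypothetical stand-in for the paper's prompts.
    """
    # Step 1: dynamically generate a question-specific knowledge ontology.
    ontology = llm(
        "Extract the entities and relations needed to answer the question "
        f"as (head, relation, tail) triples:\n{question}"
    )
    # Step 2: transform the ontology into a first-order-logic reasoning chain.
    logic_chain = llm(
        "Rewrite the ontology as a chain of first-order-logic implications "
        f"leading to the answer:\n{ontology}"
    )
    # Step 3: decompose the original question into logically coherent
    # sub-questions (one per line), then answer them in order, feeding each
    # answer forward as context for the next.
    sub_questions = [
        q for q in llm(
            "Decompose the original question into ordered sub-questions, "
            f"one per line, following this chain:\n{logic_chain}"
        ).splitlines()
        if q.strip()
    ]
    context = ""
    for sq in sub_questions:
        context = llm(f"Given: {context}\nAnswer concisely: {sq}")
    return context  # the answer to the final sub-question
```

Because the LLM is injected as a callable, the pipeline can be exercised with a stub that returns canned completions, which also shows the intermediate artifacts (ontology, logic chain, sub-questions) each step is expected to produce.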

Takeaways, Limitations

Takeaways:
  • Presents a new framework that improves LLMs' multi-hop question-answering ability.
  • Combines the strengths of knowledge graphs and LLMs to improve MQA performance.
  • Produces logical, interpretable reasoning chains.
  • Requires no training, reducing data dependence and increasing flexibility of application.
  • Achieves performance competitive with state-of-the-art models.
Limitations:
  • The framework's generalization performance requires further validation.
  • Possible dependence on particular question types or knowledge graphs.
  • Ontology construction and logical inference add complexity and may increase computational cost.
  • Applicability to complex, ambiguous real-world questions may be limited.