Daily Arxiv

This page collects papers related to artificial intelligence published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright of each paper belongs to its authors and their institutions; when sharing, please cite the source.

DAMR: Efficient and Adaptive Context-Aware Knowledge Graph Question Answering with LLM-Guided MCTS

Created by
  • Haebom

Authors

Yingxu Wang, Shiqi Fan, Mengzhu Wang, Siyang Gao, Chao Wang, Nan Yin

Dynamically Adaptive MCTS-based Reasoning (DAMR)

Outline

This paper proposes Dynamically Adaptive MCTS-based Reasoning (DAMR), a novel framework for Knowledge Graph Question Answering (KGQA). DAMR integrates LLM-guided Monte Carlo Tree Search (MCTS) with adaptive path evaluation to enable efficient and context-aware KGQA. Building on MCTS, DAMR reduces the search space by using an LLM-based planner to select the top-k semantically relevant relations at each expansion step. It also introduces a lightweight Transformer-based scorer that jointly encodes question and relation sequences via cross-attention, capturing subtle semantic shifts during multi-hop reasoning and improving evaluation accuracy. To mitigate the scarcity of high-quality supervision, DAMR further incorporates a dynamic pseudo-path refinement mechanism that periodically derives training signals from partial paths explored during search, allowing the scorer to continuously adapt to the evolving distribution of reasoning trajectories. Extensive experiments on several KGQA benchmarks show that DAMR significantly outperforms state-of-the-art methods.
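
The core loop can be pictured as standard MCTS whose expansion step is pruned by an LLM planner and whose rollout is replaced by a learned path scorer. The sketch below is a minimal, self-contained illustration of that idea, not the authors' implementation: the toy knowledge graph, `plan_top_k_relations`, and `score_path` are hypothetical stand-ins (the real system would call an LLM and a trained Transformer scorer, respectively).

```python
# Minimal sketch of LLM-guided MCTS for KGQA. All names and the toy KG are
# illustrative placeholders, not the paper's actual code.
import math
import random
from dataclasses import dataclass, field

# Toy knowledge graph: entity -> list of (relation, neighbor) edges.
KG = {
    "Paris": [("capital_of", "France"), ("located_in", "Europe"), ("has_landmark", "Eiffel Tower")],
    "France": [("member_of", "EU"), ("capital", "Paris")],
}

def plan_top_k_relations(question, entity, candidate_relations, k=2):
    """Placeholder for the LLM-based planner: in DAMR this would ask the LLM which
    candidate relations are semantically relevant to the question. Here we simply
    keep the first k candidates."""
    return candidate_relations[:k]

def score_path(question, relation_path):
    """Placeholder for the lightweight Transformer-based scorer. Here: a trivial
    word-overlap heuristic between relation names and the question."""
    overlap = sum(r.replace("_", " ") in question.lower() for r in relation_path)
    return overlap / (len(relation_path) + 1)

@dataclass
class Node:
    entity: str
    relation_path: tuple = ()
    visits: int = 0
    value: float = 0.0
    children: list = field(default_factory=list)

def ucb(child, parent_visits, c=1.4):
    if child.visits == 0:
        return float("inf")
    return child.value / child.visits + c * math.sqrt(math.log(parent_visits) / child.visits)

def expand(node, question, k=2):
    """Expansion step: the planner prunes the relation fan-out to the top-k
    semantically relevant relations, shrinking the MCTS search space."""
    edges = KG.get(node.entity, [])
    kept = set(plan_top_k_relations(question, node.entity, [r for r, _ in edges], k))
    for rel, neighbor in edges:
        if rel in kept:
            node.children.append(Node(neighbor, node.relation_path + (rel,)))

def mcts(question, start_entity, iterations=20):
    root = Node(start_entity)
    for _ in range(iterations):
        # Selection: walk down by UCB until reaching a leaf.
        node, path = root, [root]
        while node.children:
            node = max(node.children, key=lambda c: ucb(c, node.visits))
            path.append(node)
        # Expansion with LLM-pruned relations.
        if node.visits > 0 or node is root:
            expand(node, question)
            if node.children:
                node = random.choice(node.children)
                path.append(node)
        # Evaluation: the learned scorer replaces a costly rollout.
        reward = score_path(question, node.relation_path)
        # Backpropagation.
        for n in path:
            n.visits += 1
            n.value += reward
    # Return the most-visited first-hop candidate (simplified answer selection).
    best = max(root.children, key=lambda c: c.visits) if root.children else root
    return best.relation_path, best.entity

print(mcts("What country is Paris the capital of?", "Paris"))
```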

Takeaways, Limitations

• Efficient search-space reduction and context-aware reasoning via LLM-guided MCTS.
• Improved evaluation accuracy from the lightweight Transformer-based scorer (see the sketch after this list).
• The dynamic pseudo-path refinement mechanism mitigates the lack of high-quality supervision.
• Strong performance compared to state-of-the-art methods.
• Dependence on the LLM: results may vary with the capability of the underlying model.
• Computational overhead of MCTS-based search.
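
For the scorer itself, the paper describes a lightweight Transformer that jointly encodes the question and the relation sequence via cross-attention. The PyTorch sketch below shows one plausible shape for such a module; the class name, dimensions, and layer counts are assumptions for illustration, not taken from the paper.

```python
import torch
import torch.nn as nn

class PathScorer(nn.Module):
    """Hypothetical lightweight path scorer: encode question and relation sequence,
    let the relation path cross-attend to the question, then output a scalar score."""
    def __init__(self, vocab_size=1000, d_model=128, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=256, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, 1)

    def forward(self, question_ids, relation_ids):
        q = self.encoder(self.embed(question_ids))   # (B, Lq, d) question encoding
        r = self.encoder(self.embed(relation_ids))   # (B, Lr, d) relation-path encoding
        # Cross-attention: the relation sequence attends to the question.
        attended, _ = self.cross_attn(query=r, key=q, value=q)
        # Pool over the relation sequence and map to a scalar path score.
        return self.head(attended.mean(dim=1)).squeeze(-1)  # (B,)

scorer = PathScorer()
question_ids = torch.randint(0, 1000, (1, 12))  # toy question token ids
relation_ids = torch.randint(0, 1000, (1, 3))   # one toy id per hop in the path
print(scorer(question_ids, relation_ids))
```

In the pseudo-path refinement step described above, such a scorer could be periodically fine-tuned on partial paths collected during search, which is what lets the evaluation keep pace with the changing trajectory distribution.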