Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please credit the source when sharing.

Affordable AI Assistants with Knowledge Graph of Thoughts

Created by
  • Haebom

Author

Maciej Besta, Lorenzo Paleari, Jia Hao Andrea Jiang, Robert Gerstenberger, You Wu, Jón Gunnar Hannesson, Patrick Iff, Ales Kubicek, Piotr Nyczyk, Diana Khimey, Nils Blach, Haiqiang Zhang, Tao Zhang, Peiran Ma, Grzegorz Kwaśniewski, Marcin Copik, Hubert Niewiadomski, Torsten Hoefler

Outline

In this paper, the authors propose an architecture called Knowledge Graph of Thoughts (KGoT) to address the high operational cost of large language model (LLM)-based AI assistants and their low success rates on complex benchmarks (e.g., GAIA). KGoT integrates LLM reasoning with dynamically constructed knowledge graphs (KGs), extracting and structuring task-relevant knowledge through external tools such as mathematical solvers, web crawlers, and Python scripts. This structured knowledge representation enables inexpensive models to solve complex tasks effectively while minimizing bias and noise. Using GPT-4o mini, KGoT achieves a 29% improvement in task success rate over Hugging Face Agents on the GAIA benchmark and reduces operational cost by more than 36x compared to GPT-4o. Similar gains are observed with other models such as Qwen2.5-32B and Deepseek-R1-70B, and on other benchmarks such as SimpleQA. KGoT thus offers a scalable, cost-effective, versatile, and high-performance AI assistant solution.
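To make the described loop concrete, below is a minimal, hypothetical sketch of a KGoT-style iteration: the LLM inspects the current knowledge graph, either answers the task or requests an external tool, and the tool's output is inserted back into the graph as triples. The function names (call_llm, run_tool), the prompt format, and the triple-based graph store are illustrative assumptions, not the authors' actual implementation.

```python
# Illustrative sketch of a Knowledge-Graph-of-Thoughts-style loop.
# Assumption: call_llm and run_tool are hypothetical placeholders; the real
# KGoT system uses its own prompts, tool set, and graph backend.

from dataclasses import dataclass, field


@dataclass
class KnowledgeGraph:
    """Task-specific KG stored as (subject, relation, object) triples."""
    triples: set[tuple[str, str, str]] = field(default_factory=set)

    def add(self, s: str, r: str, o: str) -> None:
        self.triples.add((s, r, o))

    def serialize(self) -> str:
        return "\n".join(f"({s}) -[{r}]-> ({o})" for s, r, o in sorted(self.triples))


def call_llm(prompt: str) -> str:
    """Placeholder for a call to an inexpensive LLM (e.g. GPT-4o mini)."""
    raise NotImplementedError


def run_tool(name: str, arg: str) -> list[tuple[str, str, str]]:
    """Placeholder for external tools (web crawler, math solver, Python runner)
    that return new triples to insert into the graph."""
    raise NotImplementedError


def solve(task: str, max_iterations: int = 5) -> str:
    kg = KnowledgeGraph()
    for _ in range(max_iterations):
        # 1. Ask the LLM whether the graph already holds enough knowledge to
        #    answer, or which tool should enrich it next.
        decision = call_llm(
            f"Task: {task}\nKnowledge graph:\n{kg.serialize()}\n"
            "Reply 'ANSWER: <answer>' or 'TOOL: <tool_name> <argument>'."
        )
        if decision.startswith("ANSWER:"):
            return decision.removeprefix("ANSWER:").strip()
        # 2. Otherwise run the requested tool and insert the extracted
        #    triples, growing the graph for the next iteration.
        _, rest = decision.split(":", 1)
        tool_name, _, arg = rest.strip().partition(" ")
        for s, r, o in run_tool(tool_name, arg):
            kg.add(s, r, o)
    # 3. Fall back to answering from whatever knowledge was gathered.
    return call_llm(f"Task: {task}\nKnowledge graph:\n{kg.serialize()}\nGive the best answer.")
```

The point of the structure is that the cheap model never has to hold the full task context in free-form text; each iteration it reasons over a compact, structured graph, which is what the paper credits for the cost reduction and the reduced bias and noise.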

Takeaways, Limitations

Takeaways:
Presents a novel architecture that effectively addresses the high operating cost of LLM-based AI assistants.
Improves LLM performance on complex tasks (29% higher success rate on the GAIA benchmark).
Shows how an inexpensive model can achieve high performance (more than 36x lower operational cost than GPT-4o).
Minimizes bias and noise by leveraging dynamically generated knowledge graphs.
Demonstrates strong performance across a variety of models and benchmarks.
Limitations:
Further research is needed on the generalization of the proposed architecture and its applicability to a wider range of task types.
The heavy reliance on external tools means performance may degrade when those tools are unavailable.
The efficiency and scalability of dynamic knowledge graph construction and management require additional validation.
Since results are reported only for specific benchmarks, generalization to other benchmarks still needs further study.