Daily Arxiv

This page curates papers related to artificial intelligence published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please cite the source when sharing.

Chain or tree? Re-evaluating complex reasoning from the perspective of a matrix of thought

Created by
  • Haebom

Author

Fengxiao Tang, Yufeng Li, Zongzong Wu, Ming Zhao

Outline

This paper proposes a novel reasoning framework, Matrix of Thought (MoT), to address the drop in accuracy that large language models (LLMs) exhibit on complex and abstract tasks. MoT explores problems both horizontally and vertically through a column-cell communication mechanism, enabling multi-strategy and deep thinking while reducing redundancy. It also introduces a fact-correction mechanism that constructs knowledge units from knowledge-graph triplets retrieved via RAG together with the source text, and uses them to correct errors. Experiments on three tasks (the 24-point game, question answering, and proposition generation) show that the proposed framework outperforms existing methods, with an inference time only 14.4% of the baseline's.
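The matrix structure can be pictured as strategies along one axis (columns) and reasoning depth along the other (rows), with each new cell reading the previous row across all columns before a fact-correction step is applied. The sketch below illustrates this reading of the mechanism in Python; all names (`Cell`, `solve_with_mot`, `generate`, `correct_facts`) are hypothetical placeholders, not the authors' actual implementation.

```python
# Minimal sketch of a matrix-of-thought style reasoning loop.
# All names are hypothetical; the paper's real implementation may differ.
from dataclasses import dataclass

@dataclass
class Cell:
    strategy: str   # which column (strategy) this cell belongs to
    depth: int      # which row (reasoning depth) this cell sits at
    thought: str    # the LLM-generated partial solution

def solve_with_mot(problem: str, strategies: list[str], max_depth: int,
                   generate, correct_facts) -> str:
    """Explore `problem` column-wise (strategies) and row-wise (depth).

    `generate(problem, strategy, context)` stands in for an LLM call;
    `correct_facts(thought)` stands in for the RAG / knowledge-graph
    fact-correction step described in the paper.
    """
    matrix: list[list[Cell]] = []
    for depth in range(max_depth):
        row = []
        for strategy in strategies:
            # Column-cell communication: each new cell sees the previous
            # row across *all* strategies, not just its own column.
            context = [c.thought for c in (matrix[-1] if matrix else [])]
            thought = generate(problem, strategy, context)
            thought = correct_facts(thought)  # fix factual errors early
            row.append(Cell(strategy, depth, thought))
        matrix.append(row)
    # Return the final-row thought judged best (here: simply the longest).
    return max(matrix[-1], key=lambda c: len(c.thought)).thought
```

In use, `generate` would wrap an actual LLM API call and `correct_facts` would consult the RAG-retrieved knowledge-graph triplets; the paper's selection of the final answer is presumably more sophisticated than the length heuristic used here.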

Takeaways, Limitations

Takeaways:
MoT presents a new reasoning framework that enhances the reasoning ability of LLMs.
Enables multi-strategy and deep thinking through the column-cell communication mechanism.
Reduces redundancy and increases efficiency.
Corrects errors through the fact-correction mechanism.
Outperforms existing methods across a variety of tasks.
Achieves efficient inference (14.4% of the baseline's inference time).
Limitations:
The paper does not directly state specific limitations.
Further research may be needed to determine the generalizability of the proposed method and its applicability to other complex problems.
Performance may depend on the quality of the RAG retrieval and the knowledge graph.