This paper proposes a novel thinking framework, called Matrix of Thought (MoT), to address the declining accuracy of large language models (LLMs) on complex and abstract tasks. Through a column-cell communication mechanism, MoT explores problems both horizontally and vertically, enabling multi-strategy and deep-level thinking while reducing redundancy. Furthermore, it introduces a fact-correction mechanism that constructs knowledge units from knowledge-graph triplets and source text retrieved via retrieval-augmented generation (RAG), and uses them to correct errors in the reasoning process. Experimental results on three tasks (the 24-point game, question answering, and proposition generation) show that the proposed framework outperforms existing methods, with inference time only 14.4% of that of the baseline method.
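To make the matrix-style exploration concrete, the sketch below shows one way a row-by-row (vertical deepening) and column-by-column (horizontal multi-strategy) reasoning loop could be organized. This is an illustrative assumption, not the paper's implementation: `llm_generate` is a hypothetical LLM wrapper, and the rule that each cell conditions on the cell above it and on peer cells already produced in its row is a plausible reading of the column-cell communication mechanism, not its specification.

```python
# Minimal, illustrative sketch of a matrix-of-thought exploration loop.
# Assumptions (not from the paper): `llm_generate` is a hypothetical
# prompt -> text LLM call; the cell-communication rule is a guess.
from typing import Callable, List


def matrix_of_thought(
    question: str,
    llm_generate: Callable[[str], str],  # hypothetical LLM call
    n_strategies: int = 3,               # matrix width: parallel strategies
    n_depth: int = 2,                    # matrix height: deepening rounds
) -> str:
    matrix: List[List[str]] = []
    for row in range(n_depth):
        cells: List[str] = []
        for col in range(n_strategies):
            # Assumed communication: a cell sees the cell above it
            # (vertical deepening) and peers already produced in this
            # row (horizontal cross-strategy exchange).
            context: List[str] = []
            if row > 0:
                context.append(f"Previous thought (same strategy): {matrix[row - 1][col]}")
            for peer in cells:
                context.append(f"Peer strategy thought: {peer}")
            prompt = (
                f"Question: {question}\n"
                + "\n".join(context)
                + f"\nProduce a refined thought using strategy #{col + 1}."
            )
            cells.append(llm_generate(prompt))
        matrix.append(cells)
    # Synthesize a final answer from the deepest row of the matrix.
    synthesis_prompt = (
        f"Question: {question}\n"
        + "\n".join(f"Candidate: {c}" for c in matrix[-1])
        + "\nSynthesize the best final answer."
    )
    return llm_generate(synthesis_prompt)
```

In the full framework as described, a fact-correction pass would then check the synthesized answer against knowledge units built from RAG-retrieved knowledge-graph triplets and source text before it is returned; that stage is omitted here.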