This paper proposes the Chain-of-Associated-Thoughts (CoAT) framework, which replaces the "fast thinking" approach of traditional LLMs with a "slow thinking" approach that more closely resembles human thought processes. CoAT significantly expands the exploration space of LLMs by combining the Monte Carlo Tree Search (MCTS) algorithm with a novel key-information integration mechanism called "associative memory." Leveraging the structured exploration of MCTS and the adaptive learning capability of associative memory, CoAT explores multiple inference paths and dynamically updates its knowledge base in real time. This allows it to revisit and refine earlier inferences and to adaptively integrate evolving information, producing accurate and comprehensive final results. On open-source multi-hop reasoning datasets such as HotpotQA and MuSiQue, CoAT achieves performance gains of over 10%, and over 15% on our own CRB dataset.
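The interplay described above, MCTS exploring candidate reasoning paths while an associative memory is consulted and updated during the search, can be sketched in toy form. This is a minimal illustration, not the paper's implementation: all class and function names (`Node`, `AssociativeMemory`, `expand`, `simulate`, `mcts`) are hypothetical, the "thoughts" are placeholder strings rather than LLM generations, and the reward is random.

```python
import math
import random

class Node:
    """One node in the search tree; state is a partial reasoning chain."""
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0

    def uct(self, c=1.4):
        # Standard UCT score: exploitation term plus exploration bonus.
        if self.visits == 0:
            return float("inf")
        return self.value / self.visits + c * math.sqrt(
            math.log(self.parent.visits) / self.visits)

class AssociativeMemory:
    """Toy key-value store standing in for the associative-memory mechanism."""
    def __init__(self):
        self.entries = {}
    def update(self, key, info):
        self.entries[key] = info          # knowledge base updated in real time
    def retrieve(self, key):
        return self.entries.get(key, "")

def expand(node, memory):
    # A real system would sample next thoughts from an LLM conditioned on
    # retrieved memory; here we fabricate two children per node.
    context = memory.retrieve(node.state)
    for i in range(2):
        child = Node(state=f"{node.state}->t{i}({context})", parent=node)
        node.children.append(child)
        memory.update(child.state, f"note{i}")  # store new key information

def simulate(node):
    # Placeholder reward; a real system would score the reasoning chain.
    return random.random()

def mcts(root, memory, iterations=50):
    for _ in range(iterations):
        node = root
        while node.children:                       # selection
            node = max(node.children, key=Node.uct)
        if node.visits > 0 or node is root:        # expansion
            expand(node, memory)
            if node.children:
                node = node.children[0]
        reward = simulate(node)                    # simulation
        while node:                                # backpropagation
            node.visits += 1
            node.value += reward
            node = node.parent
    # Return the most-visited child of the root as the preferred next thought.
    return max(root.children, key=lambda n: n.visits)

random.seed(0)
mem = AssociativeMemory()
root = Node("q")
best = mcts(root, mem)
print(best.state)
```

The point of the sketch is the coupling: `expand` both reads from and writes to the shared memory, so later iterations of the search see information deposited by earlier ones, mirroring the "review and improve previous inferences" behavior the framework describes.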