In this paper, we propose CortexDebate, an improved multi-agent debate (MAD) method that addresses the hallucination and insufficient reasoning of a single large language model (LLM). To mitigate the excessive input context and overconfidence issues of existing MAD methods, CortexDebate employs a McKinsey-based Debate Matter (MDM) module, which acts like the brain's white matter to build a sparse, dynamically optimized debate graph among LLM agents. MDM integrates the McKinsey trust formula, a trustworthiness measure from sociology, to guide graph optimization through reliable evaluation. Extensive experiments on eight datasets spanning four task types demonstrate the effectiveness of CortexDebate.
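The idea of trust-guided graph sparsification can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes the classic trust equation, trust = (credibility + reliability + intimacy) / self-orientation, and hypothetical per-edge factor names; it simply shows how such scores could prune a dense debate graph into a sparse one.

```python
# Illustrative sketch only; MDM's exact computation is not specified here.
# Assumption: the McKinsey trust equation scores each directed debate edge,
# and low-trust edges are dropped to keep the debate graph sparse.
from dataclasses import dataclass

@dataclass
class TrustFactors:
    credibility: float       # e.g., answer-quality score of the source agent
    reliability: float       # e.g., consistency across debate rounds
    intimacy: float          # e.g., topical overlap with the target agent
    self_orientation: float  # e.g., overconfidence penalty (> 0)

def trust_score(f: TrustFactors) -> float:
    # Classic trust equation: higher self-orientation lowers trust.
    return (f.credibility + f.reliability + f.intimacy) / f.self_orientation

def sparsify(edges: dict[tuple[str, str], TrustFactors],
             threshold: float) -> set[tuple[str, str]]:
    """Keep only debate edges whose trust score passes the threshold."""
    return {pair for pair, f in edges.items() if trust_score(f) >= threshold}

# Toy example: agent A listens to B (trustworthy) and C (overconfident).
edges = {
    ("A", "B"): TrustFactors(0.9, 0.8, 0.7, 1.0),  # trust = 2.4
    ("A", "C"): TrustFactors(0.3, 0.4, 0.2, 2.0),  # trust = 0.45
}
print(sparsify(edges, threshold=1.0))  # only ("A", "B") survives
```

In this sketch the threshold controls sparsity directly: raising it removes more edges, shortening each agent's input context, which is the motivation the abstract gives for a sparse debate graph.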