This paper proposes **CAMA (CAusal MAthematician)**, a two-stage causal framework that enhances the complex mathematical reasoning capabilities of large language models (LLMs). In the learning stage, CAMA combines a causal discovery algorithm applied to question-answer pair datasets with the LLM's prior knowledge to construct a Mathematical Causal Graph (MCG), a high-level representation of solution strategies that encodes core knowledge points and their causal dependencies. In the inference stage, given a new question, CAMA dynamically extracts relevant subgraphs from the MCG based on the question content and the LLM's intermediate reasoning steps, and uses them to guide the LLM's reasoning process. Experimental results show that CAMA significantly improves LLM performance on challenging mathematical problems, that structured guidance outperforms unstructured guidance, and that incorporating asymmetric causal relationships yields greater gains than using symmetric associations alone.
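The inference-stage behavior described above can be illustrated with a minimal sketch. Here the MCG is a toy directed graph mapping each knowledge point to its causal prerequisites; `extract_subgraph` pulls in a matched knowledge point together with all of its ancestors. The graph contents, matching step, and function names are hypothetical illustrations, not the paper's actual algorithm.

```python
# Toy MCG: each knowledge point maps to its causal parents (prerequisites).
# Contents are invented for illustration only.
mcg = {
    "quadratic_formula": ["factoring", "completing_the_square"],
    "vertex_of_parabola": ["quadratic_formula"],
    "optimization": ["derivatives"],
}

def extract_subgraph(graph, matched_points):
    """Return the subgraph induced by the knowledge points matched to a
    question, plus all of their causal ancestors in the MCG."""
    nodes, stack = set(), list(matched_points)
    while stack:
        n = stack.pop()
        if n not in nodes:
            nodes.add(n)
            stack.extend(graph.get(n, []))  # walk up the causal parents
    # Keep only edges whose endpoints both survive in the subgraph.
    return {n: [p for p in graph.get(n, []) if p in nodes] for n in nodes}

# A question about the vertex of a parabola pulls in the quadratic-formula
# chain but leaves the unrelated calculus branch out.
sub = extract_subgraph(mcg, ["vertex_of_parabola"])
print(sorted(sub))
```

The extracted subgraph (rather than the full MCG) is what would be serialized into the prompt as structured guidance, which is the contrast the experiments draw against unstructured guidance.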