DeepSeek-Prover-V2 is an open-source large language model designed for formal theorem proving in Lean 4, initialized with data collected through a recursive theorem proving pipeline powered by DeepSeek-V3. The cold-start training procedure leverages DeepSeek-V3's ability to decompose complex problems into a sequence of subgoals; the proofs of the resolved subgoals are then combined with DeepSeek-V3's step-by-step reasoning to create the initial data for reinforcement learning, integrating informal and formal mathematical reasoning in a single model. The resulting model, DeepSeek-Prover-V2-671B, achieves state-of-the-art performance in neural theorem proving, reaching an 88.9% pass rate on MiniF2F-test and solving 49 of the 658 problems in PutnamBench. In addition, we introduce ProverBench, a new benchmark of 325 formalized problems, which includes 15 problems from the recent AIME competitions (2024-2025). On these 15 AIME problems, DeepSeek-Prover-V2-671B solves 6, while DeepSeek-V3 solves 8 using majority voting; this comparison suggests that the gap between formal and informal mathematical reasoning in large language models is narrowing substantially.
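
To make the subgoal decomposition idea concrete, the following is a minimal, hypothetical Lean 4 sketch (the theorem, names, and tactics are illustrative and not drawn from the paper's actual data): a goal is broken into `have` subgoals that can be proved independently and then recombined into a complete proof. In the recursive pipeline, subgoals that have not yet been proved would appear as `sorry` placeholders awaiting resolution.

```lean
import Mathlib

-- Hypothetical illustration of subgoal decomposition (not from the paper's data).
-- Each `have` step is a subgoal that can be attacked independently; in the
-- pipeline, unsolved subgoals would be left as `sorry` placeholders.
theorem sum_sq_nonneg (a b : ℝ) : 0 ≤ a ^ 2 + b ^ 2 := by
  -- Subgoal 1: a ^ 2 is nonnegative.
  have h1 : 0 ≤ a ^ 2 := sq_nonneg a
  -- Subgoal 2: b ^ 2 is nonnegative.
  have h2 : 0 ≤ b ^ 2 := sq_nonneg b
  -- Combine the subgoal proofs to close the original goal.
  exact add_nonneg h1 h2
```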