This paper presents a novel approach to improving the deductive reasoning ability of large language models (LLMs). Building on prior work combining test-time scaling with outcome or process reward models, we propose outcome reward models (ORMs) specialized for deductive reasoning. To train the ORMs, we generate data via Chain-of-Thought (CoT) prompting with single and multiple samples, and we propose a novel "echo generation technique" that exploits the error propensity of LLMs to generate additional training data. This technique produces training data containing a wider variety of error types than conventional CoT methods. Experimental results show that ORMs trained on CoT and echo-augmented data improve the performance of four different LLMs on the FOLIO, JustLogic, and ProverQA datasets.
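The test-time use of an ORM described above amounts to best-of-N reranking: sample several CoT candidates and keep the one the reward model scores highest. A minimal sketch, where `generate_cot` and `orm_score` are hypothetical stand-ins for the paper's sampler and trained ORM:

```python
# Best-of-N reranking with an outcome reward model (sketch).
# `generate_cot` and `orm_score` are illustrative placeholders,
# not the paper's actual implementation.

def generate_cot(question: str, n: int) -> list[str]:
    # Placeholder: would sample n chain-of-thought completions from an LLM.
    return [f"reasoning path {i} for: {question}" for i in range(n)]

def orm_score(question: str, answer: str) -> float:
    # Placeholder: the trained ORM would score the whole reasoning chain
    # by its predicted correctness; here a dummy heuristic stands in.
    return float(len(answer))

def best_of_n(question: str, n: int = 8) -> str:
    """Sample n CoT candidates and return the one the ORM ranks highest."""
    candidates = generate_cot(question, n)
    return max(candidates, key=lambda a: orm_score(question, a))

print(best_of_n("All humans are mortal; Socrates is human. Is Socrates mortal?"))
```

Because the ORM scores only the final outcome of each chain, it needs training data whose chains fail in diverse ways, which is what the echo generation technique is designed to supply.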