Large language models (LLMs) excel at complex tasks when guided by advanced prompting techniques such as Chain-of-Thought (CoT) and Tree-of-Thought (ToT), but these techniques depend on manually crafted, task-specific prompts. To address this limitation, this paper presents Mixture of Reasoning (MoR), a training framework that embeds diverse reasoning strategies into LLMs, enabling autonomous, task-adaptive reasoning without external prompt engineering. MoR consists of two steps: a 'Thought Generation' step, which uses a model such as GPT-4 to generate reasoning chain templates, and an 'SFT Dataset Construction' step, which pairs these templates with a benchmark dataset to perform supervised fine-tuning. Experimental results show that MoR achieves a score of 0.730 (a 2.2% improvement) with CoT prompting and 0.734 (a 13.5% improvement) over the baseline, eliminating the need for task-specific prompts and providing a general solution for robust reasoning across a wide range of tasks.
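The two steps described above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's actual pipeline: the template list stands in for the 'Thought Generation' step (which in MoR is performed by a model such as GPT-4), and the benchmark samples and field names (`question`, `answer`) are placeholder assumptions.

```python
import json
import random


def generate_thought_templates():
    """Stand-in for 'Thought Generation': in MoR these reasoning chain
    templates are produced by a strong model such as GPT-4; here they
    are hard-coded placeholders."""
    return [
        "Break the problem into smaller sub-questions and solve each in turn.",
        "List the known facts, then derive the answer step by step.",
        "Consider several candidate answers and eliminate inconsistent ones.",
    ]


def build_sft_dataset(benchmark, templates, seed=0):
    """Sketch of 'SFT Dataset Construction': pair each benchmark example
    with a reasoning template to form (prompt, completion) records that
    a supervised fine-tuning run could consume."""
    rng = random.Random(seed)
    records = []
    for sample in benchmark:
        template = rng.choice(templates)  # assumption: templates assigned at random
        prompt = f"{template}\n\nQuestion: {sample['question']}"
        records.append({"prompt": prompt, "completion": sample["answer"]})
    return records


# Toy benchmark (placeholder data, not from the paper's experiments)
benchmark = [
    {"question": "What is 2 + 3?", "answer": "5"},
    {"question": "Is 17 prime?", "answer": "Yes"},
]

dataset = build_sft_dataset(benchmark, generate_thought_templates())
print(json.dumps(dataset[0], indent=2))
```

The resulting records follow a generic prompt/completion layout; a real implementation would adapt the schema to whatever SFT tooling is used.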