Building on prior findings that reinforcement learning (RL) alone struggles to instill reasoning capabilities in large language models (LLMs) that lack them, this paper proposes ThinkTuning, a method for training such models. ThinkTuning is a GRPO-based interactive training approach in which a teacher model guides and augments the rollouts of a student model: the teacher poses problems and gives corrective feedback on the student's answers, thereby improving the student's reasoning ability. Experiments show that ThinkTuning improves performance by an average of 3.85% over the zero-shot baseline across various benchmarks, including gains of 2.08% on MATH-500, 2.23% on AIME, and 3.99% on GPQA-Diamond. The source code is available on GitHub.
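To make the mechanism concrete, below is a minimal sketch of the two ingredients the summary describes: GRPO-style group-relative advantages, and teacher feedback appended to incorrect student rollouts. This is an illustrative assumption about the setup, not the authors' implementation; the function names (`build_rollouts`, `group_relative_advantages`), the exact-match reward, and the feedback format are all hypothetical.

```python
from statistics import mean, pstdev

def group_relative_advantages(rewards):
    # GRPO normalizes each rollout's reward by the group's
    # mean and standard deviation (no learned value function).
    mu = mean(rewards)
    sigma = pstdev(rewards) or 1.0  # avoid division by zero
    return [(r - mu) / sigma for r in rewards]

def build_rollouts(student_answers, correct_answer, teacher_feedback):
    # Hypothetical ThinkTuning-style step: for each incorrect
    # student answer, append the teacher's corrective feedback so
    # the augmented rollout carries the correction into training.
    rollouts, rewards = [], []
    for ans in student_answers:
        ok = (ans == correct_answer)
        text = ans if ok else ans + "\n[teacher]: " + teacher_feedback
        rollouts.append(text)
        rewards.append(1.0 if ok else 0.0)
    return rollouts, group_relative_advantages(rewards)

rollouts, advs = build_rollouts(["4", "5"], "4", "Recheck the addition.")
# With rewards [1.0, 0.0], the group-relative advantages are [1.0, -1.0].
```

In an actual GRPO update these advantages would weight the policy-gradient loss over the student's tokens; the sketch only shows how corrective feedback enters the rollout text.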