This paper presents ZeroTIR, a framework for Tool-Integrated Reasoning (TIR) trained with reinforcement learning (RL) from outcome-based rewards. ZeroTIR trains a pre-trained large language model (LLM) to spontaneously generate and execute Python code for mathematical problems, without any supervised examples of tool use. Experiments show a strong positive correlation between the number of RL training steps and the frequency of spontaneous code execution, the average response length, and the final task accuracy, quantitatively linking the training compute invested to the emergence of effective tool-augmented reasoning strategies. We also demonstrate that ZeroTIR significantly outperforms existing tool-free ZeroRL baselines on mathematical benchmarks. We release a robust framework and reproducible benchmarks to support future research.
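
To make the outcome-based reward concrete, the sketch below shows one way such a signal could be computed: the model's generated Python is executed in a subprocess, and the reward depends only on whether the final answer matches the reference, with no supervision on whether or how the tool was used. All names here (`run_python`, `extract_final_answer`, the `\boxed{}` answer format) are illustrative assumptions, not the paper's implementation.

```python
import re
import subprocess
from typing import Optional


def run_python(code: str, timeout: float = 5.0) -> str:
    """Execute a model-generated Python snippet in a subprocess and return its stdout.

    Hypothetical sandbox for illustration; a real setup would need stronger isolation.
    """
    try:
        result = subprocess.run(
            ["python", "-c", code], capture_output=True, text=True, timeout=timeout
        )
        return result.stdout.strip()
    except subprocess.TimeoutExpired:
        return "TIMEOUT"


def extract_final_answer(response: str) -> Optional[str]:
    """Pull the final answer from the model's response, assuming a \\boxed{...} convention."""
    match = re.search(r"\\boxed\{([^}]*)\}", response)
    return match.group(1).strip() if match else None


def outcome_reward(response: str, gold_answer: str) -> float:
    """Outcome-based reward: 1 if the final answer matches the reference, else 0.

    Note there is no intermediate reward for generating or executing code; any
    tool-use behavior must emerge because it improves the final-answer accuracy.
    """
    predicted = extract_final_answer(response)
    return 1.0 if predicted is not None and predicted == gold_answer.strip() else 0.0
```

In a ZeroRL-style setup, this scalar reward alone would drive a policy-gradient optimizer over rollouts in which generation is interleaved with code execution; the specific RL algorithm, prompting, and sandboxing details are those of the paper and are not reproduced here.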