This paper addresses software issue localization, the task of identifying the code locations that must be modified to resolve a reported issue. Bridging the semantic gap between natural-language issue descriptions and faulty code requires complex, multi-step reasoning over code dependencies. Existing LLM-based agents attempt to bridge this gap by integrating repository search tools, but doing so gives rise to a challenging task, known as "Repo Deep Search," that requires the LLM to use multiple repository search tools effectively throughout a multi-step reasoning and exploration process. To address this challenge, we present ToolTrain, a two-stage tool-integrated training framework that combines rejection-sampled supervised fine-tuning with tool-integrated reinforcement learning. Experimental results show that models trained with ToolTrain achieve state-of-the-art localization performance, with the 32B model outperforming Claude-3.7 on function-level localization. The results further show that improved localization carries over to better end-to-end issue resolution, demonstrating that training models specifically for issue localization is a viable and effective strategy for improving automated software development.
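To make the two-stage recipe concrete, the sketch below outlines how rejection-sampled supervised fine-tuning and tool-integrated reinforcement learning could be composed; it is an illustrative sketch only, not the paper's implementation. The helper functions (sample_trajectory, sft_update, rl_update), the data structures, and the recall-style reward are all assumptions introduced for exposition.

```python
# Minimal sketch (not the authors' code) of a two-stage tool-integrated
# training pipeline: (1) rejection-sampled SFT, (2) tool-integrated RL.
# All helper names (sample_trajectory, sft_update, rl_update) and the
# reward definition are hypothetical placeholders.

from dataclasses import dataclass
from typing import List


@dataclass
class Trajectory:
    """One multi-turn rollout: issue text, tool-use steps, final answer."""
    issue: str
    steps: List[str]            # interleaved reasoning and tool-call records
    predicted_funcs: List[str]  # functions the model finally points to


def rejection_sampled_sft(model, issues, gold, sample_trajectory, sft_update,
                          n_samples: int = 8):
    """Stage 1: keep only rollouts that reach the gold locations, then fine-tune."""
    accepted: List[Trajectory] = []
    for issue in issues:
        for _ in range(n_samples):
            traj = sample_trajectory(model, issue)        # model + repo search tools
            if set(gold[issue]) <= set(traj.predicted_funcs):
                accepted.append(traj)                     # rejection-sampling filter
    return sft_update(model, accepted)                    # supervised fine-tuning step


def tool_integrated_rl(model, issues, gold, sample_trajectory, rl_update,
                       rollouts_per_issue: int = 4):
    """Stage 2: reward rollouts by how well they recover the gold functions."""
    def reward(traj: Trajectory, issue: str) -> float:
        hits = [f for f in traj.predicted_funcs if f in gold[issue]]
        return len(hits) / max(len(gold[issue]), 1)       # recall-style reward (assumed)

    for issue in issues:
        batch = [sample_trajectory(model, issue) for _ in range(rollouts_per_issue)]
        rewards = [reward(t, issue) for t in batch]
        model = rl_update(model, batch, rewards)          # e.g. a PPO/GRPO-style update
    return model
```

Under these assumptions, the rejection filter ensures the fine-tuning data contains only trajectories that actually reach the correct locations, while the reinforcement-learning stage lets the model refine when and how to invoke the repository search tools.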