This paper proposes Atom-Searcher, a novel framework for enhancing the complex problem-solving capabilities of large language models (LLMs). To overcome the limitations of existing retrieval-augmented generation (RAG) approaches, we focus on agentic deep research, in which LLMs autonomously interleave reasoning, search, and information synthesis. To address the inherent challenges of outcome-based reinforcement learning (RL), such as conflicting gradients and reward sparsity, we introduce Atomic Thought, a paradigm that decomposes the reasoning process into fine-grained functional units. These units are supervised by Reasoning Reward Models (RRMs), which assign Atomic Thought Rewards (ATRs) that provide fine-grained process-level guidance and accelerate convergence toward effective reasoning paths. A curriculum-inspired reward schedule prioritizes process-level ATRs early in training and gradually shifts toward outcome-level rewards. Experiments on seven benchmarks show that our approach consistently outperforms state-of-the-art methods. Further advantages include scaling computation at test time, providing supervision anchors for RRMs, and exhibiting more interpretable, human-like reasoning patterns.
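
To make the curriculum-inspired reward schedule concrete, the following is a minimal sketch of one plausible implementation that blends a process-level ATR with the outcome-level reward under a decaying weight. The linear decay, the additive mixing form, and all names (curriculum_reward, atr, total_steps, etc.) are illustrative assumptions, not the paper's exact formulation.

# Minimal sketch of a curriculum-inspired reward schedule: the weight on the
# process-level atomic thought reward (ATR) decays over training, so the
# outcome-level reward gradually dominates. Linear decay and all parameter
# names are assumptions for illustration.

def curriculum_reward(atr: float, outcome_reward: float,
                      step: int, total_steps: int) -> float:
    """Blend process-level and outcome-level rewards over training."""
    # Weight on the process-level signal decays linearly from 1 to 0.
    w = max(0.0, 1.0 - step / total_steps)
    return w * atr + (1.0 - w) * outcome_reward

# Early in training the ATR dominates; late in training the outcome reward does.
print(curriculum_reward(atr=0.8, outcome_reward=1.0, step=100, total_steps=10_000))
print(curriculum_reward(atr=0.8, outcome_reward=1.0, step=9_900, total_steps=10_000))

A schedule of this shape addresses reward sparsity early on, when outcome rewards are rarely attained, while avoiding over-reliance on the process-level proxy once the policy can reach correct final answers.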