This paper introduces ASearcher, an open-source project for enhancing the search capabilities of large language model (LLM)-based agents. Existing LLM-based agents rely heavily on external tools, particularly search tools, to handle complex tasks, yet they fall short of expert-level search intelligence (e.g., resolving ambiguous questions, generating accurate responses, analyzing search results, and performing thorough exploration). To overcome these limitations, ASearcher provides a scalable and efficient asynchronous reinforcement learning (RL) training framework. The LLM agent synthesizes its own high-quality question-and-answer (QA) training data and can perform long-horizon search (over 40 turns, with over 15k output tokens). Experimental results show that the trained agent outperforms existing open-source 32B agents on the xBench and GAIA benchmarks. The model, training data, and code are publicly available.
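To make the asynchronous RL idea concrete, the following is a minimal sketch, not the authors' implementation: rollout workers generate long-horizon search trajectories at their own pace and stream them into a queue, while the trainer consumes completed trajectories in batches, so a single slow 40-turn episode does not stall a whole synchronous batch. All names (run_search_episode, rollout_worker, trainer) and the simulated episode lengths are hypothetical placeholders.

```python
# Minimal sketch of asynchronous rollout/training decoupling (assumed design,
# not ASearcher's actual code). Workers push finished trajectories into a queue;
# the trainer updates the policy whenever a batch of trajectories is available.
import queue
import random
import threading
import time

trajectory_queue: "queue.Queue[dict]" = queue.Queue(maxsize=64)


def run_search_episode(worker_id: int) -> dict:
    """Placeholder for one multi-turn search episode (query -> tool calls -> answer)."""
    num_turns = random.randint(5, 40)      # long-horizon episodes vary widely in length
    time.sleep(0.01 * num_turns)           # simulate variable rollout latency
    return {"worker": worker_id, "turns": num_turns, "reward": random.random()}


def rollout_worker(worker_id: int, episodes: int) -> None:
    # Each worker streams trajectories independently; no barrier across workers.
    for _ in range(episodes):
        trajectory_queue.put(run_search_episode(worker_id))


def trainer(total_episodes: int, batch_size: int = 8) -> None:
    batch = []
    for _ in range(total_episodes):
        batch.append(trajectory_queue.get())
        if len(batch) == batch_size:
            avg_reward = sum(t["reward"] for t in batch) / len(batch)
            # A real implementation would call a policy-update step here.
            print(f"policy update on {len(batch)} trajectories, avg reward {avg_reward:.2f}")
            batch.clear()


workers = [threading.Thread(target=rollout_worker, args=(i, 4)) for i in range(4)]
for w in workers:
    w.start()
trainer(total_episodes=16)
for w in workers:
    w.join()
```

The design choice illustrated here is that trajectory generation and policy updates are decoupled, which is what allows training to scale to episodes with many tool calls and long outputs without the throughput collapse a fully synchronous rollout loop would suffer.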