This paper introduces ASearcher, an open-source project for training large language model (LLM)-based search agents. While existing LLM-based agents excel at complex, knowledge-intensive tasks, they fall short of expert-level search intelligence, e.g., resolving ambiguous questions, generating precise search queries, analyzing retrieved results, and exploring thoroughly. To overcome these limitations, ASearcher provides a scalable, efficient asynchronous reinforcement learning (RL) training framework that enables long-horizon search, together with a prompt-based LLM agent that automatically synthesizes a high-quality question-answering (QA) dataset; with these components, ASearcher outperforms existing open-source agents on the xBench and GAIA benchmarks. It also exhibits extremely long-horizon search behavior, with tool calls exceeding 40 turns and generated outputs exceeding 150,000 tokens. The model, training data, and code are publicly available.
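The asynchronous training setup described above can be sketched in miniature: rollout workers produce variable-length trajectories independently, and the learner consumes whichever trajectories are ready, so a single slow long-horizon rollout never stalls the whole batch. All names and the fake rollout logic below are hypothetical illustrations, not the actual ASearcher implementation.

```python
import queue
import random
import threading

def run_rollout(agent_id: int, max_turns: int = 40) -> dict:
    """Simulate one search trajectory (stand-in for the real agent loop).

    In a real agent each turn would issue a search or browse tool call;
    here we only fake a variable turn count and a scalar reward.
    """
    turns = random.randint(1, max_turns)
    return {"agent": agent_id, "turns": turns, "reward": random.random()}

def rollout_worker(agent_id: int, traj_queue: "queue.Queue[dict]", n_episodes: int) -> None:
    # Each worker pushes a trajectory as soon as it finishes, so slow
    # (long-horizon) rollouts never block fast ones.
    for _ in range(n_episodes):
        traj_queue.put(run_rollout(agent_id))

def train(num_workers: int = 4, episodes_per_worker: int = 8, batch_size: int = 8) -> int:
    """Run asynchronous rollout collection; return the number of updates."""
    traj_queue: "queue.Queue[dict]" = queue.Queue()
    workers = [
        threading.Thread(target=rollout_worker, args=(i, traj_queue, episodes_per_worker))
        for i in range(num_workers)
    ]
    for w in workers:
        w.start()

    total = num_workers * episodes_per_worker
    consumed, updates = 0, 0
    batch = []
    while consumed < total:
        batch.append(traj_queue.get())  # learner pulls whatever is ready
        consumed += 1
        if len(batch) == batch_size:
            # Placeholder for a policy-gradient update on this batch.
            updates += 1
            batch = []
    for w in workers:
        w.join()
    return updates

if __name__ == "__main__":
    print(train())
```

The key design point is the decoupling: rollout generation and learner updates communicate only through the trajectory queue, which is what lets throughput scale even when individual trajectories exceed 40 tool-call turns.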