In this paper, we present a Retrieval-Augmented Generation (RAG) method that enhances the retrieval and inference capabilities of a model through reinforcement learning (RL), addressing a key limitation of large language models (LLMs): their tendency to produce hallucinated or outdated responses because their internal knowledge is static. Existing RL-based RAG methods, however, suffer from training instability, long inference times, and limited functionality stemming from their single-query mode. To overcome these issues, we propose a novel training framework called RAG-R1. RAG-R1 enables LLMs to adaptively draw on internal and external knowledge during inference, and extends the generation and retrieval processes from single-query mode to multi-query parallel processing, thereby reducing inference time and broadening the model's functionality. Extensive experiments on seven question-answering benchmarks demonstrate that the proposed method outperforms the best-performing baseline models by up to 13.2%, while reducing inference time by 11.1%.
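To make the multi-query idea concrete, the following is a minimal Python sketch, not the paper's implementation: it contrasts single-query retrieval, where each sub-query pays a full retrieval round-trip in sequence, with multi-query retrieval, where all sub-queries are dispatched concurrently. The retriever call `search_corpus` and the example sub-queries are hypothetical placeholders standing in for any dense or sparse retriever backend.

```python
# Illustrative sketch only; `search_corpus` is a hypothetical retriever call.
from concurrent.futures import ThreadPoolExecutor


def search_corpus(query: str, top_k: int = 3) -> list[str]:
    """Hypothetical retriever: return the top_k passages for one query."""
    return [f"passage for '{query}' #{i}" for i in range(top_k)]


def retrieve_single_query(queries: list[str]) -> list[list[str]]:
    # Single-query mode: retrieval round-trips are paid one after another.
    return [search_corpus(q) for q in queries]


def retrieve_multi_query(queries: list[str]) -> list[list[str]]:
    # Multi-query mode: all sub-queries are dispatched concurrently, so
    # total latency is roughly one round-trip instead of len(queries).
    with ThreadPoolExecutor(max_workers=max(1, len(queries))) as pool:
        return list(pool.map(search_corpus, queries))


if __name__ == "__main__":
    sub_queries = [
        "Who directed the film?",
        "When was the director born?",
    ]
    for passages in retrieve_multi_query(sub_queries):
        print(passages)
```

In this toy setting the latency saving comes purely from overlapping retrieval calls; the paper's reported 11.1% inference-time reduction additionally reflects how the trained model interleaves generation with parallel retrieval.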