This paper presents a knowledge graph retrieval-augmented generation (KG-RAG) system that combines a large language model (LLM) with a structured knowledge graph (KG) to reduce hallucinations and make the reasoning process transparent. To address the complexity of existing KG-RAG pipelines, we propose KG-R1, a reinforcement learning (RL)-based KG-RAG framework in which a single agent interacts with the KG environment, retrieves information at each reasoning step, and incorporates it into its reasoning and generation. KG-R1 is optimized with end-to-end RL, and experiments with the Qwen-2.5-3B model on knowledge graph question answering (KGQA) benchmarks demonstrate its efficiency and transferability.
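
To make the single-agent interaction scheme concrete, below is a minimal sketch of one multi-turn rollout, assuming a hypothetical `kg_env` object with a `retrieve()` method and an agent policy that emits either a retrieval query or a final answer at each turn; these names and the exact-match reward are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a single-agent KG-RAG rollout (hypothetical interfaces,
# not the paper's actual implementation).

from dataclasses import dataclass


@dataclass
class Turn:
    query: str          # retrieval action emitted by the agent
    facts: list[str]    # triples returned by the KG environment


def rollout(agent, kg_env, question: str, max_turns: int = 5):
    """Run one multi-turn episode: the agent alternates between issuing
    KG retrieval queries and, once confident, emitting a final answer."""
    history: list[Turn] = []
    for _ in range(max_turns):
        action = agent.act(question, history)       # 'retrieve: <query>' or 'answer: <text>'
        if action.startswith("answer:"):
            return action.removeprefix("answer:").strip(), history
        query = action.removeprefix("retrieve:").strip()
        facts = kg_env.retrieve(query)              # structured facts from the KG
        history.append(Turn(query=query, facts=facts))
    # Fall back to forcing an answer when the turn budget is exhausted.
    return agent.answer(question, history), history


def episode_reward(prediction: str, gold: str) -> float:
    """End-to-end RL signal on the final answer; exact match is used here
    only as a placeholder for the paper's reward design."""
    return 1.0 if prediction.strip().lower() == gold.strip().lower() else 0.0
```

The single trajectory of retrieval turns plus the terminal reward is what an end-to-end RL objective would optimize, avoiding separate modules for planning, retrieval, and answer generation.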