This paper introduces KG-R1, a novel framework for retrieval-augmented generation (RAG) over knowledge graphs (KGs). KG-R1 uses reinforcement learning (RL) to train a single agent that interacts with a KG, retrieving information at each turn and incorporating it into its reasoning and generation, with the whole process optimized through end-to-end RL. On KGQA benchmarks, KG-R1 demonstrates efficiency and transferability, achieving higher accuracy than existing methods while using a smaller base model (Qwen2.5-3B). Furthermore, once trained, KG-R1 can be applied to new KGs without modification.
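To make the single-agent, multi-turn retrieval loop described above concrete, the following is a minimal illustrative sketch. It is not the authors' implementation: the toy graph, the `agent_step` stub, and all other names are hypothetical, and the real system would place a policy LLM (optimized end-to-end with RL on answer correctness) where the stub decides whether to retrieve again or answer.

```python
# Hypothetical sketch of a multi-turn KG retrieval-and-generation loop.
# Names (ToyKG, agent_step, run_episode) are illustrative, not from KG-R1.
from dataclasses import dataclass, field


@dataclass
class ToyKG:
    """A toy knowledge graph stored as (subject, relation, object) triples."""
    triples: set = field(default_factory=lambda: {
        ("Paris", "capital_of", "France"),
        ("France", "located_in", "Europe"),
    })

    def retrieve(self, entity: str) -> list:
        """Return all triples whose subject matches the queried entity."""
        return [t for t in self.triples if t[0] == entity]


def agent_step(question: str, context: list) -> dict:
    """Stand-in for the policy LLM: decide to retrieve more or answer.

    A real agent would be a language model whose decisions are optimized
    with end-to-end RL; this stub answers once a retrieved triple's
    relation appears in the question.
    """
    for _, relation, obj in context:
        if relation in question:
            return {"action": "answer", "value": obj}
    # Otherwise, query the KG about a capitalized entity mention.
    candidates = [w.strip("?") for w in question.split()[1:] if w[0].isupper()]
    entity = candidates[0] if candidates else question.split()[0]
    return {"action": "retrieve", "value": entity}


def run_episode(question: str, kg: ToyKG, max_turns: int = 4) -> str:
    """Alternate retrieval and reasoning turns until the agent answers."""
    context = []
    for _ in range(max_turns):
        step = agent_step(question, context)
        if step["action"] == "answer":
            return step["value"]
        context.extend(kg.retrieve(step["value"]))  # append retrieved triples
    return "unknown"


if __name__ == "__main__":
    print(run_episode("What is Paris the capital_of?", ToyKG()))  # -> France
```

In an RL setup, each completed episode would yield a reward (e.g., whether the final answer matches the gold answer), which is then used to update the agent's policy end-to-end.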