Multi-stage agentic search systems built on large language models (LLMs) have demonstrated outstanding performance on complex information retrieval tasks, but they often generate inconsistent intermediate queries and follow inefficient search trajectories. To address these challenges, this paper proposes DynaSearcher, a search agent that leverages dynamic knowledge graphs and multi-reward reinforcement learning (RL). DynaSearcher guides the search process by explicitly modeling entity relationships with a knowledge graph as external structured knowledge, ensuring the factual consistency of intermediate queries and mitigating the bias introduced by irrelevant information. In addition, a multi-reward RL framework provides fine-grained control over training objectives such as search accuracy, efficiency, and response quality. Experimental results show that DynaSearcher achieves state-of-the-art answer accuracy on six multi-hop question-answering datasets, rivaling frontier LLMs while using a small model and limited computational resources. Moreover, it generalizes robustly across diverse search environments and larger model scales, underscoring its broad applicability.
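To illustrate how several training objectives can be scalarized into a single RL reward signal, the sketch below combines accuracy, efficiency, and response-quality terms with fixed weights. The reward names, weights, and functional forms are assumptions for illustration only, not the paper's exact reward design.

```python
# Hypothetical multi-reward combination for RL training.
# Reward terms and weights are illustrative, not DynaSearcher's exact design.
from dataclasses import dataclass


@dataclass
class RewardWeights:
    accuracy: float = 1.0    # correctness of the final answer
    efficiency: float = 0.3  # reward scale for using fewer search steps
    quality: float = 0.5     # response-quality score (e.g., format, faithfulness)


def combined_reward(answer_correct: bool,
                    num_search_steps: int,
                    quality_score: float,
                    w: RewardWeights = RewardWeights(),
                    max_steps: int = 8) -> float:
    """Scalarize multiple training objectives into a single RL reward."""
    r_acc = 1.0 if answer_correct else 0.0
    # Fewer search calls -> higher efficiency reward (clipped at 0).
    r_eff = max(0.0, 1.0 - num_search_steps / max_steps)
    r_qual = max(0.0, min(1.0, quality_score))
    return w.accuracy * r_acc + w.efficiency * r_eff + w.quality * r_qual


# Example: a correct answer found in 3 search steps with quality 0.9
print(combined_reward(True, 3, 0.9))  # 1.6375
```

Adjusting the individual weights is one simple way to trade off answer accuracy against search efficiency and response quality during training.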