In this paper, we present DRAGON (Dynamic RAG Benchmark On News), the first dynamic RAG (Retrieval-Augmented Generation) benchmark for the Russian language. DRAGON is built on a regularly updated corpus of Russian news and public documents and provides a comprehensive evaluation of both the retrieval and generation components. Questions are generated automatically from a knowledge graph constructed over the corpus, with four core question types derived from subgraph patterns. We release a complete evaluation framework, including the automatic question-generation pipeline, evaluation scripts reusable across languages and multilingual settings, and the benchmark data, together with a public leaderboard to encourage community participation and comparison. DRAGON addresses the limitations of existing static, English-centric RAG benchmarks and offers a resource for evaluating Russian RAG systems that reflects the dynamic nature of real-world information environments.