OmniBench RAG is a new platform that automatically evaluates Retrieval Augmented Generation (RAG) systems across domains. It was developed to address the limitations of existing RAG evaluation methods: narrow domain coverage, imprecise metrics, neglect of computational trade-offs, and the absence of a standardized framework. It spans nine knowledge domains (culture, geography, health, etc.) and uses two standardized metrics, Improvement and Transformation, to enable reproducible comparisons across models and tasks. The platform features dynamic test generation, a modular evaluation pipeline, and automatic knowledge base construction. Its evaluations reveal that RAG effectiveness varies sharply by domain, with significant performance gains in culture and performance degradations in mathematics. The source code and dataset are available on GitHub.
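
The summary above does not spell out how the two metrics are computed. As a rough illustration only, the sketch below assumes Improvement is the relative accuracy delta of the RAG pipeline over the base model, and Transformation is that delta normalized by the added retrieval latency, capturing the computational trade-off. All names, formulas, and numbers here are assumptions for illustration, not the platform's actual API or reported results.

```python
from dataclasses import dataclass


@dataclass
class EvalResult:
    accuracy: float         # fraction of benchmark questions answered correctly
    latency_seconds: float  # mean end-to-end time per question


def improvement(base: EvalResult, rag: EvalResult) -> float:
    """Relative accuracy gain of RAG over the base model (assumed definition)."""
    return (rag.accuracy - base.accuracy) / base.accuracy


def transformation(base: EvalResult, rag: EvalResult) -> float:
    """Accuracy gain per second of extra latency (assumed definition):
    positive when retrieval pays for its cost, negative when it hurts."""
    extra_latency = rag.latency_seconds - base.latency_seconds
    if extra_latency <= 0:
        return float("inf")  # retrieval added no measurable overhead
    return improvement(base, rag) / extra_latency


if __name__ == "__main__":
    # Hypothetical numbers, echoing only the direction of the reported
    # pattern: RAG helps in culture, hurts in mathematics.
    culture_base, culture_rag = EvalResult(0.62, 1.1), EvalResult(0.78, 2.4)
    math_base, math_rag = EvalResult(0.71, 1.0), EvalResult(0.64, 2.2)
    print(f"culture: improvement={improvement(culture_base, culture_rag):+.2%}, "
          f"transformation={transformation(culture_base, culture_rag):+.3f}/s")
    print(f"math:    improvement={improvement(math_base, math_rag):+.2%}, "
          f"transformation={transformation(math_base, math_rag):+.3f}/s")
```

Under these assumed definitions, a domain where retrieval degrades accuracy yields a negative Improvement and a negative Transformation, making the domain-specific variability directly comparable across models.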