This paper proposes "Evaluation Agent," a novel framework for the efficient evaluation of recently developed visual generative models. Existing evaluation methods for visual generative models require large numbers of image or video samples, resulting in high computational costs; moreover, they fail to address user-specific needs and typically provide only simple numerical scores. The Evaluation Agent adopts a human-like strategy, performing dynamic, multi-round evaluations with only a small number of samples per round and producing customized analysis results (a minimal sketch of such a loop is given below). Experiments demonstrate that this approach reduces evaluation time to roughly 10% of that required by existing methods while delivering comparable results. The framework is open-sourced and is expected to advance research on visual generative models and their efficient evaluation.
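
To make the multi-round procedure concrete, below is a minimal sketch of what such an agent-driven evaluation loop could look like, assuming a planner that chooses the next probe from past results, a small per-round sample budget, and a stubbed scoring step. All names here (plan_next_probe, generate_samples, score_samples, EvalState) are hypothetical placeholders for illustration, not the paper's actual API.

```python
# Hypothetical sketch of a multi-round, agent-driven evaluation loop.
# Function and class names are illustrative, not the paper's implementation.

from dataclasses import dataclass, field
import random

MAX_ROUNDS = 5          # fixed budget of evaluation rounds
SAMPLES_PER_ROUND = 4   # only a handful of samples generated per round

@dataclass
class EvalState:
    user_query: str                               # e.g. "How well are object counts handled?"
    history: list = field(default_factory=list)   # (prompt, score) pairs from past rounds

def plan_next_probe(state: EvalState) -> str:
    """Pick the next prompt/dimension to probe, conditioned on results so far (stub)."""
    return f"probe-{len(state.history)} for: {state.user_query}"

def generate_samples(prompt: str, n: int) -> list:
    """Query the visual generative model under test (stubbed with placeholders)."""
    return [f"{prompt}-sample-{i}" for i in range(n)]

def score_samples(samples: list) -> float:
    """Apply an off-the-shelf metric or checker to the small sample batch (stubbed)."""
    return random.random()

def should_stop(state: EvalState) -> bool:
    """Stop once enough evidence is gathered or the round budget is spent."""
    return len(state.history) >= MAX_ROUNDS

def run_evaluation(user_query: str) -> str:
    state = EvalState(user_query=user_query)
    while not should_stop(state):
        prompt = plan_next_probe(state)                    # dynamic: depends on prior rounds
        samples = generate_samples(prompt, SAMPLES_PER_ROUND)
        score = score_samples(samples)
        state.history.append((prompt, score))
    # Summarize the trajectory into a user-facing analysis rather than a single number.
    avg = sum(s for _, s in state.history) / len(state.history)
    return f"{len(state.history)} rounds, mean score {avg:.2f} for '{user_query}'"

if __name__ == "__main__":
    print(run_evaluation("prompt adherence for object counting"))
```

The key point the sketch illustrates is that each round's probe depends on the accumulated history, so the agent spends its small sample budget where it is most informative instead of generating a large fixed benchmark set up front.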