This paper presents requirements and task-specific metrics for evaluating ontologies used for automatic question generation (AQG) in educational environments. Although previous research has shown that ontology quality influences the effectiveness of AQG, comprehensive research on which ontology features influence AQG, and how, has been lacking. This paper therefore applies the ROMEO methodology to derive ontology evaluation metrics from an expert evaluation of generated questions. The metrics are then applied to ontologies used in previous studies to validate the findings. The analysis demonstrates that ontology characteristics significantly affect the effectiveness of AQG and that performance varies across ontologies, highlighting the importance of assessing ontology quality in AQG tasks.