Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Evaluating the Fitness of Ontologies for the Task of Question Generation

Created by
  • Haebom

Author

Samah Alkhuzaey, Floriana Grasso, Terry R. Payne, Valentina Tamma

Outline

This paper presents requirements and task-specific metrics for evaluating the fitness of ontologies for automatic question generation (AQG) in educational settings. Although previous research has shown that ontology quality influences the effectiveness of AQG, a comprehensive account of which ontology features matter, and how, has been lacking. The authors therefore apply the ROMEO methodology to derive ontology evaluation metrics from an expert assessment of generated questions, and then apply those metrics to ontologies used in previous studies to validate the findings. The analysis shows that ontology characteristics significantly affect AQG effectiveness and that performance varies across ontologies, underscoring the importance of assessing ontology quality for AQG tasks.
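To make the idea of task-specific ontology metrics concrete, here is a minimal sketch. Note that the specific measures below (class count, hierarchy depth, annotation coverage) are hypothetical stand-ins, not the metrics derived in the paper, which come from the ROMEO methodology and expert evaluation; the toy ontology and `definitions` mapping are invented for illustration.

```python
# Hypothetical example: computing simple structural metrics for an
# ontology represented as a dict mapping each class to its direct
# subclasses. These are NOT the paper's ROMEO-derived metrics.

# Toy class hierarchy (invented for illustration).
ontology = {
    "Cell": ["Neuron", "GlialCell"],
    "Neuron": ["MotorNeuron"],
    "GlialCell": [],
    "MotorNeuron": [],
}

# Textual definitions, a plausible input for generating question stems.
definitions = {"Cell": "The basic unit of life.", "Neuron": "An excitable cell."}

def depth(cls, onto):
    """Length of the longest subclass chain starting at cls."""
    children = onto.get(cls, [])
    return 1 + max((depth(c, onto) for c in children), default=0)

def metrics(onto, defs):
    """Return a few illustrative fitness indicators for the ontology."""
    n = len(onto)
    return {
        "class_count": n,                       # overall size
        "max_depth": max(depth(c, onto) for c in onto),  # hierarchy depth
        "annotation_coverage": sum(1 for c in onto if c in defs) / n,
    }

print(metrics(ontology, definitions))
# → {'class_count': 4, 'max_depth': 3, 'annotation_coverage': 0.5}
```

An ontology with shallow hierarchies or sparse annotations would score poorly on measures like these, which is one intuition for why AQG performance varies across ontologies.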

Takeaways, Limitations

Takeaways:
The paper contributes to the quality evaluation of ontologies by presenting a systematic evaluation framework, based on the ROMEO methodology, together with task-specific metrics for AQG.
By empirically demonstrating the impact of ontology characteristics on AQG performance, it provides important guidance for AQG system development.
It provides a method for effectively evaluating an ontology's suitability for AQG using the presented metrics.
Limitations:
Further research is needed to determine the generalizability of the presented metrics.
The evaluation results may be limited to a specific question generation model.
A comprehensive analysis of different types of ontologies may be lacking.