Daily Arxiv

This page collects papers related to artificial intelligence published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
The copyright of each paper belongs to its authors and their institutions; please cite the source when sharing.

Automated Model Evaluation for Object Detection via Prediction Consistency and Reliability

Created by
  • Haebom

Author

Seungju Yoo, Hyuk Kwon, Joong-Won Hwang, Kibok Lee

AutoEval: An Automated Model Evaluation Framework for Object Detection

Outline

This paper presents AutoEval, an automated model evaluation framework developed to reduce the manual annotation effort required to evaluate object detection models. The paper proposes a novel metric, Prediction Consistency and Reliability (PCR), which estimates detection performance without ground-truth labels by measuring the spatial consistency between bounding boxes before and after non-maximum suppression (NMS), together with the reliability (confidence) of the overlapping boxes. For a more realistic and scalable evaluation, the authors construct meta-datasets by applying image corruptions of varying severity.
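The idea of scoring a detector from its own pre- and post-NMS outputs can be sketched as follows. This is a hypothetical illustration, not the paper's exact formulation: the helper names (`iou`, `nms`, `pcr_score`), the fallback for lone boxes, and the way consistency and reliability are combined are all assumptions made for the sketch.

```python
# Hypothetical PCR-style sketch: score a detector from its own outputs,
# without ground truth, using pre-/post-NMS box agreement.
# Each detection is a (box, confidence) pair with box = (x1, y1, x2, y2).

def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda t: (t[2] - t[0]) * (t[3] - t[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def nms(dets, iou_thr=0.5):
    """Greedy NMS. Returns (kept, suppressed_by), where suppressed_by maps
    each suppressed detection index to the kept index that removed it."""
    order = sorted(range(len(dets)), key=lambda i: dets[i][1], reverse=True)
    kept, suppressed_by = [], {}
    for i in order:
        for k in kept:
            if iou(dets[i][0], dets[k][0]) >= iou_thr:
                suppressed_by[i] = k
                break
        else:
            kept.append(i)
    return kept, suppressed_by

def pcr_score(dets, iou_thr=0.5):
    """Per-image score combining, for each kept box, the spatial consistency
    (mean IoU with the boxes it suppressed) and the reliability (mean
    confidence of those suppressed boxes). A box that suppressed nothing
    falls back to its own confidence (an assumption of this sketch)."""
    kept, suppressed_by = nms(dets, iou_thr)
    scores = []
    for k in kept:
        group = [i for i, j in suppressed_by.items() if j == k]
        if not group:
            scores.append(dets[k][1])
            continue
        consistency = sum(iou(dets[k][0], dets[i][0]) for i in group) / len(group)
        reliability = sum(dets[i][1] for i in group) / len(group)
        scores.append(consistency * reliability)
    return sum(scores) / len(scores) if scores else 0.0
```

The intuition: a well-performing detector produces tight clusters of confident, mutually overlapping boxes around true objects, so its kept boxes agree strongly (high IoU, high confidence) with the boxes NMS discarded; a degraded detector produces scattered, low-confidence duplicates, lowering the score.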

Takeaways, Limitations

Takeaways:
Development of the AutoEval framework, which efficiently evaluates object detection models without manual annotation.
Proposal of the PCR metric, which estimates performance using bounding box information before and after NMS.
More accurate performance estimation than existing automated model evaluation methods.
More realistic evaluation enabled by meta-datasets built with image corruptions of varying severity.
Limitations:
The paper does not explicitly discuss its Limitations.