Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please credit the source when sharing.

Is Complex Query Answering Really Complex?

Created by
  • Haebom

Author

Cosimo Gregucci, Bo Xiong, Daniel Hernandez, Lorenzo Loconte, Pasquale Minervini, Steffen Staab, Antonio Vergari

Outline

This paper argues that current benchmarks for complex query answering (CQA) over knowledge graphs (KGs) do not adequately reflect real-world complexity. A large proportion of the queries in existing benchmarks (up to 98%) can be reduced to simpler problems such as link prediction, and state-of-the-art CQA models degrade significantly on the remaining queries that cannot be simplified this way. The paper therefore proposes a set of more challenging benchmarks that genuinely require multi-hop reasoning and better reflect the structure of real-world KGs, exposing the limitations of existing CQA methods.
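To make the core observation concrete, below is a minimal sketch (not the authors' code) of how a 2-hop conjunctive query can collapse into plain link prediction when its first hop already appears in the training graph; the triple format, entity names, and helper functions are illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions, not the paper's code):
# a 2-hop query (e, r1, ?x) AND (?x, r2, ?y) reduces to 1-hop link
# prediction when the first hop is already present in the training
# triples, so only the final link actually needs to be predicted.

train_triples = {
    ("Marie_Curie", "born_in", "Warsaw"),   # first hop is already known
    ("Warsaw", "located_in", "Poland"),
}

def first_hop_entities(head, relation, triples):
    """Entities ?x such that (head, relation, ?x) is observed in training."""
    return {t for (h, r, t) in triples if h == head and r == relation}

def is_reducible_2hop(head, r1, triples):
    """A 2-hop query is reducible if some intermediate entity for the
    first hop is already in the training graph; answering it then only
    requires predicting the single remaining link (?x, r2, ?y)."""
    return len(first_hop_entities(head, r1, triples)) > 0

# (Marie_Curie, born_in, ?x) AND (?x, located_in, ?y):
# the first hop is observed, so this "complex" query is effectively
# a single link-prediction problem.
print(is_reducible_2hop("Marie_Curie", "born_in", train_triples))  # True
```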

Takeaways, Limitations

Takeaways: Reveals the limitations of existing CQA benchmarks and proposes new benchmarks with more realistic difficulty, pointing a direction for future CQA research. Provides a foundation for evaluating the true performance of existing CQA models more accurately.
Limitations: The proposed benchmarks cannot be assumed to capture the complexity of all real-world KGs perfectly. Their scale and diversity may require further validation, and possible bias toward certain types of KGs should be considered.