Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

GDGB: A Benchmark for Generative Dynamic Text-Attributed Graph Learning

Created by
  • Haebom

Author

Jie Peng, Jiarui Ji, Runlin Lei, Zhewei Wei, Yongchao Liu, Chuntao Hong

Outline

In this paper, we propose the Generative DyTAG Benchmark (GDGB), a new benchmark for dynamic text-attributed graph (DyTAG) generation. To address the low text quality of existing DyTAG datasets and the field's focus on discriminative tasks, we construct eight DyTAG datasets with high-quality text attributes and define two new generation tasks: Transductive Dynamic Graph Generation (TDGG) and Inductive Dynamic Graph Generation (IDGG). TDGG generates a DyTAG based on given sets of source and destination nodes, while IDGG models dynamic graph expansion, including the creation of new nodes. To enable rigorous evaluation, we present multifaceted metrics that assess structural, temporal, and textual quality, along with GAG-General, an LLM-based multi-agent framework for DyTAG generation. Experimental results show that GDGB enables rigorous evaluation of TDGG and IDGG and reveals the interplay between structural and textual features in DyTAG generation.
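The summary above distinguishes the two generation settings mainly by what the generator is allowed to produce. The following is a minimal illustrative sketch of that distinction; the data structures and function names (TemporalTextEdge, DyTAG, transductive_generation, inductive_generation) are assumptions made for this example and are not the actual GDGB or GAG-General API.

from dataclasses import dataclass, field
from typing import List, Set


@dataclass
class TemporalTextEdge:
    """One timestamped, text-attributed interaction between two nodes."""
    src: str
    dst: str
    timestamp: float
    text: str


@dataclass
class DyTAG:
    """A dynamic text-attributed graph: text-attributed nodes plus a timestamped edge stream."""
    node_texts: dict = field(default_factory=dict)   # node id -> textual attribute
    edges: List[TemporalTextEdge] = field(default_factory=list)

    def add_edge(self, edge: TemporalTextEdge) -> None:
        # Edges arrive in temporal order; new nodes may appear together with their text.
        self.edges.append(edge)


def transductive_generation(seed: DyTAG, src_nodes: Set[str], dst_nodes: Set[str]) -> DyTAG:
    """TDGG (sketch): generate future interactions only among the given source/destination node sets."""
    generated = DyTAG(node_texts=dict(seed.node_texts), edges=list(seed.edges))
    # A real generator (e.g., an LLM-based multi-agent framework) would propose
    # (src, dst, timestamp, text) tuples restricted to src_nodes x dst_nodes;
    # here we only illustrate the task's input/output contract.
    return generated


def inductive_generation(seed: DyTAG) -> DyTAG:
    """IDGG (sketch): expand the graph over time, allowing brand-new nodes to be created."""
    generated = DyTAG(node_texts=dict(seed.node_texts), edges=list(seed.edges))
    # In the inductive setting the generator may also invent new nodes with their
    # own textual attributes before attaching new timestamped edges to them.
    generated.node_texts.setdefault("new_node_0", "generated node description")
    return generated


if __name__ == "__main__":
    seed = DyTAG(node_texts={"u1": "user who posts about graphs", "v1": "forum thread on GNNs"})
    seed.add_edge(TemporalTextEdge("u1", "v1", 0.0, "initial comment"))
    print(len(transductive_generation(seed, {"u1"}, {"v1"}).edges))
    print(len(inductive_generation(seed).node_texts))

In this reading, TDGG fixes the node vocabulary and asks the model to generate the interaction structure, timing, and edge text, whereas IDGG additionally requires generating new nodes and their textual attributes.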

Takeaways, Limitations

Takeaways:
Contributes to the advancement of DyTAG generation research by providing GDGB, a collection of DyTAG datasets with high-quality text attributes.
Expands the scope of research by introducing two new DyTAG generation tasks, TDGG and IDGG.
Enables rigorous and reproducible benchmarking through multifaceted evaluation metrics and the LLM-based generative framework GAG-General.
Provides important insights into the interplay between structural and textual features in DyTAG generation.
Limitations:
The number and diversity of datasets included in GDGB should be expanded in future work.
The comprehensiveness of the proposed evaluation metrics may need to be further improved.
Further research may be needed to improve the performance and generalizability of GAG-General.