Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Can Large Language Models Act as Ensembler for Multi-GNNs?

Created by
  • Haebom

Authors

Hanqi Duan, Yao Cheng, Jianxiang Yu, Yao Liu, Xiang Li

Outline

This paper highlights that graph neural networks (GNNs), while effective at learning from graph-structured data, lack the ability to understand the semantic content of rich textual node attributes, and observes that no single existing GNN model performs consistently well across diverse datasets. To address this, the authors propose LensGNN, which uses a large language model (LLM) as an ensembler for multiple GNNs. LensGNN first maps the representations of the different GNNs into a common space, then aligns this GNN space with the LLM via LoRA fine-tuning, and finally injects the resulting graph tokens together with the textual information into the LLM. By ensembling multiple GNNs in this way, the model leverages the LLM's strengths to deepen its understanding of both textual semantics and graph structure. Experimental results show that LensGNN outperforms existing models. This work advances ensemble learning on text-attributed graphs by providing a robust solution for integrating semantic and structural information. The code and data are available on GitHub.
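The sketch below is not the authors' implementation; it is a minimal illustration of the pipeline described above: several GNN encoders produce node representations, a projector maps each one into the LLM's embedding dimension so they can serve as "graph tokens", and those tokens are prepended to the text-token embeddings before being fed to the (LoRA-fine-tuned) LLM. All class names (SimpleGNN, GraphTokenProjector) and dimensions are hypothetical, and the LLM and its LoRA fine-tuning are mocked out for brevity.

```python
# Minimal sketch (not the authors' code) of LLM-as-ensembler for multiple GNNs:
# project each GNN's node representation into the LLM embedding space and
# prepend the resulting "graph tokens" to the text tokens.
import torch
import torch.nn as nn


class SimpleGNN(nn.Module):
    """Placeholder GNN: one round of mean-neighbour aggregation + MLP."""
    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(),
                                 nn.Linear(hid_dim, hid_dim))

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1)
        h = adj @ x / deg              # mean aggregation over neighbours
        return self.mlp(h)             # [num_nodes, hid_dim]


class GraphTokenProjector(nn.Module):
    """Maps each GNN's output into the LLM embedding space so that the
    heterogeneous GNN representations become comparable graph tokens."""
    def __init__(self, gnn_dims: list[int], llm_dim: int):
        super().__init__()
        self.projs = nn.ModuleList(nn.Linear(d, llm_dim) for d in gnn_dims)

    def forward(self, gnn_outputs: list[torch.Tensor]) -> torch.Tensor:
        # One graph token per GNN per node: [num_nodes, num_gnns, llm_dim]
        return torch.stack([p(h) for p, h in zip(self.projs, gnn_outputs)], dim=1)


if __name__ == "__main__":
    num_nodes, feat_dim, llm_dim = 5, 16, 64
    x = torch.randn(num_nodes, feat_dim)                    # node features
    adj = (torch.rand(num_nodes, num_nodes) > 0.5).float()  # toy adjacency

    gnns = [SimpleGNN(feat_dim, 32), SimpleGNN(feat_dim, 48)]
    projector = GraphTokenProjector([32, 48], llm_dim)

    graph_tokens = projector([g(x, adj) for g in gnns])     # [5, 2, 64]

    # Text side: embeddings of each node's text attribute (mocked here).
    text_tokens = torch.randn(num_nodes, 10, llm_dim)       # [5, seq_len, 64]

    # Prepend graph tokens to the text tokens; the combined sequence would be
    # fed to the LLM, which acts as the ensembler after LoRA fine-tuning
    # (omitted here; a library such as PEFT could be used for that step).
    llm_input = torch.cat([graph_tokens, text_tokens], dim=1)  # [5, 12, 64]
    print(llm_input.shape)
```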

Takeaways and Limitations

Takeaways:
  • Presents a novel method for effectively ensembling multiple GNNs with an LLM.
  • Improves performance by integrating textual semantic information with graph structural information.
  • Overcomes limitations of existing GNN models and achieves strong performance.
  • Contributes to the advancement of ensemble learning on text-attributed graphs.
Limitations:
  • The computational cost and resource consumption of the LLM need to be considered.
  • Generalization to a wider variety of graph datasets remains to be verified.
  • Further research is needed to optimize the LoRA fine-tuning parameters.
  • Dependence on a specific LLM, and applicability to other LLMs, requires further study.