Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized with Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please credit the source when sharing.

Can Large Language Models Act as Ensembler for Multi-GNNs?

Created by
  • Haebom

Authors

Hanqi Duan, Yao Cheng, Jianxiang Yu, Yao Liu, Xiang Li

Outline

This paper proposes LensGNN, a model that leverages a large language model (LLM) to address the limitations of graph neural networks (GNNs), which have emerged as powerful models for learning on graph-structured data. Existing GNNs cannot understand the semantics of rich textual node attributes, and no single GNN architecture performs consistently well across diverse datasets. LensGNN first aligns the representations of multiple GNNs by mapping them into a shared space, and then aligns that space with the LLM through LoRA fine-tuning. By injecting graph tokens together with text information into the LLM, it ensembles multiple GNNs while leveraging the LLM's strengths, enabling a deeper understanding of both textual semantics and graph structure. Experimental results show that LensGNN outperforms existing models, providing a strong solution for integrating semantic and structural information and advancing ensemble learning on text-attributed graphs.
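As a rough illustration of this pipeline (not the authors' code; the module names, dimensions, and the mean-pooling ensemble below are assumptions), the following PyTorch sketch projects the embeddings of several GNNs into a shared space and then into the LLM's token-embedding space, producing "graph tokens" that could be prepended to the text prompt of a LoRA-tuned LLM:

```python
# Minimal sketch of LensGNN-style alignment (illustrative, not the paper's code):
# several GNN encoders produce node embeddings, each is projected into a shared
# space, fused, and mapped into the LLM token-embedding space as "graph tokens".
import torch
import torch.nn as nn

class MultiGNNAligner(nn.Module):
    def __init__(self, gnn_dims, shared_dim, llm_embed_dim, num_graph_tokens=1):
        super().__init__()
        # One projection per GNN maps its embedding into the common space.
        self.to_shared = nn.ModuleList(
            [nn.Linear(d, shared_dim) for d in gnn_dims]
        )
        # A second projector lifts the fused graph representation into the
        # LLM's token-embedding space, yielding `num_graph_tokens` soft tokens.
        self.num_graph_tokens = num_graph_tokens
        self.to_llm = nn.Linear(shared_dim, llm_embed_dim * num_graph_tokens)

    def forward(self, gnn_embeddings):
        # gnn_embeddings: list of [batch, d_i] tensors, one per GNN.
        aligned = [proj(h) for proj, h in zip(self.to_shared, gnn_embeddings)]
        fused = torch.stack(aligned, dim=0).mean(dim=0)   # simple ensemble (assumption)
        graph_tokens = self.to_llm(fused)                 # [batch, llm_embed_dim * k]
        return graph_tokens.view(fused.size(0), self.num_graph_tokens, -1)

# Usage with dummy embeddings from two hypothetical GNNs (e.g. a GCN and a GAT):
if __name__ == "__main__":
    aligner = MultiGNNAligner(gnn_dims=[64, 128], shared_dim=256,
                              llm_embed_dim=4096, num_graph_tokens=4)
    h_gcn, h_gat = torch.randn(8, 64), torch.randn(8, 128)
    graph_tokens = aligner([h_gcn, h_gat])   # shape: [8, 4, 4096]
    # These soft tokens would be concatenated with the text-token embeddings
    # before being fed to the LoRA-fine-tuned LLM.
    print(graph_tokens.shape)
```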

Takeaways, Limitations

Takeaways:
  • Presents a novel method for improving the performance of diverse GNNs by leveraging an LLM.
  • Develops a model that effectively integrates textual semantic information with graph structural information.
  • Overcomes the limitations of existing GNN models and improves performance.
  • Contributes to the advancement of ensemble learning on text-attributed graphs.
Limitations:
  • Further research is needed on the generalization performance of LensGNN.
  • More experimental results on diverse types of graph datasets are needed.
  • The computational cost and resource consumption of the LLM should be considered.
  • Further research is needed on optimizing the LoRA fine-tuning process and selecting its hyperparameters (a sketch of typical settings follows this list).
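The summary does not report which base LLM or LoRA hyperparameters LensGNN uses, so the sketch below is purely illustrative of the knobs such tuning involves (rank, scaling, dropout, target modules), using the Hugging Face PEFT library; the model name and all values are assumptions:

```python
# Illustrative LoRA configuration (not the paper's settings): shows the typical
# hyperparameters that would need tuning when adapting an LLM with LoRA.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")  # placeholder base LLM
lora_cfg = LoraConfig(
    r=8,                                   # low-rank dimension of the adapter matrices
    lora_alpha=16,                         # scaling factor applied to the adapter output
    lora_dropout=0.05,                     # dropout on the adapter path
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```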