Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please credit the source when sharing.

From Cluster Assumption to Graph Convolution: Graph-based Semi-Supervised Learning Revisited

Created by
  • Haebom

Author

Zheng Wang, Hongming Ding, Li Pan, Jianhua Li, Zhiguo Gong, Philip S. Yu

Outline

Within a unified optimization framework, this paper theoretically examines the relationship between conventional cluster-assumption-based shallow methods and the more recently emerging graph convolutional networks (GCNs) in graph-based semi-supervised learning (GSSL). Specifically, it shows that, unlike these shallow methods, conventional GCNs may not exploit both the graph structure and the label information at every layer. Building on this analysis, the authors propose three new graph convolution methods: OGC, a supervised method that utilizes label information; GGC, an unsupervised method that preserves graph structure; and GGCM, its multi-scale variant. Extensive experiments demonstrate their effectiveness, and the source code is publicly available.
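To make the central point concrete, the sketch below illustrates one way a propagation scheme can use both the graph structure and the labels at every step: each iteration applies a graph-convolution smoothing step and then a supervised correction on the labelled nodes. This is only a minimal illustration of that idea, not the paper's actual OGC/GGC/GGCM updates; the function names, the simple linear classifier, and the hyperparameters are all assumptions made for the example.

```python
# Minimal sketch (illustrative, NOT the paper's exact method): alternate a
# graph-convolution smoothing step with a label-driven correction step, so
# every propagation step uses both graph structure and label information.
import numpy as np

def normalize_adj(A):
    """Symmetrically normalize an adjacency matrix with self-loops: D^-1/2 (A+I) D^-1/2."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def propagate_with_labels(A, X, Y, train_mask, n_steps=16, beta=0.1):
    """A: (n, n) adjacency, X: (n, d) features, Y: (n, c) one-hot labels,
    train_mask: boolean (n,) marking labelled nodes. Returns class logits."""
    S = normalize_adj(A)
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(X.shape[1], Y.shape[1]))  # simple linear classifier
    U = X.astype(float).copy()                                 # node embeddings
    for _ in range(n_steps):
        U = S @ U                                  # smoothing: graph-convolution step
        logits = U @ W
        probs = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)
        err = probs - Y                            # cross-entropy gradient w.r.t. logits
        err[~train_mask] = 0.0                     # only labelled nodes supply supervision
        grad_U = err @ W.T                         # correction pulling labelled nodes toward Y
        grad_W = U.T @ err / max(int(train_mask.sum()), 1)
        U -= beta * grad_U
        W -= beta * grad_W
    return U @ W
```

The only point of the sketch is that the label-driven correction happens inside the propagation loop rather than after it, which is the property the summary says standard GCN layers may lack.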

Takeaways, Limitations

Takeaways:
We theoretically elucidate the limitations of existing GCNs and propose new GCN-based methods to address them.
We propose new GSSL methods that effectively exploit both label information and graph structure.
We experimentally verify the superiority of the proposed methods and improve reproducibility by releasing the source code.
Limitations:
The datasets used to compare the performance of the proposed methods may lack diversity.
Performance evaluation on more complex and large-scale graph data is required.
The theoretical analysis of the proposed methods could be deepened further.