
Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Multi-View Node Pruning for Accurate Graph Representation

Created by
  • Haebom

Author

Jiseong Park, Hanjin Kim, Seojin Kim, Jueun Choi, Doheon Lee, Sung Ju Hwang

Outline

In this paper, we propose a novel method, Multi-View Pruning (MVP), to improve the efficiency of graph pooling. Existing graph pooling methods tend to remove nodes based mainly on node degree; MVP addresses this by assessing node importance from multiple viewpoints. Specifically, MVP decomposes the input graph into multiple views and learns a score for each node by jointly considering the reconstruction loss and the task loss. We demonstrate improved performance over existing graph pooling methods on various benchmark datasets, and our analysis confirms that multi-view encoding and the reconstruction loss are the key factors behind the improvement.
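The pruning idea described above can be illustrated with a minimal sketch. Note this is not the authors' implementation: the two views (a structural degree view and a feature-norm view), the fixed view weights, and the function names are all illustrative assumptions; the paper learns per-node scores with reconstruction and task losses rather than computing them in closed form.

```python
import numpy as np

def multi_view_scores(adj, feats, view_weights=(0.5, 0.5)):
    """Combine per-node scores from multiple views (illustrative, not the paper's learned scorer)."""
    # View 1: structural importance, approximated by normalized node degree.
    deg = adj.sum(axis=1)
    structural = deg / (deg.max() + 1e-8)
    # View 2: feature importance, approximated by feature-vector magnitude.
    feat_norm = np.linalg.norm(feats, axis=1)
    featural = feat_norm / (feat_norm.max() + 1e-8)
    # Weighted combination across views (weights are a hypothetical choice here;
    # MVP learns scores via reconstruction + task losses instead).
    return view_weights[0] * structural + view_weights[1] * featural

def prune_nodes(adj, feats, keep_ratio=0.5):
    """Keep the top-scoring fraction of nodes and return the induced subgraph."""
    scores = multi_view_scores(adj, feats)
    k = max(1, int(len(scores) * keep_ratio))
    keep = np.sort(np.argsort(scores)[::-1][:k])  # indices of retained nodes
    return adj[np.ix_(keep, keep)], feats[keep], keep

# Toy star graph: node 0 is the hub, nodes 1-3 are leaves.
adj = np.array([[0, 1, 1, 1],
                [1, 0, 0, 0],
                [1, 0, 0, 0],
                [1, 0, 0, 0]], dtype=float)
feats = np.eye(4)
pooled_adj, pooled_feats, kept = prune_nodes(adj, feats, keep_ratio=0.5)
# The high-degree hub (node 0) survives pruning.
```

Combining several views before ranking is the point: a node that looks unimportant by degree alone may still carry distinctive features, so it can survive pruning under the combined score.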

Takeaways, Limitations

Takeaways:
Effectively addresses the simplistic, degree-based node removal of existing graph pooling methods by considering multiple viewpoints and the reconstruction loss.
It is highly versatile as it is compatible with various graph pooling methods.
The experimental results verify the superiority of the proposed method and the importance of its key elements.
Effectively identifies low-importance nodes based on domain knowledge.
Limitations:
The effectiveness of the proposed MVP may vary depending on the graph pooling method used and the dataset.
Additional research may be needed on how to generate different perspectives (e.g., optimal number of perspectives, perspective generation strategies, etc.).
Further research may be needed on adjusting the weights of reconstruction loss and task loss.
Further validation of its applicability and efficiency for large-scale graphs is needed.