Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the service is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please credit the source when sharing.

Graph Unlearning: Efficient Node Removal in Graph Neural Networks

Created by
  • Haebom

Author

Faqian Guan, Tianqing Zhu, Zhoutian Wang, Wei Ren, Wanlei Zhou

Outline

This paper proposes three novel node unlearning methods for efficiently removing sensitive training data from graph neural network (GNN) models and reducing privacy risks. Existing approaches suffer from dependence on specific GNN architectures, insufficient use of graph topology, and a trade-off between performance and complexity. To address these limitations, the authors propose Class-based Label Replacement, Topology-guided Neighbor Mean Posterior Probability, and Class-consistent Neighbor Node Filtering; the latter two exploit the topological structure of the graph to perform effective node unlearning. The three methods are evaluated on three benchmark datasets in terms of model utility, unlearning utility, and unlearning efficiency, and are shown to outperform existing methods. This work contributes to improving the privacy and security of GNN models and offers valuable insights for the field of node unlearning.
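To make the topology-guided idea concrete, here is a minimal sketch of what a "neighbor mean posterior probability" step could look like: the node to be unlearned gets a soft-label target averaged from its neighbors' predicted class distributions. This is an illustrative reconstruction, not the paper's actual implementation; the function name, data layout, and uniform-distribution fallback for isolated nodes are all assumptions.

```python
import numpy as np

def neighbor_mean_posterior(posteriors, adjacency, unlearn_nodes):
    """Sketch of a topology-guided unlearning target (hypothetical).

    For each node to unlearn, build a soft-label target from the mean
    posterior (predicted class distribution) of its graph neighbors.

    posteriors:    (N, C) array of the model's class probabilities per node.
    adjacency:     dict mapping node id -> list of neighbor node ids.
    unlearn_nodes: iterable of node ids whose influence should be removed.

    Returns a dict mapping node id -> (C,) soft-label target.
    """
    targets = {}
    num_classes = posteriors.shape[1]
    for v in unlearn_nodes:
        neighbors = adjacency.get(v, [])
        if neighbors:
            # Average the neighbors' posteriors, exploiting graph topology.
            targets[v] = posteriors[neighbors].mean(axis=0)
        else:
            # Isolated node: fall back to a uniform distribution (assumption).
            targets[v] = np.full(num_classes, 1.0 / num_classes)
    return targets
```

The model would then be fine-tuned toward these neighbor-derived targets so the unlearned node's original label no longer shapes its prediction, while local graph structure is preserved.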

Takeaways, Limitations

Takeaways:
  • Presents efficient node unlearning methods that improve the privacy and security of GNN models.
  • Overcomes the limitations of existing methods by exploiting graph topology.
  • Experimentally verifies the superiority of the three proposed node unlearning methods.
  • Provides a comprehensive evaluation covering model utility, unlearning utility, and unlearning efficiency.

Limitations:
  • Further research is needed on the generalization performance of the proposed methods.
  • Experiments should be extended to a wider range of GNN architectures and datasets.
  • Performance and efficiency remain to be verified in real-world deployments.