Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Efficient Knowledge Graph Unlearning with Zeroth-order Information

Created by
  • Haebom

Author

Yang Xiao, Ruimeng Ye, Bohan Liu, Xiaolong Ma, Bo Hui

Outline

This paper presents an efficient knowledge graph (KG) unlearning algorithm that avoids the cost of full retraining, motivated by regulations such as the Right to be Forgotten that require removing training data and its influence from models. KG unlearning is nontrivial because of the unique structure of knowledge graphs and the semantic relationships between entities, and estimating the influence of removed components incurs significant computational overhead on large graphs. The authors therefore define an influence function for KG unlearning and propose approximating model sensitivity without the costly first- and second-order derivative computations normally required for parameter updates. Specifically, they use a Taylor expansion to estimate the parameter change caused by data removal and, since first- and second-order gradients dominate the computational load, approximate the inverse Hessian-vector product using the Fisher matrix and zeroth-order optimization, without constructing a computational graph. Experiments show that the proposed method significantly outperforms existing state-of-the-art graph unlearning baselines in both unlearning efficiency and quality. The code will be released at https://github.com/NKUShaw/ZOWFKGIF .
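The paper's exact formulation is not reproduced here, but the two ingredients it combines (a zeroth-order, finite-difference gradient estimate that needs no computational graph, and a Fisher-matrix stand-in for the inverse Hessian in the influence-function update) can be illustrated with a minimal NumPy sketch. All function names, the diagonal-Fisher simplification, and the toy quadratic loss below are hypothetical, for intuition only:

```python
import numpy as np

def zo_gradient(loss_fn, theta, mu=1e-4, n_samples=20, rng=None):
    """SPSA-style zeroth-order gradient estimate of loss_fn at theta.

    Probes the loss along random Rademacher directions with symmetric
    finite differences, so no autograd/computational graph is needed.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    grad = np.zeros_like(theta)
    for _ in range(n_samples):
        u = rng.choice([-1.0, 1.0], size=theta.shape)  # random +/-1 direction
        delta = (loss_fn(theta + mu * u) - loss_fn(theta - mu * u)) / (2 * mu)
        grad += delta * u
    return grad / n_samples

def unlearn_step(loss_removed, theta, fisher_diag, damping=1e-3):
    """One influence-style update, theta + F^{-1} g, where a damped
    diagonal Fisher matrix stands in for the Hessian and g is the
    zeroth-order gradient of the loss on the removed data."""
    g = zo_gradient(loss_removed, theta)
    return theta + g / (fisher_diag + damping)

# Toy usage: pretend a quadratic loss measures fit to the removed triples.
theta = np.array([1.0, -2.0, 0.5])
loss_removed = lambda t: 0.5 * np.sum(t ** 2)
fisher_diag = np.ones_like(theta)  # placeholder; a real Fisher is model-derived
theta_new = unlearn_step(loss_removed, theta, fisher_diag)
```

The point of the sketch is the cost profile: each update needs only `2 * n_samples` forward loss evaluations plus a diagonal solve, rather than exact gradients and Hessian-vector products through the KG model.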

Takeaways, Limitations

Takeaways:
The paper presents an efficient unlearning algorithm for large-scale knowledge graphs, supporting compliance with regulations such as the Right to be Forgotten.
It introduces a novel way to substantially reduce computational cost by combining the Fisher matrix with zeroth-order optimization.
Experiments demonstrate that the proposed method outperforms existing methods in both unlearning efficiency and quality.
Limitations:
The performance of the proposed method depends on the approximation accuracy of the Taylor expansion, and the approximation error may be large under certain conditions.
The experiments are limited to a specific knowledge graph dataset, and generalization performance to other types of knowledge graphs or datasets requires further research.
The paper lacks an analysis of potential inefficiencies in the optimization process introduced by zeroth-order optimization.