Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please credit the source when sharing.

PRUNE: A Patching Based Repair Framework for Certifiable Unlearning of Neural Networks

Created by
  • Haebom

Author

Xuran Li, Jingyi Wang, Xiaohan Yuan, Peixin Zhang

Outline

This paper proposes PRUNE, a novel approach that overcomes the limitations of existing unlearning methods by selectively "forgetting" specified data through "patches" applied to the neural network. Existing methods retrain the model on the remaining data, which is expensive and hard to verify; the proposed method instead guarantees deletion of the target data by finding and applying a lightweight, minimal patch. To unlearn multiple data points or an entire class, it iteratively selects representative data points and removes them one at a time. Experiments on several datasets show that the method is efficient and consumes less memory while preserving model performance.
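For intuition, here is a minimal, hypothetical sketch of the patch-based unlearning idea in PyTorch. It is not the authors' PRUNE implementation: the toy model, the entropy-based forgetting objective, and the L1 sparsity penalty standing in for "minimal patch" are all illustrative assumptions, and the certification/verification step the paper emphasizes is omitted.

```python
# Illustrative sketch only -- NOT the authors' PRUNE implementation.
# Assumptions: a toy classifier, a patch modeled as a sparse additive
# delta on the output layer's weights, and "forgetting" approximated by
# pushing the forget set's predictions toward the uniform distribution.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyNet(nn.Module):  # hypothetical stand-in for the trained model
    def __init__(self, d_in=20, d_hid=64, n_cls=3):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU())
        self.head = nn.Linear(d_hid, n_cls)

    def forward(self, x, head_delta=None):
        h = self.body(x)
        w = self.head.weight if head_delta is None else self.head.weight + head_delta
        return F.linear(h, w, self.head.bias)

def find_minimal_patch(model, forget_x, steps=300, lr=1e-2, l1=1e-3):
    """Search for a small additive patch on the head weights that makes
    the model maximally uncertain on the forget set, while an L1 penalty
    keeps the patch lightweight (sparse)."""
    delta = torch.zeros_like(model.head.weight, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logp = model(forget_x, head_delta=delta).log_softmax(dim=1)
        entropy = -(logp.exp() * logp).sum(dim=1).mean()
        loss = -entropy + l1 * delta.abs().sum()  # forget, yet stay minimal
        opt.zero_grad()
        loss.backward()
        opt.step()
    return delta.detach()

model = ToyNet()
forget_x = torch.randn(8, 20)      # data points to be forgotten
patch = find_minimal_patch(model, forget_x)
with torch.no_grad():
    model.head.weight += patch     # apply the patch in place
```

For unlearning many points or a whole class, the paper's iterative strategy would correspond to repeating this patch search over selected representative data points rather than the full forget set at once.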

Takeaways and Limitations

Takeaways:
  • Presents an efficient and verifiable data-deletion method in place of costly retraining.
  • A lightweight "patch" selectively deletes only the targeted data.
  • Effectively handles deletion of multiple data points or entire classes.
  • Competitive in efficiency and memory consumption while minimizing degradation of model performance.
Limitations:
  • The generalization of the proposed patching method needs further evaluation.
  • Applicability to a wider range of neural network architectures and datasets remains to be verified.
  • Efficiency and scalability on large datasets must still be examined.
  • The complexity and computational cost of the patch-generation process call for deeper analysis.