Daily Arxiv

This page organizes papers on artificial intelligence published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; when sharing, please cite the source.

Tackling Federated Unlearning as a Parameter Estimation Problem

Created by
  • Haebom

Author

Antonio Balordi, Lorenzo Manini, Fabio Stella, Alessio Merlo

Outline

Deep learning models must support data deletion (unlearning) to comply with privacy regulations. This is particularly difficult in federated learning (FL), where data is distributed across clients, making full retraining or coordinated updates impractical. This study presents an efficient, information-theoretic federated unlearning framework that models information leakage as a parameter estimation problem: it uses second-order (Hessian) information to identify and selectively reset the parameters most sensitive to the data being removed, followed by minimal federated retraining. The method supports both class-level and client-level unlearning, and the server needs no access to clients' raw data after the initial information aggregation. Evaluations on benchmark datasets show strong privacy protection (membership-inference attack success near random chance, effective removal of class knowledge) together with high utility (normalized accuracy ≈ 0.9 relative to a full-retraining baseline). The framework also neutralizes malicious triggers in targeted backdoor-attack scenarios, restoring model integrity.
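The core mechanism summarized above (second-order sensitivity scoring, selective reset of the most sensitive parameters, then brief retraining on the retained data) can be sketched in a toy form. The paper's exact procedure is not reproduced here; this sketch substitutes a diagonal Fisher approximation of the Hessian on a plain logistic-regression model, and all names, sizes, and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(w, X, y):
    # Gradient of the mean logistic loss w.r.t. the weights.
    return X.T @ (sigmoid(X @ w) - y) / len(y)

# Toy data: "retained" clients plus one "forget" client with a distinct distribution.
X_retain = rng.normal(size=(200, 10))
y_retain = (X_retain[:, 0] > 0).astype(float)
X_forget = rng.normal(size=(50, 10)) + 2.0
y_forget = np.ones(50)

# Train on all data, simulating the original (pre-unlearning) federated model.
X_all = np.vstack([X_retain, X_forget])
y_all = np.concatenate([y_retain, y_forget])
w = np.zeros(10)
for _ in range(500):
    w -= 0.1 * grad(w, X_all, y_all)

# Sensitivity scores: diagonal Fisher approximation of the Hessian,
# estimated from per-example squared gradients on the forget data.
residual = sigmoid(X_forget @ w) - y_forget
per_example_grads = X_forget * residual[:, None]
fisher_diag = (per_example_grads ** 2).mean(axis=0)

# Selectively reset the k most sensitive parameters, then do a short
# retraining pass on retained data only (the "minimal federated retraining" step).
k = 3
reset_idx = np.argsort(fisher_diag)[-k:]
w_unlearned = w.copy()
w_unlearned[reset_idx] = 0.0
for _ in range(100):
    w_unlearned -= 0.1 * grad(w_unlearned, X_retain, y_retain)
```

In an actual FL deployment the sensitivity statistics would be aggregated from clients once, after which the server can perform the reset without touching raw data; here everything runs centrally purely for illustration.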

Takeaways, Limitations

Takeaways:
Provides an efficient framework for data deletion in federated learning environments.
Achieves strong privacy protection together with high model utility.
Demonstrates a defense against targeted backdoor attacks.
Model-agnostic approach.
Supports both class-level and client-level unlearning.
Limitations:
Aims for efficiency gains over full retraining, but reports no concrete efficiency metrics (e.g., wall-clock time, computational complexity).
Lacks detail on the concrete implementation and hyperparameter settings.
Generalizability of the framework to other datasets, model architectures, and attack types requires further study.