Daily Arxiv

This page curates AI-related papers published worldwide.
All summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Cutting Through Privacy: A Hyperplane-Based Data Reconstruction Attack in Federated Learning

Created by
  • Haebom

Authors

Francesco Diana, Andre Nusser, Chuan Xu, Giovanni Neglia

Outline

This paper presents a novel data reconstruction attack targeting a key vulnerability of Federated Learning (FL): a malicious central server can reconstruct clients' private data. It overcomes the limitations of existing methods, which rely on assumptions about the client data distribution and remain effective only at small batch sizes. By exploiting a novel geometric perspective on fully connected layers, the attack crafts malicious model parameters that perfectly reconstruct data batches of arbitrary size, without any prior knowledge of the client data. Experiments on image and tabular datasets show that the attack outperforms existing methods, achieving perfect reconstruction of data batches two orders of magnitude larger than the previous best-performing method.
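To make the geometric intuition concrete, the sketch below demonstrates the well-known leakage property of fully connected layers that attacks of this kind build on: for a linear layer y = Wx + b, the weight gradient of neuron i is (dL/dy_i)·x and its bias gradient is dL/dy_i, so whenever exactly one sample in the batch activates a ReLU neuron, that sample can be read off as the ratio of the two gradients. This is a minimal illustration, not the authors' construction: the layer sizes, the toy loss, and the reliance on chance lone activations are assumptions for demonstration, whereas the paper crafts malicious parameters so that separating hyperplanes guarantee such isolation.

```python
# Minimal sketch (not the paper's code) of gradient leakage through a
# fully connected layer. All sizes and the toy loss are illustrative.
import torch

torch.manual_seed(0)
batch, d_in, d_out = 4, 8, 16

# Toy stand-in for a client's private batch.
x = torch.randn(batch, d_in)

# One fully connected layer with ReLU. Weights are random here; the
# paper's attack instead crafts W and b so the hyperplanes
# w_i . x + b_i = 0 separate the batch, guaranteeing lone activations.
W = torch.randn(d_out, d_in, requires_grad=True)
b = torch.randn(d_out, requires_grad=True)

out = torch.relu(x @ W.t() + b)
loss = out.sum()  # any scalar loss exposes the same gradient structure
loss.backward()

# dL/dW_i sums (dL/dy_i) * x over samples that activate neuron i, and
# dL/db_i sums dL/dy_i. If exactly one sample activates neuron i, the
# ratio of the two gradients recovers that sample exactly.
active = (x @ W.detach().t() + b.detach()) > 0        # (batch, d_out)
lone = (active.sum(dim=0) == 1).nonzero(as_tuple=True)[0]
for i in lone[:3].tolist():
    x_rec = W.grad[i] / b.grad[i]
    j = active[:, i].nonzero(as_tuple=True)[0].item()
    print(f"neuron {i}: recovered sample {j}, "
          f"max abs error = {(x_rec - x[j]).abs().max().item():.2e}")
```

With random weights, only some neurons happen to isolate a single sample, so recovery here is partial and lucky; per the outline above, the point of the crafted malicious parameters is to guarantee that every sample in a batch of arbitrary size is isolated by some neuron.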

Takeaways, Limitations

Takeaways:
Challenges prevailing beliefs about data privacy guarantees in federated learning.
Presents a new attack technique that overcomes the limitations of existing data reconstruction attacks.
Enables efficient data reconstruction even for large data batches.
Offers important insights for hardening federated learning systems against malicious servers.
Limitations:
The attack targets a specific architectural component (fully connected layers) and may not transfer to other network architectures.
The evaluation may not fully reflect the complexity of real-world federated learning deployments.
Attack success may depend on the characteristics of the dataset.