Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please credit the source when sharing.

Privacy-Preserving Federated Learning via Homomorphic Adversarial Networks

Created by
  • Haebom

Authors

Wenhan Dong, Chao Lin, Xinlei He, Shengmin Xu, Xinyi Huang

Outline

This paper studies privacy-preserving federated learning (PPFL), which trains a global model on data from multiple clients while preserving each client's privacy. To overcome the limitations of existing PPFL protocols, such as degraded accuracy, the need for key sharing, and the need for cooperation during key generation or decryption, the authors propose a novel PPFL protocol built on neural networks. The protocol uses Homomorphic Adversarial Networks (HANs), which incorporate an aggregatable hybrid encryption scheme tailored to the requirements of PPFL: it performs tasks similar to multi-key homomorphic encryption (MK-HE) while avoiding key distribution and collaborative decryption. Experiments show that HANs are robust against privacy attacks, incur minimal accuracy loss (at most 1.35%) compared to non-privacy-preserving federated learning, and achieve a 6,075x speedup in encrypted aggregation over existing MK-HE schemes, at the cost of a 29.2x increase in communication overhead.
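To make the aggregation step concrete, below is a minimal sketch of privacy-preserving aggregation in federated learning. It is not the paper's HAN scheme (whose encryptor and decryptor are learned neural networks); instead it uses pairwise additive masking, a standard secure-aggregation technique, as a simplified stand-in to show the overall shape: the server only ever sees masked client updates, yet their sum equals the true sum. All names (`pairwise_mask`, `mask_update`, `server_aggregate`) and the toy dimensions are hypothetical choices for illustration.

```python
# Simplified stand-in for encrypted aggregation in PPFL.
# NOTE: this is pairwise additive masking, NOT the paper's HAN encryption.
import numpy as np

DIM = 4          # size of each client's model update (toy value)
NUM_CLIENTS = 3  # number of participating clients (toy value)


def pairwise_mask(client_id: int, peer_id: int, dim: int) -> np.ndarray:
    """Deterministic mask shared by a pair of clients (derived from a shared seed)."""
    seed = hash((min(client_id, peer_id), max(client_id, peer_id))) % (2**32)
    rng = np.random.default_rng(seed)
    return rng.normal(size=dim)


def mask_update(client_id: int, update: np.ndarray) -> np.ndarray:
    """Client-side 'encryption': add masks that cancel out across all clients."""
    masked = update.copy()
    for peer_id in range(NUM_CLIENTS):
        if peer_id == client_id:
            continue
        mask = pairwise_mask(client_id, peer_id, DIM)
        # The lower-id client adds the mask, the higher-id client subtracts it,
        # so every pairwise mask cancels in the server-side sum.
        masked += mask if client_id < peer_id else -mask
    return masked


def server_aggregate(masked_updates: list[np.ndarray]) -> np.ndarray:
    """Server sums masked updates; individual updates stay hidden."""
    return np.sum(masked_updates, axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_updates = [rng.normal(size=DIM) for _ in range(NUM_CLIENTS)]
    masked = [mask_update(cid, upd) for cid, upd in enumerate(true_updates)]
    aggregate = server_aggregate(masked)
    # The aggregate matches the plaintext sum even though the server
    # never saw any unmasked update.
    assert np.allclose(aggregate, np.sum(true_updates, axis=0))
    print("aggregate:", aggregate)
```

The paper's contribution replaces this kind of hand-designed masking or MK-HE machinery with learned encryption (HANs), which is what removes the key-distribution and collaborative-decryption steps described above.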

Takeaways, Limitations

Takeaways:
  • First PPFL protocol built on neural networks, overcoming the limitations of existing methods
  • Privacy protection without key distribution or collaborative decryption
  • Minimal accuracy loss (at most 1.35%)
  • 6,075x faster encrypted aggregation than existing MK-HE schemes
Limitations:
  • Communication overhead is 29.2x higher than existing MK-HE schemes