Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Byzantine-Robust Federated Learning Using Generative Adversarial Networks

Created by
  • Haebom

Author

Usama Zafar, Andre MH Teixeira, Salman Toor

Outline

This paper presents a novel defense framework that strengthens federated learning (FL) against Byzantine attacks such as data and model poisoning. Existing defenses rely either on robust aggregation rules or heuristics, whose error lower bounds grow as client heterogeneity increases, or on a trusted external dataset for validation. The proposed framework instead uses a conditional generative adversarial network (cGAN) to synthesize representative data on the server and validates client updates against it. This approach removes the dependence on external datasets, adapts to diverse attack strategies, and integrates seamlessly into standard FL workflows. Extensive experiments on benchmark datasets show that the framework accurately distinguishes malicious from benign clients while preserving overall model accuracy. Beyond Byzantine robustness, the paper also examines the representativeness of the synthetic data, the computational cost of cGAN training, and the transparency and scalability of the approach.
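
The server-side core of the method — synthesize validation data with the cGAN, score each client update on it, and aggregate only the updates that pass — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the `Generator` interface (`latent_dim`, a forward pass taking noise and labels), the fixed accuracy `threshold`, and the class-balanced sampling are all assumptions.

```python
# Minimal sketch (assumed, not the authors' code) of cGAN-based update
# validation in federated learning. `generator` is a trained conditional
# GAN; `latent_dim` and `threshold` are illustrative assumptions.
import copy
import torch

def validate_and_aggregate(global_model, client_updates, generator,
                           num_classes, n_synthetic=500, threshold=0.5,
                           device="cpu"):
    # Draw a class-balanced synthetic validation set from the cGAN.
    per_class = max(1, n_synthetic // num_classes)
    labels = torch.arange(num_classes, device=device).repeat(per_class)
    noise = torch.randn(len(labels), generator.latent_dim, device=device)
    with torch.no_grad():
        synthetic_x = generator(noise, labels)

    accepted = []
    for state_dict in client_updates:
        # Evaluate each candidate update on the synthetic validation set.
        candidate = copy.deepcopy(global_model).to(device)
        candidate.load_state_dict(state_dict)
        candidate.eval()
        with torch.no_grad():
            preds = candidate(synthetic_x).argmax(dim=1)
        acc = (preds == labels).float().mean().item()
        # Updates that score poorly on the synthetic data are flagged as Byzantine.
        if acc >= threshold:
            accepted.append(state_dict)

    if not accepted:  # every update rejected: keep the current global model
        return global_model.state_dict()

    # Standard federated averaging over the accepted updates only.
    return {k: torch.stack([u[k].float() for u in accepted]).mean(dim=0)
            for k in accepted[0]}
```

A fixed threshold is the simplest acceptance rule; a relative criterion (e.g., rejecting updates whose synthetic-set accuracy falls well below the round's median) would likely be more robust when the synthetic data is imperfect.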

Takeaways, Limitations

Takeaways:
  • Validates client updates without an external dataset
  • Adapts to various attack strategies
  • Integrates seamlessly into standard FL workflows
  • Accurately distinguishes malicious from benign clients
  • Maintains overall model accuracy
Limitations:
  • Computational cost of cGAN training
  • The representativeness of the synthetic data requires further research
  • The transparency and scalability of the approach require further research