Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Optimizing Privacy-Utility Trade-off in Decentralized Learning with Generalized Correlated Noise

Created by
  • Haebom

Authors

Angelo Rodio, Zheng Chen, Erik G. Larsson

Outline

In this paper, we propose CorN-DSGD, a novel framework for enhancing privacy in decentralized learning. When agents share their models with one another, private information can leak, and the standard defense of adding independent random noise degrades performance because the noise accumulates across the network. CorN-DSGD is a covariance-based framework that generates correlated noise across agents so that it largely cancels out during network-wide aggregation. By exploiting the network topology and the mixing weights, it cancels noise more effectively than existing pairwise-correlation schemes, improving model performance under formal privacy guarantees.
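The paper's actual covariance optimization is not reproduced here, but the core idea of correlated noise cancelling under gossip averaging can be illustrated with a minimal sketch. Everything in it, including the ring topology, the uniform mixing weights, the quadratic local objectives, and the simple zero-sum covariance choice, is an assumption made for this demo rather than the CorN-DSGD design.

```python
# Minimal sketch: decentralized SGD with correlated privacy noise.
# The zero-sum covariance below is an illustrative choice, NOT the
# optimized covariance proposed in the CorN-DSGD paper.
import numpy as np

rng = np.random.default_rng(0)

n_agents, dim = 8, 10
eta, sigma, steps = 0.1, 0.5, 200

# Ring topology with uniform mixing weights (assumption for the demo);
# W is doubly stochastic, so the network average is preserved by mixing.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 1 / 3
    W[i, (i - 1) % n_agents] = 1 / 3
    W[i, (i + 1) % n_agents] = 1 / 3

# Toy local objectives: f_i(x) = 0.5 * ||x - b_i||^2, so grad_i(x) = x - b_i.
B = rng.normal(size=(n_agents, dim))

def correlated_noise():
    """Draw per-agent Gaussian noise whose network-wide sum is zero.

    This corresponds to covariance sigma^2 * (I - 11^T / n) across agents,
    a simple instance of correlated noise that cancels under averaging.
    """
    Z = sigma * rng.normal(size=(n_agents, dim))
    return Z - Z.mean(axis=0, keepdims=True)

X = np.zeros((n_agents, dim))
for _ in range(steps):
    grads = X - B                       # local gradients
    noise = correlated_noise()          # correlated privacy noise
    X = W @ (X - eta * grads + noise)   # gossip mixing of noisy local updates

print("distance of consensus average to optimum:",
      np.linalg.norm(X.mean(axis=0) - B.mean(axis=0)))
```

Because the injected noise sums to zero across agents, the network average follows the noiseless gradient flow; with independent noise instead, the averaged model would drift with the accumulated noise.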

Takeaways, Limitations

Takeaways:
  • Presents a new way to improve the trade-off between privacy and model performance in decentralized learning.
  • Overcomes limitations of prior methods by exploiting the network topology and mixing weights to cancel noise more efficiently.
  • Provides a general framework that subsumes several state-of-the-art methods as special cases.
  • Experimentally demonstrates improved model performance under formal privacy guarantees.
Limitations:
  • The performance of CorN-DSGD may depend on the network topology and mixing weights; further work is needed to determine optimal topology and weight settings.
  • Additional experimental validation is needed across diverse decentralized learning settings and datasets.
  • The cost and complexity of deploying the method in real-world, large-scale decentralized systems are not fully addressed.