Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Enhancing Model Privacy in Federated Learning with Random Masking and Quantization

Created by
  • Haebom

Authors

Zhibo Xu, Jianhao Zhu, Jingwen Xu, Changze Lv, Zisu Huang, Xiaohua Wang, Muling Wu, Qi Qian, Xiaoqing Zheng, Xuanjing Huang

Outline

This paper highlights that while existing federated learning approaches have focused on data privacy, the emergence of large language models (LLMs) has heightened the importance of intellectual property (IP) protection, so a federated learning approach is needed that protects both sensitive data and proprietary models. To address this, the authors propose a novel federated learning method, FedQSN. FedQSN randomly masks a subset of model parameters and quantizes the remaining ones, so that the model the server transmits to each client acts as a privacy-preserving proxy. Experimental results across various models and tasks demonstrate that FedQSN protects model parameters better than existing methods while maintaining robust model performance in a federated learning environment.
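The masking-plus-quantization idea can be illustrated with a minimal sketch. Note this is a hypothetical simplification for intuition only, not the authors' exact FedQSN procedure: the function name `make_proxy` and the parameters `mask_ratio` and `n_bits` are assumptions, and the paper's actual masking and quantization schemes may differ.

```python
import numpy as np

def make_proxy(params, mask_ratio=0.1, n_bits=4, seed=0):
    """Build a privacy-preserving proxy of a parameter vector by
    randomly zeroing a fraction of entries (masking) and uniformly
    quantizing the result to 2**n_bits levels.

    Hypothetical illustration of the general idea, not the paper's
    exact algorithm."""
    rng = np.random.default_rng(seed)
    params = np.asarray(params, dtype=np.float64)

    # Randomly mask (zero out) a fraction of the parameters.
    mask = rng.random(params.shape) < mask_ratio
    proxy = np.where(mask, 0.0, params)

    # Uniform quantization: snap each value to one of 2**n_bits levels.
    lo, hi = proxy.min(), proxy.max()
    scale = (hi - lo) / (2**n_bits - 1) or 1.0  # avoid division by zero
    proxy = np.round((proxy - lo) / scale) * scale + lo
    return proxy
```

The intuition is that a client receiving such a proxy can still fine-tune usefully, while the server's exact parameter values are never fully disclosed.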

Takeaways, Limitations

Takeaways:
  • Presents a novel approach to intellectual property protection in federated learning of large language models.
  • Offers an effective method for strengthening the privacy of model parameters.
  • Experimentally demonstrates that privacy can be improved without compromising federated learning performance.
Limitations:
  • Lacks theoretical analysis of the proposed method's security.
  • Needs further evaluation of resistance to diverse attack scenarios.
  • Experimental results are limited to specific models and tasks; further research is needed to establish generalizability.