
Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

FedWSQ: Efficient Federated Learning with Weight Standardization and Distribution-Aware Non-Uniform Quantization

Created by
  • Haebom

Author

Seung-Wook Kim, Seongyeol Kim, Jiah Kim, Seowon Ji, Se-Ho Lee

Outline

FedWSQ is a novel federated learning (FL) framework that addresses two major challenges of FL: data heterogeneity and communication constraints, both of which degrade performance. FedWSQ integrates weight standardization (WS) and distribution-aware non-uniform quantization (DANUQ). WS filters biased components out of local updates during training, improving the model's robustness to data heterogeneity and unreliable client participation. DANUQ minimizes quantization error by exploiting the statistical properties of local model updates. As a result, FedWSQ significantly reduces communication overhead while maintaining high model accuracy. Extensive experiments on various FL benchmark datasets show that FedWSQ consistently outperforms existing FL methods across challenging settings, including extreme data heterogeneity and ultra-low-bit communication scenarios.
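The two components can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the paper's implementation: `weight_standardize` removes the mean (biased) component of an update and rescales it, and `danuq_quantize` maps each value to the nearest entry in a fixed non-uniform codebook. The 2-bit levels used here are the classical Lloyd-Max levels for a standard normal distribution, chosen only as a plausible stand-in for the paper's distribution-aware codebook.

```python
import numpy as np

def weight_standardize(w, eps=1e-5):
    # Remove the mean (biased component) of a local update and
    # rescale to unit variance, as in weight standardization (WS).
    return (w - w.mean()) / (w.std() + eps)

def danuq_quantize(update, levels):
    # Non-uniform quantization: snap each value to the nearest level
    # of a codebook chosen for the assumed update distribution.
    idx = np.abs(update[..., None] - levels).argmin(axis=-1)
    return levels[idx]

# Illustrative 2-bit codebook: Lloyd-Max optimal levels for N(0, 1)
# are approximately +/-0.4528 and +/-1.510.
levels = np.array([-1.510, -0.4528, 0.4528, 1.510])

rng = np.random.default_rng(0)
update = rng.standard_normal(8)          # a fake local model update
q = danuq_quantize(weight_standardize(update), levels)
```

With 4 levels, each value costs 2 bits instead of 32, so the client-to-server payload shrinks by roughly 16x; the non-uniform spacing keeps the expected quantization error low for bell-shaped update distributions.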

Takeaways, Limitations

Takeaways:
Presents an effective method for improving federated learning performance under severe data heterogeneity and communication constraints.
Achieves both reduced communication overhead and improved model accuracy by combining weight standardization (WS) with distribution-aware non-uniform quantization (DANUQ).
Verifies the superiority of the proposed method through extensive experiments.
Limitations:
Further studies are needed to determine the practical applicability of the proposed method.
Because the results are reported for specific datasets and settings, further validation of generalizability is needed.
Further research may be needed to determine the optimal parameters for DANUQ.