
Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please credit the source when sharing.

Efficient Federated Learning with Heterogeneous Data and Adaptive Dropout

Created by
  • Haebom

Authors

Ji Liu, Beichen Ma, Qiaolin Yu, Ruoming Jin, Jingbo Zhou, Yang Zhou, Huaiyu Dai, Haixun Wang, Dejing Dou, Patrick Valduriez

Outline

In this paper, we propose the FedDHAD framework to address the performance degradation caused by data heterogeneity and the limited resources of edge devices in federated learning (FL). FedDHAD combines two novel methods: dynamic heterogeneous model aggregation (FedDH) and adaptive dropout (FedAD). FedDH dynamically adjusts the aggregation weight of each local model according to its degree of data heterogeneity, mitigating the non-IID data problem, while FedAD adapts computation at the neuron level to heterogeneous devices, improving both accuracy and efficiency. Experimental results show that FedDHAD outperforms existing state-of-the-art methods in accuracy (up to 6.7% higher), efficiency (up to 2.02x faster), and computational cost (up to 15.0% lower).
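To make the two components more concrete, here is a minimal, hypothetical Python sketch of the general ideas: weighting each client's contribution by data size and an estimated heterogeneity score (FedDH-style), and scaling a dropout rate with device capacity (FedAD-style). The L2-distance heterogeneity measure, the exponential down-weighting, and the linear rate schedule are illustrative assumptions, not the paper's exact formulations.

```python
import numpy as np

def heterogeneity(local_dist, global_dist):
    """Illustrative heterogeneity score: L2 distance between a client's
    label distribution and the global one (a stand-in for the paper's measure)."""
    return np.linalg.norm(np.asarray(local_dist) - np.asarray(global_dist))

def feddh_aggregate(client_models, client_sizes, client_dists, global_dist, alpha=1.0):
    """FedDH-style aggregation sketch: weight each client by its data size,
    down-weighted exponentially by its heterogeneity score (assumed form)."""
    h = np.array([heterogeneity(d, global_dist) for d in client_dists])
    w = np.array(client_sizes, dtype=float) * np.exp(-alpha * h)
    w /= w.sum()
    # Each model is a list of per-layer numpy arrays.
    n_layers = len(client_models[0])
    return [sum(wi * m[k] for wi, m in zip(w, client_models)) for k in range(n_layers)]

def fedad_dropout_rate(device_capacity, base_rate=0.2, max_rate=0.8):
    """FedAD-style sketch: weaker devices (capacity closer to 0) drop more
    neurons, trading a little accuracy for lower computation."""
    capacity = min(max(device_capacity, 1e-3), 1.0)
    return base_rate + (1.0 - capacity) * (max_rate - base_rate)
```

In the actual paper, the aggregation weights evolve dynamically during training and dropout is adapted per neuron; this sketch only conveys the direction of each adjustment.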

Takeaways, Limitations

Takeaways:
Presents a novel framework that effectively addresses data heterogeneity and the limited resources of edge devices in federated learning.
Combines FedDH and FedAD to improve accuracy and efficiency while reducing computational cost.
Overcomes performance limitations of existing federated learning methods, increasing their practical applicability.
Limitations:
The performance of the proposed method may depend on the specific dataset and environment.
Additional experiments and analysis are needed across diverse edge devices and network environments.
Further research is needed on hyperparameter optimization for FedDH and FedAD.