Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Scaling Decentralized Learning with FLock

Created by
  • Haebom

Author

Zehua Cheng, Rui Sun, Jiahao Sun, Yike Guo

Outline

This paper proposes FLock, a decentralized framework for the distributed training of large language models (LLMs). Traditional federated learning (FL) relies on a central server, leaving it vulnerable to single points of failure and malicious attacks; FLock instead integrates a blockchain-based trust layer with economic incentives to provide a secure, auditable collaboration protocol among untrusted participants. The authors present the first empirical validation of fine-tuning a 70B-parameter LLM in a secure, multi-domain, decentralized setting, experimentally demonstrating a reduction of more than 68% in the success rate of malicious attacks and superior cross-domain generalization compared to independently trained models.
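The validator-and-incentive mechanism summarized above can be sketched in a toy form. This is a minimal illustration under assumptions, not FLock's actual protocol: the function `aggregate_round`, the norm-based scoring rule, and the slashing fraction are hypothetical names and choices made here to show how validator scoring, stake-weighted aggregation, and economic penalties might fit together in one training round.

```python
# Illustrative sketch (NOT FLock's actual protocol): one round of
# decentralized fine-tuning in which validators score each client's
# model update, low-scoring updates are rejected, and the submitter's
# stake is slashed -- approximating a trust layer with economic incentives.

def aggregate_round(updates, stakes, validators, threshold=0.5, slash=0.2):
    """updates: {client: list[float] model delta}; stakes: {client: float};
    validators: list of scoring functions, each returning a value in [0, 1]."""
    accepted = {}
    for client, delta in updates.items():
        score = sum(v(delta) for v in validators) / len(validators)
        if score >= threshold:
            accepted[client] = delta
        else:
            stakes[client] *= (1 - slash)  # economic penalty for a rejected update
    if not accepted:
        return None
    # Stake-weighted averaging of the accepted deltas.
    total = sum(stakes[c] for c in accepted)
    dim = len(next(iter(accepted.values())))
    return [sum(stakes[c] * accepted[c][i] for c in accepted) / total
            for i in range(dim)]

# Toy usage: one honest client and one client submitting an outlier
# ("backdoor"-style) delta, caught by a simple magnitude check.
norm_check = lambda d: 1.0 if max(abs(x) for x in d) < 1.0 else 0.0
updates = {"honest": [0.1, -0.2], "attacker": [5.0, 5.0]}
stakes = {"honest": 10.0, "attacker": 10.0}
agg = aggregate_round(updates, stakes, [norm_check])
print(agg)                 # only the honest delta survives: [0.1, -0.2]
print(stakes["attacker"])  # attacker's stake slashed: 8.0
```

In a real deployment the scoring would use model-quality evaluation rather than a magnitude check, and the stake ledger would live on-chain; the sketch only conveys the incentive structure.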

Takeaways, Limitations

Takeaways:
Presents FLock, a novel framework for the safe and efficient distributed fine-tuning of 70B-parameter LLMs.
Implements a decentralized collaboration protocol using a blockchain-based trust layer and economic incentives.
Demonstrates a defense against backdoor attacks, a known vulnerability of existing federated learning.
Confirms improved cross-domain generalization and a reduced malicious-attack success rate.
Limitations:
Further research is needed on the practical deployment and scalability of FLock.
Generalizability to LLMs of other sizes and architectures remains to be verified.
The performance overhead and cost-effectiveness of the blockchain-based components require further analysis.