This paper proposes FLock, a decentralized framework for the distributed training of large language models (LLMs). Traditional federated learning (FL) relies on a central server, which constitutes a single point of failure and leaves the system vulnerable to malicious attacks; FLock instead integrates a blockchain-based trust layer with economic incentives to provide a secure, auditable collaboration protocol among untrusted participants. We present the first empirical validation of fine-tuning a 70B-parameter LLM in a secure, multi-domain, decentralized environment, experimentally demonstrating a reduction of more than 68% in malicious-attack success rate and superior cross-domain generalization compared with independently trained models.