In this paper, we propose FLock, a decentralized framework for the secure and efficient collaborative fine-tuning of large language models (LLMs) in distributed environments. Whereas traditional federated learning (FL) relies on a central server that constitutes a single point of attack, FLock provides a secure and auditable collaboration protocol among untrusted parties by combining a blockchain-based trust layer with economic incentives. We present the first experimental validation of fine-tuning a 70B-parameter LLM in a secure, multi-domain decentralized setting, and show that the FLock framework defends against backdoor attacks that corrupt the standard FL optimizer while promoting synergistic knowledge transfer. The resulting model reduces the adversarial attack success rate by more than 68% and exhibits better cross-domain generalization than models trained independently.