This paper proposes Field-Based Federated Learning (FBFL), a novel federated learning (FL) method for training machine learning models in distributed environments. To address the scalability limitations and performance degradation of existing FL approaches, particularly those caused by non-independently and identically distributed (non-IID) data, FBFL leverages macroprogramming and field-based coordination. Specifically, it mitigates the non-IID data problem by personalizing models through spatially distributed leader election, and it addresses the bottlenecks and single points of failure of centralized architectures by building a self-organizing hierarchical structure. Experimental results on the MNIST, FashionMNIST, and Extended MNIST datasets show that FBFL performs comparably to FedAvg under IID data conditions and outperforms state-of-the-art methods such as FedProx and SCAFFOLD under non-IID conditions. Furthermore, we demonstrate the robustness of FBFL's self-organizing hierarchical architecture against server failures.
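As a rough illustration of the hierarchical aggregation idea described above (a minimal sketch, not the paper's implementation), the code below partitions clients into zones around elected leaders, has each leader compute a FedAvg-style weighted average over its zone to obtain a personalized regional model, and then averages across leaders in place of a central server. The names (`fedavg`, `zones`) and the fixed zone assignment are hypothetical; in FBFL the partitioning emerges from spatially distributed, field-based leader election rather than being hard-coded.

```python
import numpy as np

def fedavg(weights, sizes):
    """Weighted average of model parameter vectors (FedAvg-style)."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(weights, sizes))

# Hypothetical setup: 6 clients with toy one-parameter "models"
# and local dataset sizes used as aggregation weights.
client_models = [np.array([float(i)]) for i in range(6)]
client_sizes = [100, 120, 80, 90, 110, 95]

# Spatially distributed leader election is abstracted here as a fixed
# partition of clients into zones, each coordinated by an elected leader.
zones = [[0, 1, 2], [3, 4, 5]]

# Level 1: each zone leader aggregates its zone into a personalized model,
# which mitigates non-IID data by averaging only over nearby clients.
zone_models, zone_sizes = [], []
for zone in zones:
    ws = [client_models[i] for i in zone]
    ns = [client_sizes[i] for i in zone]
    zone_models.append(fedavg(ws, ns))
    zone_sizes.append(sum(ns))

# Level 2: leaders aggregate among themselves, replacing the single
# central server and removing its bottleneck / single point of failure.
global_model = fedavg(zone_models, zone_sizes)
print("zone models:", zone_models, "global:", global_model)
```

Because the hierarchy is self-organizing, a failed leader would simply trigger a new election and re-partitioning, which is the mechanism the abstract's robustness claim refers to.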