This paper focuses on domain generalization (DG) in distributed environments, addressing the data distribution variation problem inherent in existing federated learning (FL). Specifically, we aim to overcome the limitations of previous studies, which lack a formal mathematical analysis of the DG objective function and are restricted to star topologies. To this end, we propose StyleDDG, a distributed DG algorithm in which devices in a peer-to-peer network achieve DG by sharing style information inferred from their local datasets. Furthermore, we present the first systematic approach to analyzing style-based DG learning in distributed networks: we model StyleDDG by casting existing centralized DG algorithms within our framework and derive analytical conditions under which its convergence is guaranteed. Experiments on various DG datasets demonstrate that StyleDDG significantly improves accuracy across multiple target domains while incurring lower communication overhead than conventional distributed gradient descent.