This paper introduces FedSA-GCL, a Federated Graph Learning (FGL) framework designed for large-scale subgraphs in distributed environments. To address the communication inefficiency of existing FGL methods, which rely on synchronous updates, we adopt a semi-asynchronous training scheme and introduce the ClusterCast mechanism, which exploits differences in label distributions across clients together with graph topological features. Partitioning a real-world graph dataset with the Louvain and Metis algorithms, we compare the proposed framework against nine baseline models and observe average performance improvements of 2.92% (Louvain) and 3.4% (Metis), along with strong robustness.
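As a rough illustration of the semi-asynchronous idea only, the sketch below shows a server that aggregates the earliest-arriving client updates in each round instead of blocking on every participant, with stragglers folded into later rounds. All names and parameters here (NUM_CLIENTS, WAIT_FRACTION, the toy local_update) are hypothetical stand-ins; the paper's actual ClusterCast routing, staleness handling, and aggregation rule are not reproduced.

```python
import random
import numpy as np

# Hypothetical settings for illustration; FedSA-GCL's real
# hyperparameters and aggregation rule are defined in the paper.
NUM_CLIENTS = 8      # number of federated clients
DIM = 4              # toy model size (a real model would be a GNN)
WAIT_FRACTION = 0.5  # server proceeds once this fraction has reported

def local_update(global_weights):
    """Stand-in for one client's local training on its subgraph.
    Returns updated weights plus a simulated arrival latency."""
    step = np.random.randn(DIM) * 0.01
    return global_weights - step, random.random()

def semi_async_round(global_weights):
    """One semi-asynchronous round: aggregate only the fastest
    arrivals rather than waiting for all clients."""
    results = [local_update(global_weights) for _ in range(NUM_CLIENTS)]
    results.sort(key=lambda r: r[1])          # order by arrival time
    k = max(1, int(WAIT_FRACTION * NUM_CLIENTS))
    arrived = [w for w, _ in results[:k]]     # earliest k updates
    return np.mean(arrived, axis=0)           # simple average

weights = np.zeros(DIM)
for _ in range(3):
    weights = semi_async_round(weights)
print(weights)
```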