This paper proposes RapidGNN, a novel framework for improving the efficiency of distributed Graph Neural Network (GNN) training on large-scale graphs. While existing sampling-based approaches reduce computational load, communication overhead remains a bottleneck. RapidGNN enables efficient cache construction and prefetching of remote features through deterministic sampling-based scheduling. Evaluation results on benchmark graph datasets show that RapidGNN improves end-to-end training throughput by 2.46x to 3.00x on average over existing methods and reduces remote feature fetches by 9.70x to 15.39x. Furthermore, it achieves near-linear scalability as the number of compute units increases and improves energy efficiency over existing methods by 44% on CPUs and 32% on GPUs.
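To illustrate the idea behind deterministic sampling-based scheduling described above, the following is a minimal, self-contained Python sketch, not the authors' implementation: because mini-batch sampling is driven by a fixed seed, each trainer can replay the sampler ahead of time, determine exactly which remote node features upcoming iterations will touch, and prefetch them into a cache before compute needs them. All function and variable names (`precompute_schedule`, `prefetch_remote_features`, etc.) are illustrative assumptions.

```python
import numpy as np

def sample_minibatch(rng, num_nodes, batch_size, neighbors, fanout):
    """Seed-driven neighbor sampling: identical seeds yield identical batches."""
    seeds = rng.choice(num_nodes, size=batch_size, replace=False)
    sampled = set(seeds.tolist())
    for v in seeds:
        nbrs = neighbors.get(int(v), [])
        if nbrs:
            picked = rng.choice(nbrs, size=min(fanout, len(nbrs)), replace=False)
            sampled.update(picked.tolist())
    return sampled

def precompute_schedule(seed, num_nodes, batch_size, neighbors, fanout, lookahead):
    """Replay the deterministic sampler to obtain node sets of future iterations."""
    rng = np.random.default_rng(seed)
    return [sample_minibatch(rng, num_nodes, batch_size, neighbors, fanout)
            for _ in range(lookahead)]

def prefetch_remote_features(schedule, local_nodes, feature_store, cache):
    """Pull features of remote nodes appearing in upcoming batches into the cache."""
    for batch_nodes in schedule:
        for v in batch_nodes:
            if v not in local_nodes and v not in cache:
                cache[v] = feature_store[v]   # stands in for a network fetch

# Toy usage: a small ring graph whose nodes are partitioned across two trainers.
num_nodes, dim = 100, 8
neighbors = {v: [(v - 1) % num_nodes, (v + 1) % num_nodes] for v in range(num_nodes)}
feature_store = np.random.rand(num_nodes, dim)     # stands in for remote partitions
local_nodes = set(range(0, num_nodes // 2))        # this trainer's partition
cache = {}

schedule = precompute_schedule(seed=42, num_nodes=num_nodes, batch_size=16,
                               neighbors=neighbors, fanout=2, lookahead=4)
prefetch_remote_features(schedule, local_nodes, feature_store, cache)
print(f"prefetched {len(cache)} remote features before training begins")
```

Because every trainer derives the same schedule from the shared seed, the cache can be filled during otherwise idle time, which is the mechanism by which the paper reduces on-demand remote feature fetches.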