This paper presents the first comprehensive investigation of group fairness in federated learning (FL). Group fairness is a particularly pressing concern in FL, as the heterogeneous data distributions across clients can exacerbate bias against demographic groups. We analyze the key challenges to achieving group fairness in FL, present practical approaches for its identification and benchmarking, and propose a novel taxonomy organized along criteria such as data partitioning, location, and strategy. We also discuss how to handle the complexity of multiple sensitive attributes, survey common datasets and applications, and examine the ethical, legal, and policy implications of group fairness in FL. Finally, we highlight the need for methods that address the complexities of achieving group fairness in federated systems and identify key directions for future research.