In this paper, we present FeDa4Fair, a library for benchmarking fair Federated Learning (FL) methods under heterogeneous client data distributions, together with four datasets and benchmarks with heterogeneous biases designed to address fairness issues in FL. Unlike previous studies that focus on a single binary sensitive attribute, FeDa4Fair supports more robust and reproducible fairness studies by accounting for diverse and potentially conflicting fairness demands across clients. FeDa4Fair generates tabular datasets for evaluating fair FL methods under varying client biases and provides functions for evaluating fairness outcomes.