Daily Arxiv

This page organizes papers related to artificial intelligence published around the world.
Summaries on this page are generated with Google Gemini, and the page is operated on a non-profit basis.
The copyright of each paper belongs to its authors and their institutions; when sharing, simply cite the source.

FeDa4Fair: Client-Level Federated Datasets for Fairness Evaluation

Created by
  • Haebom

Author

Xenia Heilmann, Luca Corbucci, Mattia Cerrato, Anna Monreale

Outline

This paper addresses the need to improve model fairness in Federated Learning (FL), where bias can differ across clients. It points out two limitations of existing fairness-enhancing FL solutions: they fail to consider multiple sensitive attributes, and they overlook unfairness at the level of individual clients. To address these issues, the authors propose FeDa4Fair, a comprehensive benchmarking framework for fairness-aware FL at both the global and client levels. The framework provides a library for evaluating fairness in client environments with differing biases, four bias-heterogeneous datasets, and fairness evaluation functions for these datasets.
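The distinction between global and client-level evaluation can be made concrete with a small sketch. The snippet below is illustrative only, not the FeDa4Fair API: it computes a demographic parity difference per client and over the pooled data, showing how two clients with opposing biases can each be highly unfair locally while the global measurement reports no unfairness at all. All names and data here are hypothetical.

```python
# Hypothetical sketch of client-level vs. global fairness evaluation,
# in the spirit of FeDa4Fair. Names and data are illustrative, not the
# library's actual API.

def demographic_parity_diff(preds, groups):
    """|P(yhat=1 | g=0) - P(yhat=1 | g=1)| for a binary sensitive attribute."""
    rates = {}
    for g in (0, 1):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(preds[i] for i in idx) / len(idx) if idx else 0.0
    return abs(rates[0] - rates[1])

# Toy binary predictions and sensitive-group labels per client.
clients = {
    "client_a": ([1, 1, 0, 0], [0, 0, 1, 1]),  # favors group 0 locally
    "client_b": ([0, 0, 1, 1], [0, 0, 1, 1]),  # favors group 1 locally
}

# Client-level evaluation: each client is maximally unfair on its own data.
client_dpd = {c: demographic_parity_diff(p, g) for c, (p, g) in clients.items()}

# Global (pooled) evaluation: the opposing biases cancel out.
all_preds = [p for preds, _ in clients.values() for p in preds]
all_groups = [g for _, groups in clients.values() for g in groups]
global_dpd = demographic_parity_diff(all_preds, all_groups)

print(client_dpd)   # both clients show DPD = 1.0
print(global_dpd)   # pooled DPD = 0.0
```

This is exactly the failure mode the paper attributes to global-only evaluation: a pooled metric can mask severe per-client unfairness, which is why the framework reports fairness at both levels.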

Takeaways, Limitations

Takeaways:
  • Emphasizes the need for fairness research in FL environments and presents a benchmarking framework to address practical issues.
  • Improves the reproducibility and comparability of fairness research by providing datasets with diverse biases and accompanying evaluation tools.
  • Assessing fairness at both the global and client levels allows unfairness to be identified more accurately.
Limitations:
  • Further research is needed on applicability and scalability to complex real-world scenarios that go beyond single or binary sensitive attributes.
  • The provided datasets may not perfectly represent all real-world biases.
  • Further research is needed on compatibility and performance evaluation with various FL models and fairness-improvement methodologies.