This page curates AI-related papers published worldwide. All content is summarized using Google Gemini, and the page is operated on a non-profit basis. Copyright for each paper belongs to its authors and their institutions; please credit the source when sharing.
Curvature-Aligned Federated Learning (CAFe): Harmonizing Loss Landscapes for Fairness Without Demographics
Created by
Haebom
Author
Shaily Roy, Harshit Sharma, Asif Salekin
Outline
This paper presents Curvature-Aligned Federated Learning (CAFe), a novel approach for achieving fairness while preserving privacy in federated learning (FL). Unlike existing FL fairness methods that rely on sensitive-attribute information, CAFe adopts the "Fairness without Demographics" (FWD) setting and achieves fairness without access to sensitive attributes. CAFe aligns loss-landscape curvature within and across clients through curvature regularization during local training and sharpness-aware aggregation across clients, improving the trade-off between fairness and performance. The effectiveness and practicality of CAFe are validated through experiments on diverse real-world datasets and in a real FL deployment with resource-constrained devices.
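The outline describes two mechanisms: a curvature regularizer added to each client's local objective, and a sharpness-aware weighting when the server aggregates client updates. Below is a minimal PyTorch sketch of that structure, not the authors' implementation: it assumes a squared-gradient-norm proxy for loss-landscape curvature and an inverse-curvature weighting at the server, and helper names such as `curvature_penalty` and `aggregate_by_flatness` are hypothetical.

```python
# Minimal sketch (not the paper's code): curvature-penalized local training and
# flatness-weighted aggregation. The curvature proxy and weighting rule are
# assumptions for illustration only.
import copy
import torch
import torch.nn as nn


def curvature_penalty(model, loss_fn, x, y):
    """Differentiable flatness/curvature proxy: squared norm of the loss gradient
    w.r.t. the model parameters (create_graph=True lets us backprop through it)."""
    loss = loss_fn(model(x), y)
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params, create_graph=True)
    return sum(g.pow(2).sum() for g in grads)


def local_train(model, loader, loss_fn, epochs=1, lr=0.01, lam=0.1):
    """Local client update: task loss plus lam * curvature proxy.
    Returns the updated weights and the last measured curvature value."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    curv_value = 0.0
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            curv = curvature_penalty(model, loss_fn, x, y)
            loss = loss_fn(model(x), y) + lam * curv
            loss.backward()
            opt.step()
            curv_value = float(curv.detach())
    return model.state_dict(), curv_value


def aggregate_by_flatness(client_states, client_curvatures):
    """Server step: average client weights, favoring flatter (lower-curvature) clients."""
    w = torch.tensor([1.0 / (c + 1e-8) for c in client_curvatures])
    w = w / w.sum()
    global_state = copy.deepcopy(client_states[0])
    for key in global_state:
        global_state[key] = sum(wi * st[key] for wi, st in zip(w, client_states))
    return global_state


if __name__ == "__main__":
    # Tiny synthetic demo with two clients sharing the same initial model.
    torch.manual_seed(0)
    loss_fn = nn.CrossEntropyLoss()
    base = nn.Linear(4, 2)
    results = []
    for _ in range(2):
        model = copy.deepcopy(base)
        data = [(torch.randn(8, 4), torch.randint(0, 2, (8,))) for _ in range(3)]
        results.append(local_train(model, data, loss_fn))
    states, curvs = zip(*results)
    global_state = aggregate_by_flatness(list(states), list(curvs))
```

The paper's actual regularizer and aggregation rule may differ; the sketch only illustrates how a flatness term can enter the local loss and how client-reported curvature can reweight federated averaging.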
Takeaways, Limitations
•
Takeaways:
◦
We present a novel method (CAFe) that achieves fairness in federated learning without sensitive-attribute information.
◦
We improve fairness by leveraging loss-landscape curvature to address inter-client imbalances.
◦
We validate its effectiveness and practicality through experiments on real-world datasets and in a real FL deployment environment.
◦
We perform sensitivity analyses over various system factors (data volume, client sampling, communication overhead, resource cost, and runtime performance).
•
Limitations:
◦
The performance and fairness improvements of CAFe may vary depending on the dataset or environment.
◦
Practical implementation and deployment may incur additional computational cost and complexity.
◦
It may be effective only for certain types of bias; further research is needed on its generalizability across bias types.