Daily Arxiv

This page organizes papers related to artificial intelligence published around the world.
This page is summarized using Google Gemini and is operated on a non-profit basis.
The copyright of the paper belongs to the authors and their institutions. When sharing, please cite the source.

Not All Clients Are Equal: Collaborative Model Personalization on Heterogeneous Multi-Modal Clients

Created by
  • Haebom

Author

Minhyuk Seo, Taeheon Kim, Hankook Lee, Jonghyun Choi, Tinne Tuytelaars

FedMosaic: Jointly Addressing Data and Model Heterogeneity in Personalized Federated Learning

Outline

As AI becomes increasingly personalized, as in agentic AI, the need to tailor models to diverse use cases grows. This paper proposes a personalized federated learning (PFL) method that lets each client leverage other clients' knowledge to better adapt to its tasks of interest without privacy risks. To overcome the limitations of existing PFL methods, which are largely confined to simplified scenarios where data and models are identical across clients, the authors propose FedMosaic. FedMosaic reduces parameter interference through a model aggregation strategy that accounts for task relevance, and enables knowledge sharing across heterogeneous architectures without extra computational cost via dimension-invariant modules. The authors also propose a multi-modal PFL benchmark covering 40 distinct tasks, including distributions that shift over time, to mimic real-world task diversity. Experiments show that FedMosaic outperforms state-of-the-art PFL methods in both personalization and generalization under these challenging, realistic scenarios.
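The task-relevance-aware aggregation idea can be sketched as follows. Note this is an illustrative assumption, not the paper's exact rule: the function name, the cosine-similarity relevance measure, and the softmax weighting are all placeholders for whatever relevance estimate FedMosaic actually uses.

```python
import math

def relevance_weighted_aggregate(target_update, client_updates, temperature=1.0):
    """Aggregate client parameter updates for one target client, weighting
    each contributing client by how relevant its update is to the target.

    Hypothetical sketch: relevance is approximated here by cosine similarity
    between flattened update vectors, turned into weights via a softmax."""

    # Cosine similarity between two flat parameter vectors.
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb + 1e-12)

    # Softmax over similarities -> per-client aggregation weights.
    sims = [cos(target_update, u) for u in client_updates]
    exps = [math.exp(s / temperature) for s in sims]
    total = sum(exps)
    weights = [e / total for e in exps]

    # Weighted average of the client updates.
    dim = len(target_update)
    aggregated = [
        sum(w * u[i] for w, u in zip(weights, client_updates))
        for i in range(dim)
    ]
    return aggregated, weights
```

Under this weighting, clients whose updates point in a similar direction to the target's contribute more, which is one way to reduce the parameter interference the paper attributes to naive uniform averaging.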

Takeaways, Limitations

Takeaways:
  • Presents a PFL methodology that addresses data and model heterogeneity simultaneously.
  • Enables knowledge sharing in heterogeneous environments via a task-relevance-aware model aggregation strategy and dimension-invariant modules.
  • Emulates real-world task diversity with a multi-modal PFL benchmark covering 40 tasks.
  • Outperforms existing methods in both personalization and generalization.
Limitations:
  • The paper does not explicitly discuss its limitations.