As AI becomes increasingly personalized, as in agentic AI, the need to adapt models to diverse use cases grows. This paper proposes a personalized federated learning (PFL) method that lets each client leverage other clients' knowledge to better adapt to its tasks of interest without privacy risks. To overcome the limitations of existing PFL methods, which are restricted to simplified scenarios where data and model architectures are identical across clients, we propose FedMosaic. FedMosaic reduces parameter interference through a model-aggregation strategy that accounts for task relevance, and it enables knowledge sharing across heterogeneous architectures without extra computational cost via dimension-invariant modules. Furthermore, we propose a multi-modal PFL benchmark covering 40 distinct tasks, including time-varying distributions, to mimic real-world task diversity. Experimental results show that FedMosaic outperforms state-of-the-art PFL methods in both personalization and generalization under these challenging, realistic scenarios.