This paper focuses on federated learning (FL), which enables training multilingual large language models (LLMs) on diverse, decentralized multilingual data, especially for low-resource languages. To improve client-specific performance, personalization via parameter-efficient fine-tuning (PEFT) modules such as LoRA is common. This involves choosing a personalization strategy (PS), such as the design of the PEFT adapter structure (e.g., which layers receive LoRA modules and at what ranks) and the choice of fine-tuning hyperparameters (e.g., learning rates). Instead of configuring PSs manually, this paper proposes FedP²EFT, a federated learning-to-personalize method for multilingual LLMs in the cross-device FL setting. FedP²EFT collaboratively learns an optimal personalized PEFT structure for each client via Bayesian sparse rank selection. Evaluations on both simulated and real-world multilingual FL benchmarks demonstrate that FedP²EFT substantially outperforms existing personalized fine-tuning methods while complementing a range of existing FL methods.
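To make the notion of per-layer rank selection concrete, the sketch below gates each LoRA rank component of a single linear layer with a learnable scale and adds a sparsity penalty so that components a client does not need shrink toward zero, leaving a personalized effective rank. This is only an illustrative stand-in under assumed names (e.g., `GatedLoRALinear`, the penalty weight), not FedP²EFT's actual Bayesian formulation or its federated protocol.

```python
# Minimal illustrative sketch: per-layer LoRA with gated rank components.
# A sparsity penalty on the gates plays the role of a sparsity-inducing
# prior, so each client ends up with its own per-layer rank.
import torch
import torch.nn as nn

class GatedLoRALinear(nn.Module):
    """Frozen base linear layer plus a LoRA adapter with gated rank components."""
    def __init__(self, base: nn.Linear, max_rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False              # base weights stay frozen
        d_out, d_in = base.out_features, base.in_features
        self.A = nn.Parameter(torch.randn(max_rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, max_rank))
        self.gates = nn.Parameter(torch.ones(max_rank))  # one gate per rank component

    def forward(self, x):
        delta = (self.B * self.gates) @ self.A           # gated low-rank update
        return self.base(x) + x @ delta.T

    def effective_rank(self, threshold: float = 1e-2) -> int:
        return int((self.gates.abs() > threshold).sum())

# Per-client fine-tuning on local data (random tensors stand in for it here).
layer = GatedLoRALinear(nn.Linear(16, 16), max_rank=8)
opt = torch.optim.Adam([p for p in layer.parameters() if p.requires_grad], lr=1e-3)
x, y = torch.randn(32, 16), torch.randn(32, 16)
for _ in range(100):
    loss = nn.functional.mse_loss(layer(x), y) + 1e-3 * layer.gates.abs().sum()
    opt.zero_grad(); loss.backward(); opt.step()
print("personalized rank for this layer:", layer.effective_rank())
```

In FedP²EFT the analogous selection is learned collaboratively across clients rather than in isolation as in this toy example; the sketch only illustrates why sparse rank selection yields a client-specific adapter structure.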