This paper studies privacy-preserving adaptation of large language models (LLMs) in sensitive domains such as mental health. To balance model utility, safety, and strict confidentiality, we propose FedMentor, a federated fine-tuning framework that combines Low-Rank Adaptation (LoRA) with domain-aware differential privacy (DP). FedMentor lets each client (domain) apply a DP noise scale proportional to the sensitivity of its data, and the server adaptively lowers the noise when utility falls below a threshold. Experiments on three mental health datasets show that FedMentor improves safety and reduces toxicity while maintaining utility relative to standard federated learning (FL). The framework scales to backbones with up to 1.7 billion parameters on a single-GPU client while requiring under 173 MB of communication per round.
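To make the mechanism concrete, the minimal sketch below illustrates the two ideas summarized above on flattened LoRA updates: each client clips its update and adds Gaussian noise scaled by a per-domain sensitivity score, and the server decays a global noise multiplier when utility drops below a floor. All names and constants here (`DOMAIN_SENSITIVITY`, `BASE_SIGMA`, `UTILITY_FLOOR`, `NOISE_DECAY`) are illustrative assumptions, not values or APIs from the paper.

```python
import numpy as np

# Hypothetical per-domain sensitivity scores in (0, 1]; higher means
# stricter privacy and therefore more DP noise. Values are illustrative.
DOMAIN_SENSITIVITY = {"crisis_support": 1.0, "peer_forum": 0.6, "wellness_chat": 0.3}

BASE_SIGMA = 1.0      # assumed baseline DP noise multiplier
CLIP_NORM = 1.0       # assumed L2 clipping bound on client updates
UTILITY_FLOOR = 0.70  # assumed utility threshold that triggers noise reduction
NOISE_DECAY = 0.9     # assumed multiplicative decay applied when utility drops


def clip_update(update: np.ndarray, clip_norm: float = CLIP_NORM) -> np.ndarray:
    """Clip a flattened LoRA update to a fixed L2 norm (standard DP clipping)."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))


def privatize_update(update: np.ndarray, domain: str, sigma_scale: float) -> np.ndarray:
    """Clip the update, then add Gaussian noise scaled by the domain's sensitivity."""
    sigma = BASE_SIGMA * DOMAIN_SENSITIVITY[domain] * sigma_scale
    clipped = clip_update(update)
    return clipped + np.random.normal(0.0, sigma * CLIP_NORM, size=update.shape)


def server_round(client_updates, domains, sigma_scale, utility):
    """Aggregate privatized client updates; relax noise if utility is too low."""
    noisy = [privatize_update(u, d, sigma_scale)
             for u, d in zip(client_updates, domains)]
    aggregate = np.mean(noisy, axis=0)  # FedAvg over the noisy LoRA deltas
    # Server-side adaptation: if measured utility fell below the floor,
    # shrink the noise multiplier used in the next round.
    if utility < UTILITY_FLOOR:
        sigma_scale *= NOISE_DECAY
    return aggregate, sigma_scale


# Usage example: one round with three clients whose utility dipped to 0.65,
# so the returned sigma_scale is decayed for the following round.
rng = np.random.default_rng(0)
updates = [rng.normal(size=8) for _ in range(3)]
domains = ["crisis_support", "peer_forum", "wellness_chat"]
agg, next_scale = server_round(updates, domains, sigma_scale=1.0, utility=0.65)
```

Scaling the noise by a per-domain sensitivity score, rather than using one global multiplier, is what lets less sensitive domains retain more utility while the most sensitive domains keep the strongest protection.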