This paper describes Federated LoRA with Nearly Accurate Estimation (FLoRA-NA), an approach that addresses a key limitation of Federated Low-Rank Adaptation (FedLoRA), which is used to fine-tune foundation models in distributed environments. FLoRA-NA uses the local LoRA matrices available at the server to estimate the aggregated matrices, which it then distributes to clients for local updates. This bridges the gap between local personalization and global generalization without adding communication overhead. Extensive evaluations on natural language understanding, mathematical reasoning, and code-solving tasks demonstrate that FLoRA-NA achieves state-of-the-art global performance while maintaining low communication cost.
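To make the aggregation gap concrete, the sketch below illustrates a common issue in FedLoRA-style methods that server-side estimation targets: averaging the low-rank factors A and B separately does not equal averaging the per-client products B·A. This is a minimal NumPy illustration with hypothetical dimensions and simple unweighted averaging, not the paper's actual estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r, n_clients = 8, 8, 2, 4  # hypothetical toy dimensions and client count

# Each client i holds a local LoRA pair (B_i, A_i); its weight update is B_i @ A_i.
As = [rng.normal(size=(r, k)) for _ in range(n_clients)]
Bs = [rng.normal(size=(d, r)) for _ in range(n_clients)]

# Ideal aggregated update: the average of the per-client products.
ideal = sum(B @ A for B, A in zip(Bs, As)) / n_clients

# Naive aggregation: average A and B separately, then multiply.
A_avg = sum(As) / n_clients
B_avg = sum(Bs) / n_clients
naive = B_avg @ A_avg

# The nonzero discrepancy is what a server-side estimate of the
# aggregated matrices aims to shrink.
err = np.linalg.norm(ideal - naive) / np.linalg.norm(ideal)
print(f"relative aggregation error: {err:.3f}")
```

Because matrix multiplication is bilinear rather than linear in the pair (B, A), the error above is nonzero for generic client updates, which is why estimating the aggregated matrices on the server (rather than naively averaging factors) matters.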