This paper presents FediLoRA, a parameter-efficient fine-tuning (PEFT) framework for distributed environments that addresses the limitations of low-rank adaptation (LoRA) in federated learning. Existing federated LoRA methods assume a homogeneous rank configuration across clients and unimodal inputs; FediLoRA relaxes these assumptions to handle the realistic challenges of heterogeneous client resources (i.e., differing LoRA ranks) and missing modalities. FediLoRA rebalances LoRA update weights through a dimension-wise aggregation strategy that avoids information dilution, and improves both client and global model performance via a lightweight layer-wise model editing method that refines local components. Experimental results on multimodal benchmark datasets demonstrate that FediLoRA outperforms competing methods, particularly when modality information is incomplete.
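To make the heterogeneous-rank setting concrete, the sketch below shows one plausible reading of a dimension-wise aggregation rule: each client's LoRA factors are zero-padded to the maximum rank, and each rank dimension is averaged only over the clients that actually contribute to it, so padding zeros do not dilute the update. This is a minimal illustration under assumed shapes and naming, not necessarily FediLoRA's exact aggregation rule.

```python
import numpy as np

def aggregate_lora(A_list, B_list, d_in, d_out):
    """Dimension-wise aggregation of heterogeneous-rank LoRA factors.

    Assumption (hypothetical shapes): client i holds LoRA factors
    A_i of shape (r_i, d_in) and B_i of shape (d_out, r_i).
    """
    r_max = max(A.shape[0] for A in A_list)
    A_sum = np.zeros((r_max, d_in))
    B_sum = np.zeros((d_out, r_max))
    counts = np.zeros(r_max)  # how many clients cover each rank dimension

    for A, B in zip(A_list, B_list):
        r = A.shape[0]
        A_sum[:r] += A          # zero-pad low-rank clients up to r_max
        B_sum[:, :r] += B
        counts[:r] += 1

    # Divide each rank dimension only by the number of contributing
    # clients, so zero-padding does not dilute high-rank dimensions.
    A_global = A_sum / counts[:, None]
    B_global = B_sum / counts[None, :]
    return A_global, B_global

# Two clients with ranks 2 and 4 on a 4x4 adapted layer
A1, B1 = np.ones((2, 4)), np.ones((4, 2))
A2, B2 = 3 * np.ones((4, 4)), 3 * np.ones((4, 4))
A_g, B_g = aggregate_lora([A1, A2], [B1, B2], d_in=4, d_out=4)
```

Here the shared rank dimensions (0 and 1) average both clients' values, while dimensions 2 and 3 keep the high-rank client's values intact instead of being halved by padding zeros.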