In this paper, we propose FedARA, a novel adaptive class assignment framework that addresses the challenge of parameter-efficient fine-tuning (PEFT) of pre-trained language models (PLMs) in distributed environments. FedARA uses truncated SVD to strengthen similar feature representations across clients, mitigating the performance degradation caused by cross-device data heterogeneity; leverages dynamic class assignment to improve communication efficiency; and applies class-based module pruning to reduce computational cost and memory usage. Experimental results on various datasets and models show that FedARA outperforms existing methods by an average of 6.95% to 8.49%, while improving communication efficiency by a factor of 2.40. In addition, experiments on various edge devices demonstrate reductions of up to 48.90% in training time and 46.95% in energy consumption.
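As a rough illustration of the truncated-SVD step described above, the sketch below factorizes a local weight-update matrix into rank-r factors before communication; the function name truncated_svd_update, the rank parameter r, and the matrix shapes are illustrative assumptions, not the paper's actual API.

```python
import numpy as np

def truncated_svd_update(delta_w: np.ndarray, r: int):
    """Rank-r truncated SVD of a local weight update (illustrative sketch).

    Keeping only the top-r singular directions retains the dominant,
    widely shared feature directions while discarding client-specific
    noise -- the intuition behind using truncated SVD to align similar
    feature representations across heterogeneous clients.
    """
    u, s, vt = np.linalg.svd(delta_w, full_matrices=False)
    a = u[:, :r] * s[:r]  # (out_dim, r): left factor scaled by singular values
    b = vt[:r, :]         # (r, in_dim): right factor
    return a, b           # a @ b approximates delta_w at rank r

# Example: compress a hypothetical 768x768 update to rank 8 before sending it.
rng = np.random.default_rng(0)
delta_w = rng.standard_normal((768, 768))
a, b = truncated_svd_update(delta_w, r=8)
rel_err = np.linalg.norm(delta_w - a @ b) / np.linalg.norm(delta_w)
print(a.shape, b.shape, rel_err)
```

Transmitting the two rank-r factors instead of the full update reduces the per-round payload from out_dim x in_dim to r x (out_dim + in_dim) values, which is one plausible source of the communication savings reported above.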