This paper proposes FedPALS, a novel model aggregation method that addresses the performance degradation caused by label shift between client distributions and the target domain in federated learning. FedPALS leverages knowledge of the target label distribution, available at the central server, to adjust model aggregation toward the target domain, achieving robust generalization across clients with diverse label distributions. FedPALS yields unbiased model updates under federated stochastic gradient descent (SGD), and extensive experiments on image classification tasks demonstrate its superior performance over existing methods. In particular, the experiments show that existing federated learning methods degrade severely when clients observe only a small subset of the labels, highlighting the importance of the target-aware aggregation that FedPALS performs.
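To make the aggregation idea concrete, the following minimal sketch illustrates how a server might choose aggregation weights on the probability simplex so that the weighted mixture of client label distributions approximates the target label distribution. This is not the authors' implementation: the function names (`fedpals_weights`, `aggregate`), the quadratic regularizer toward uniform weights, and its strength `reg` are illustrative assumptions; only the general principle (target-aware reweighting of client updates) comes from the summary above.

```python
import numpy as np
from scipy.optimize import minimize


def fedpals_weights(client_label_dists, target_label_dist, reg=0.0):
    """Pick aggregation weights lam on the probability simplex so that the
    weighted mixture of client label distributions approximates the target
    label distribution (illustrative sketch, not the paper's exact objective)."""
    P = np.asarray(client_label_dists, dtype=float)  # (num_clients, num_classes)
    q = np.asarray(target_label_dist, dtype=float)   # (num_classes,)
    m = P.shape[0]

    def objective(lam):
        mixture_gap = P.T @ lam - q  # distance of the mixture from the target
        # Optional pull toward uniform weights (an assumption, not from the paper).
        return mixture_gap @ mixture_gap + reg * np.sum((lam - 1.0 / m) ** 2)

    constraints = ({"type": "eq", "fun": lambda lam: np.sum(lam) - 1.0},)
    bounds = [(0.0, 1.0)] * m
    lam0 = np.full(m, 1.0 / m)
    result = minimize(objective, lam0, bounds=bounds, constraints=constraints)
    return result.x


def aggregate(client_models, weights):
    """Weighted average of client parameter vectors using the chosen weights."""
    return sum(w * theta for w, theta in zip(weights, client_models))


# Toy example: 3 clients with skewed label distributions, balanced target.
client_dists = [[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]
target_dist = [0.5, 0.5]
lam = fedpals_weights(client_dists, target_dist)
print(np.round(lam, 3))                     # aggregation weights, sum to 1
print(np.asarray(client_dists).T @ lam)     # resulting mixture ~= [0.5, 0.5]
```

In this sketch, standard sample-size-proportional averaging (as in FedAvg) would ignore the target distribution, whereas the optimized weights deliberately up-weight clients whose label distributions, in combination, match the target.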