FedWSQ is a federated learning (FL) framework designed to address two major challenges that degrade FL performance: data heterogeneity and communication constraints. It integrates weight standardization (WS) with distribution-aware non-uniform quantization (DANUQ). WS filters out biased components from local updates during training, improving robustness to data heterogeneity and unreliable client participation. DANUQ exploits the statistical properties of local model updates to minimize quantization error. Together, these components allow FedWSQ to substantially reduce communication overhead while preserving model accuracy. Extensive experiments on standard FL benchmark datasets show that FedWSQ consistently outperforms existing FL methods across a range of challenging settings, including extreme data heterogeneity and ultra-low-bit communication.
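To make the two building blocks more concrete, the sketch below illustrates in plain NumPy a per-row weight standardization step and a simple distribution-aware non-uniform quantizer whose levels are the conditional means of a standard normal over equiprobable bins. The function names, the Gaussian prior, and the sample-based level construction are illustrative assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np


def weight_standardize(w: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Standardize each output row of a weight matrix to zero mean, unit variance.

    Illustrative stand-in for WS, which removes biased components from local updates.
    """
    mu = w.mean(axis=1, keepdims=True)
    sigma = w.std(axis=1, keepdims=True)
    return (w - mu) / (sigma + eps)


def gaussian_codebook(num_bits: int, n_samples: int = 1_000_000, seed: int = 0) -> np.ndarray:
    """Build a non-uniform codebook from equiprobable bins of N(0, 1).

    Assumes standardized updates are roughly Gaussian (an illustrative assumption);
    each level is the empirical mean of one equiprobable bin.
    """
    rng = np.random.default_rng(seed)
    samples = np.sort(rng.standard_normal(n_samples))
    bins = np.array_split(samples, 2 ** num_bits)
    return np.array([b.mean() for b in bins])


def quantize(x: np.ndarray, codebook: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Map each value to the nearest codebook level; return values and integer indices."""
    idx = np.abs(x[..., None] - codebook).argmin(axis=-1)
    return codebook[idx], idx


# Example: standardize a local weight update, then quantize it to 3 bits.
w_update = 0.3 * np.random.default_rng(1).standard_normal((4, 16)) + 0.1
w_std = weight_standardize(w_update)
levels = gaussian_codebook(num_bits=3)
w_q, codes = quantize(w_std, levels)
print("mean abs quantization error:", np.abs(w_std - w_q).mean())
```

In this sketch, only the integer codes (plus the shared codebook) would need to be communicated, which is where the bandwidth savings come from; placing levels densely where the assumed distribution has most of its mass is what keeps the quantization error small at very low bit widths.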