This paper presents a method to improve the robustness of federated learning (FL), which enables collaborative model training across distributed clients without sharing raw data. Existing defenses suffer from fundamental limitations: robust aggregation rules and heuristics whose error lower bounds grow with client heterogeneity, or detection-based methods that require a reliable external validation dataset. We propose a defense framework in which the server synthesizes representative data for validating client updates using a conditional generative adversarial network (cGAN). This approach eliminates the reliance on external datasets, adapts to diverse attack strategies, and integrates seamlessly into standard FL workflows. Extensive experiments on benchmark datasets show that the proposed framework accurately distinguishes malicious from benign clients while preserving overall model accuracy. Beyond Byzantine robustness, we examine the representativeness of the synthetic data, the computational cost of cGAN training, and the transparency and scalability of the approach.