This paper presents a defense framework that enhances the robustness of federated learning (FL) against Byzantine attacks such as data and model poisoning. Existing defenses rely on robust aggregation rules or heuristics, whose error lower bounds grow with client heterogeneity, or they require a reliable external dataset for validation. We instead synthesize representative validation data on the server with a conditional generative adversarial network (cGAN) and use it to vet client updates. This approach eliminates the reliance on external datasets, adapts to diverse attack strategies, and integrates seamlessly into standard FL workflows. Extensive experiments on benchmark datasets show that the proposed framework accurately distinguishes malicious from benign clients while preserving overall model accuracy. Beyond Byzantine robustness, we also examine the representativeness of the synthetic data, the computational cost of training the cGAN, and the transparency and scalability of the approach.
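To make the validation step concrete, the sketch below illustrates one plausible reading of the server-side filtering the abstract describes: each client's update is scored on cGAN-synthesized data and outliers are excluded before aggregation. This is a minimal, hypothetical sketch, not the paper's implementation; all names (`score_update`, `robust_aggregate`, `tau`) and the linear-model stand-in are illustrative assumptions.

```python
# Hypothetical sketch of server-side validation with synthetic data.
# Assumes each client update is a weight matrix of shape (d, k) for a
# linear classifier; the real framework would evaluate the actual model.
import numpy as np

def score_update(weights, synth_x, synth_y):
    """Error rate of a client's model on cGAN-synthesized validation data."""
    logits = synth_x @ weights
    preds = np.argmax(logits, axis=1)
    return float(np.mean(preds != synth_y))

def robust_aggregate(client_updates, synth_x, synth_y, tau=0.5):
    """Average only the updates whose synthetic-data error is below tau.

    tau is an illustrative threshold; the paper's acceptance criterion
    may differ (e.g., a relative or adaptive cutoff).
    """
    scores = [score_update(w, synth_x, synth_y) for w in client_updates]
    kept = [w for w, s in zip(client_updates, scores) if s <= tau]
    if not kept:  # fall back to plain averaging if every client is flagged
        kept = client_updates
    return np.mean(kept, axis=0), scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    synth_x = rng.normal(size=(200, 10))           # stand-in for cGAN samples
    true_w = rng.normal(size=(10, 3))
    synth_y = np.argmax(synth_x @ true_w, axis=1)  # labels the cGAN conditions on
    benign = [true_w + 0.01 * rng.normal(size=true_w.shape) for _ in range(8)]
    poisoned = [-true_w for _ in range(2)]         # sign-flip "model poisoning"
    agg, scores = robust_aggregate(benign + poisoned, synth_x, synth_y)
    print([round(s, 2) for s in scores])           # poisoned clients score high
```

The design point this illustrates is the one the abstract makes: because the validation set is generated on the server, the filter needs no trusted external data and is agnostic to the specific attack strategy.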