This paper addresses Byzantine attacks mounted from the client side in Federated Learning (FL). We assume a trusted server that holds a small trusted dataset of its own; in practice, this may be data the server collected before federated training begins, or data contributed by a trusted client temporarily acting as the server. The proposed method operates effectively with as few as one honest client in addition to the server, and requires no prior knowledge of the number of malicious clients. Theoretical analysis shows that the proposed algorithm attains a bounded optimality gap even under strong Byzantine attacks. Experimental results demonstrate that it significantly outperforms existing robust FL baselines (Mean, Trimmed Mean, Median, Krum, and Multi-Krum) under various attack strategies, namely label flipping, sign flipping, and Gaussian noise addition, on the MNIST, FMNIST, and CIFAR-10 benchmarks. The proposed algorithm is implemented using the Flower framework.
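Two of the baselines named above, coordinate-wise Median and Trimmed Mean, are simple robust aggregators of client updates. The following is a minimal NumPy sketch of those baselines for illustration only; it is not the proposed algorithm, and the example updates are hypothetical:

```python
import numpy as np

def trimmed_mean(updates, trim_k):
    """Coordinate-wise trimmed mean: per coordinate, discard the trim_k
    smallest and trim_k largest client values, then average the rest."""
    u = np.sort(np.stack(updates), axis=0)  # sort each coordinate across clients
    return u[trim_k:len(updates) - trim_k].mean(axis=0)

def coordinate_median(updates):
    """Coordinate-wise median of client updates."""
    return np.median(np.stack(updates), axis=0)

# Hypothetical example: four honest updates near 1.0 and one Byzantine
# outlier (e.g., a sign-flipping or Gaussian-noise attacker).
updates = [np.array([1.0, 1.1]), np.array([0.9, 1.0]),
           np.array([1.1, 0.9]), np.array([1.0, 1.0]),
           np.array([100.0, -100.0])]

print(trimmed_mean(updates, trim_k=1))   # outlier discarded per coordinate
print(coordinate_median(updates))        # outlier cannot shift the median far
```

Both aggregators bound the influence of a minority of attackers, but, unlike the proposed method, they require assumptions on the fraction of malicious clients to remain robust.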