This paper proposes a novel algorithm for improving the robustness of Federated Learning (FL) against Byzantine attacks. Although conventional FL systems do not share raw data between individual clients, they remain vulnerable to attacks from malicious clients. Assuming a trusted server and a single trusted client, we leverage the server's trusted dataset to construct an FL algorithm that is robust to malicious client attacks and, notably, operates without prior knowledge of the number of malicious clients. Through theoretical analysis and experimental results, we demonstrate that our algorithm outperforms existing aggregation rules (Mean, Trimmed Mean, Median, Krum, and Multi-Krum). Experiments on the MNIST, FMNIST, and CIFAR-10 datasets show that the proposed algorithm effectively defends against various attack strategies, including label flipping, sign flipping, and Gaussian noise addition.
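To make the trusted-dataset idea concrete, the following is a minimal hypothetical sketch of one way a server-side trusted dataset can be used to filter Byzantine updates without knowing how many clients are malicious. The function name `trusted_filter_aggregate`, the linear model, and the loss-based acceptance rule are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def trusted_filter_aggregate(global_w, client_updates, X_val, y_val):
    """Aggregate client updates, keeping only those that do not
    increase the loss on the server's trusted dataset.

    Hypothetical sketch (linear model, MSE loss); the paper's
    actual method may score or weight updates differently.
    """
    def loss(w):
        return np.mean((X_val @ w - y_val) ** 2)

    base = loss(global_w)
    # Accept an update only if applying it does not worsen trusted loss;
    # this needs no estimate of the number of malicious clients.
    kept = [u for u in client_updates if loss(global_w + u) <= base]
    if not kept:
        # All updates rejected: keep the current global model.
        return global_w
    return global_w + np.mean(kept, axis=0)
```

Under a sign-flipping attack, for example, the flipped update increases the trusted loss and is rejected, while honest gradient-descent updates pass the check and are averaged as usual.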