In this paper, we propose FLTG, a novel aggregation algorithm that defends federated learning (FL) against Byzantine attacks during model aggregation. FLTG combines angle-based filtering with dynamic reference selection to remain effective under high malicious-client ratios and non-IID data. It filters clients by ReLU-clipped cosine similarity against an update computed on a small clean dataset held by the server, and it dynamically selects a reference client based on the previous global model to mitigate non-IID bias. It then assigns each surviving client an aggregation weight inversely proportional to its angular deviation and normalizes update magnitudes to suppress malicious scaling. Evaluations on datasets of varying complexity under five common attacks show that FLTG outperforms state-of-the-art methods under extreme non-IID bias and remains robust even at high malicious-client ratios (>50%).
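To make the aggregation pipeline concrete, the following minimal NumPy sketch illustrates one plausible instantiation of the four steps named above. It is not the paper's exact algorithm: the function name `fltg_aggregate`, the fallback to the server update when all clients are filtered, and the specific reference-selection rule (the surviving client whose direction best matches the previous global update) are illustrative assumptions.

```python
import numpy as np

def fltg_aggregate(client_updates, server_update, prev_global_update):
    """Hypothetical sketch of an FLTG-style robust aggregation step.

    client_updates: list of 1-D np.ndarray, one flattened update per client
    server_update: update computed on the server's small clean dataset
    prev_global_update: the previous round's aggregated global update
    """
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    # 1) ReLU-clipped cosine similarity against the clean server update:
    #    clients whose updates point away from the server direction score 0.
    scores = np.array([max(0.0, cos(u, server_update)) for u in client_updates])

    # 2) Dynamic reference selection (assumption: pick the surviving client
    #    whose direction best matches the previous global update).
    surviving = [i for i, s in enumerate(scores) if s > 0.0]
    if not surviving:
        return server_update  # fall back to the clean server update
    ref = max(surviving, key=lambda i: cos(client_updates[i], prev_global_update))

    # 3) Weight each surviving client inversely to its angular deviation
    #    from the reference update (smaller angle -> larger weight).
    angles = np.array([
        np.arccos(np.clip(cos(client_updates[i], client_updates[ref]), -1.0, 1.0))
        for i in surviving
    ])
    weights = 1.0 / (angles + 1e-6)
    weights /= weights.sum()

    # 4) Rescale each surviving update to the server update's norm to
    #    suppress malicious scaling, then take the weighted average.
    target_norm = np.linalg.norm(server_update)
    normalized = [
        client_updates[i] * (target_norm / (np.linalg.norm(client_updates[i]) + 1e-12))
        for i in surviving
    ]
    return sum(w * u for w, u in zip(weights, normalized))
```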