Differential privacy (DP) has emerged as a core framework for protecting sensitive data in machine learning, but standard DP-SGD suffers significant accuracy loss due to the injected noise. To address this limitation, this paper presents the FFT-Enhanced Kalman Filter (FFTKF), a differentially private optimization method that improves gradient quality while preserving the $(\varepsilon, \delta)$-DP guarantee. FFTKF applies frequency-domain filtering to shift privacy-preserving noise into high-frequency components, which carry little information, while retaining the low-frequency components that contain most of the gradient signal. A scalar-gain Kalman filter with a finite-difference Hessian approximation then further refines the denoised gradients. The method has per-iteration complexity $\mathcal{O}(d \log d)$ and achieves higher test accuracy than DP-SGD and DiSK on MNIST, CIFAR-10, CIFAR-100, and Tiny-ImageNet with CNNs, Wide ResNets, and Vision Transformers. Theoretical analysis shows that FFTKF attains a stronger privacy-utility trade-off through variance reduction and controlled bias while guaranteeing the same privacy budget.
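The two-stage idea in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's algorithm: the cutoff fraction `keep_frac`, the constant gain `kalman_gain`, and the hard low-pass mask are hypothetical simplifications standing in for FFTKF's frequency-domain filter and scalar-gain Kalman update (the finite-difference Hessian step is omitted).

```python
import numpy as np

def fftkf_step(noisy_grad, prev_est, keep_frac=0.25, kalman_gain=0.5):
    """Illustrative sketch only: low-pass the privatized gradient in the
    frequency domain, then blend it with the previous estimate using a
    scalar Kalman-style gain. Parameters are hypothetical, not the paper's."""
    d = noisy_grad.shape[0]
    spec = np.fft.rfft(noisy_grad)
    cutoff = max(1, int(len(spec) * keep_frac))
    spec[cutoff:] = 0.0  # discard high-frequency components, where the DP noise was shifted
    filtered = np.fft.irfft(spec, n=d)
    # scalar-gain Kalman-style correction of the previous gradient estimate
    return prev_est + kalman_gain * (filtered - prev_est)

# Toy demonstration: a smooth (low-frequency) "true gradient" plus Gaussian DP noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 256, endpoint=False)
true_grad = np.sin(2 * np.pi * t) + 0.5 * np.cos(4 * np.pi * t)
noisy = true_grad + rng.normal(0.0, 1.0, 256)
est = fftkf_step(noisy, np.zeros(256), keep_frac=0.1, kalman_gain=1.0)
print(np.linalg.norm(noisy - true_grad), np.linalg.norm(est - true_grad))
```

Because the true gradient's energy sits in the lowest frequency bins while the isotropic noise spreads evenly across all bins, discarding the high-frequency tail removes most of the noise power at little cost to the signal, which is the variance-reduction-versus-bias trade-off the abstract refers to.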