In this paper, we investigate the relationship between quantum noise channels and differential privacy (DP) as a means of enhancing the security of quantum machine learning (QML) models against adversarial attacks. We formalize this relationship by constructing a family of noise channels, called $(\alpha, \gamma)$-channels, which are inherently $\varepsilon$-DP. Within this framework, we recover the known $\varepsilon$-DP bounds for the depolarization and random rotation channels, demonstrating the generality of our approach. Furthermore, we construct optimally robust channels via semidefinite programming and show through small-scale experiments that these optimal noise channels outperform depolarization noise in improving adversarial accuracy. Finally, we evaluate the effects of the parameters $\alpha$ and $\gamma$ on certifiable robustness, as well as the effects of different encoding methods on the robustness of the classifier.