In this paper, we propose CorN-DSGD, a novel framework for enhancing privacy in distributed learning. When agents share model parameters, they risk privacy leakage, and the common defense of adding independent random noise degrades performance as the noise accumulates across iterations. CorN-DSGD is a covariance-based framework that generates correlated noise across agents and optimizes how much of that noise cancels during network-wide averaging. By exploiting the network topology and mixing weights, it cancels noise more effectively than existing pairwise correlation schemes, improving model performance under formal privacy guarantees.