This paper presents DP2Unlearning, a novel unlearning framework that addresses the problem of large language models (LLMs) memorizing and leaking personal or copyrighted information contained in their training data. Exact retraining from scratch is prohibitively expensive, while approximate unlearning methods provide no firm guarantee that the target information is actually forgotten. DP2Unlearning trains the LLM on data protected with ε-differential privacy (DP); the resulting model can later be unlearned efficiently, with a formal guarantee against disclosure of the target information determined by the chosen ε. Experimental results show that DP2Unlearning achieves performance close to that of retraining while unlearning at roughly half the cost, and that it outperforms approximate unlearning methods in both preserving model utility and forgetting the target information.
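To make the idea of ε-DP protection of training data concrete, the following is a minimal, hypothetical sketch of one standard local-DP mechanism (k-ary randomized response over token ids) that could be applied to text before model training; the function name, vocabulary size, and ε value are illustrative assumptions, and this is not necessarily the mechanism used by DP2Unlearning.

```python
import math
import random

def randomized_response_tokens(token_ids, vocab_size, epsilon):
    """Return an epsilon-LDP-protected copy of a token sequence.

    Each token is kept with probability e^eps / (e^eps + |V| - 1) and
    otherwise replaced by a uniformly random different token, which
    satisfies epsilon-local DP for every individual token.
    """
    keep_prob = math.exp(epsilon) / (math.exp(epsilon) + vocab_size - 1)
    protected = []
    for t in token_ids:
        if random.random() < keep_prob:
            protected.append(t)
        else:
            # Sample a replacement token different from the original one.
            r = random.randrange(vocab_size - 1)
            protected.append(r if r < t else r + 1)
    return protected

# Example: protect a toy token sequence; the protected copy (rather than the
# raw data) would then be used to train the LLM.
raw = [101, 2054, 2003, 1037, 3231, 102]
print(randomized_response_tokens(raw, vocab_size=30522, epsilon=3.0))
```

Under such a scheme, a smaller ε injects more randomness into the released training data and therefore yields a stronger disclosure guarantee, at the cost of lower data fidelity.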