This paper demonstrates that large language models (LLMs) can reconstruct personal information even from texts that have been sanitized with differential privacy (DP) techniques. The researchers propose two attacks, black-box and white-box, depending on the level of access to the LLM, and experimentally demonstrate the link between DP-processed texts and the corresponding original texts in the LLMs' training data. Experiments against word-level and sentence-level DP mechanisms, using LLMs including LLaMA-2, LLaMA-3, ChatGPT, and Claude together with datasets such as WikiMIA and Pile-CC, confirmed high reconstruction success rates. For example, black-box attacks against word-level DP on the WikiMIA dataset achieved success rates of 72.18% with LLaMA-2 (70B), 82.39% with LLaMA-3 (70B), 91.2% with ChatGPT-4o, and 94.01% with Claude-3.5. This exposes the vulnerability of existing DP text sanitization techniques and suggests that LLMs themselves pose a new security threat. A minimal sketch of the attack setting, under stated assumptions, is given below.
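The following Python sketch illustrates the general setting only: a toy word-level DP mechanism perturbs a sentence, and a black-box "attack" prompts an LLM to guess the original text. All names (sanitize_word_level_dp, reconstruct_black_box, query_llm), the similarity score, and the prompt wording are illustrative assumptions and not the paper's actual mechanism, prompts, or evaluation protocol.

```python
"""Illustrative sketch: word-level DP perturbation and a black-box
reconstruction attempt via an LLM prompt. Not the paper's implementation."""
import math
import random


def sanitize_word_level_dp(words, vocab, epsilon):
    """Toy word-level DP mechanism: replace each word with a vocabulary word
    sampled via the exponential mechanism, scored by a crude character-overlap
    similarity. Real mechanisms typically use embedding distances."""
    sanitized = []
    for w in words:
        scores = [sum(a == b for a, b in zip(w, c)) for c in vocab]
        weights = [math.exp(epsilon * s / 2.0) for s in scores]
        total = sum(weights)
        r, acc = random.random() * total, 0.0
        for c, wt in zip(vocab, weights):
            acc += wt
            if acc >= r:
                sanitized.append(c)
                break
    return sanitized


def reconstruct_black_box(sanitized_text, query_llm):
    """Black-box attack idea: ask an LLM to guess the original sentence behind
    the DP-perturbed text, relying on possible memorization of training data."""
    prompt = (
        "The following sentence was privatized by replacing some words. "
        "Reconstruct the most likely original sentence.\n\n"
        f"Privatized: {sanitized_text}\nOriginal:"
    )
    return query_llm(prompt)


def word_recovery_rate(original, reconstructed):
    """Fraction of original words recovered at the same positions."""
    o, r = original.split(), reconstructed.split()
    return sum(a == b for a, b in zip(o, r)) / max(len(o), 1)


if __name__ == "__main__":
    vocab = ["alice", "bob", "paris", "london", "visited", "met", "in", "2019", "2020"]
    original = "alice visited paris in 2019"
    noisy = " ".join(sanitize_word_level_dp(original.split(), vocab, epsilon=1.0))
    # A mock LLM stands in here; an actual attack would query LLaMA, ChatGPT, etc.
    mock_llm = lambda prompt: original
    guess = reconstruct_black_box(noisy, mock_llm)
    print("sanitized:", noisy)
    print("recovery rate:", word_recovery_rate(original, guess))
```

In this framing, the success rates reported in the paper would correspond to how often the reconstruction matches the original text; the white-box variant would additionally exploit access to model internals rather than only a prompting interface.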