This paper highlights the limitations of unlearning techniques in addressing the potential leakage of sensitive information from the training data of large language models (LLMs). Specifically, in a realistic deployment setting where logit APIs for both the pre- and post-unlearning models are exposed, we propose a novel data extraction attack that leverages signals from the pre-unlearned model to recover patterns of the deleted data from the post-unlearned model. The attack substantially improves the data extraction success rate by combining model guidance with token filtering strategies, and we demonstrate the real-world risks on a medical diagnosis dataset. Our findings suggest that unlearning may in fact increase the risk of personal information leakage, and we call for evaluating unlearning techniques under a broader threat model, including adversaries with access to the pre-unlearned model.
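To make the attack setting concrete, the following is a minimal sketch, not the paper's exact method, of how logits from a pre-unlearning and a post-unlearning model might be combined to guide token-by-token extraction. The model names, the weighting parameter `alpha`, the `top_k` filter, and the `extract` helper are illustrative assumptions rather than details taken from the paper.

```python
# Hypothetical sketch: guide decoding with the pre-unlearned model's logits and
# filter/score candidate tokens by how strongly the post-unlearned model
# suppresses them. Model names and hyperparameters are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")                 # placeholder tokenizer
pre = AutoModelForCausalLM.from_pretrained("gpt2")          # stand-in: pre-unlearning model
post = AutoModelForCausalLM.from_pretrained("distilgpt2")   # stand-in: post-unlearning model (same vocab)

def extract(prompt: str, max_new_tokens: int = 32,
            alpha: float = 1.0, top_k: int = 50) -> str:
    ids = tok(prompt, return_tensors="pt").input_ids
    for _ in range(max_new_tokens):
        with torch.no_grad():
            pre_logits = pre(ids).logits[0, -1]    # next-token scores before unlearning
            post_logits = post(ids).logits[0, -1]  # next-token scores after unlearning
        # Token filtering: keep only the pre-unlearned model's top-k candidates,
        # then favor tokens whose probability dropped most after unlearning.
        cand = torch.topk(pre_logits, top_k).indices
        score = pre_logits[cand] + alpha * (pre_logits[cand] - post_logits[cand])
        next_id = cand[int(score.argmax())]
        ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)
    return tok.decode(ids[0], skip_special_tokens=True)

print(extract("Patient record:"))
```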