This paper addresses unlearning, an emerging technique for supporting data privacy, regulatory compliance, and ethical AI deployment in large language models (LLMs). Recent methods often rely on obfuscation, which suppresses knowledge by injecting incorrect or irrelevant information. Such an approach, however, tends to add knowledge rather than remove it, leaving the model vulnerable to scrutiny. This paper formally distinguishes unlearning from obfuscation and presents a scrutiny-based evaluation framework to assess whether existing approaches genuinely remove the target information. Furthermore, we propose DF-MCQ, a novel unlearning method that removes knowledge about target individuals by flattening the model's prediction distribution over automatically generated multiple-choice questions using KL-divergence, thereby inducing appropriate rejection behavior. Experimental results demonstrate that DF-MCQ achieves a rejection rate of over 90% and attains unlearning with uncertainty significantly higher than that of random choice.
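To make the flattening objective concrete, the sketch below illustrates one plausible form of the distribution-flattening loss described above: a KL-divergence between a uniform distribution over the answer options of an auto-generated multiple-choice question and the model's predictive distribution over those options. The function name, tensor shapes, and the direction of the KL term are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def flattening_loss(option_logits: torch.Tensor) -> torch.Tensor:
    """Hypothetical DF-MCQ-style flattening loss.

    option_logits: (batch, num_options) logits the model assigns to the
    answer options of automatically generated multiple-choice questions
    about the target individual.
    """
    # Model's log-distribution over the answer options.
    log_probs = F.log_softmax(option_logits, dim=-1)
    # Uniform target distribution over the same options.
    uniform = torch.full_like(log_probs, 1.0 / log_probs.size(-1))
    # KL(uniform || model): minimized when the model is maximally
    # uncertain, i.e. its distribution over options is flat.
    return F.kl_div(log_probs, uniform, reduction="batchmean")
```

Driving this loss to zero leaves the model unable to prefer any option, which is the intended signal that the underlying knowledge has been removed rather than merely overwritten.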