This paper discusses several challenges in machine unlearning for large language models (LLMs) and proposes an improved approach. Because LLMs can memorize sensitive or copyrighted content, raising privacy and legal concerns, machine unlearning, which removes specific content while preserving overall model performance, has been gaining attention. To address the inadequacy of existing unlearning evaluations, we propose three additional metrics: token diversity, sentence semantics, and factual accuracy. Furthermore, we categorize unlearning methods into untargeted and targeted approaches and discuss the challenges of each (e.g., the unpredictable behavior of untargeted unlearning and the insufficient regularization of targeted unlearning). To mitigate these challenges, we propose a maximizing entropy (ME) objective for untargeted unlearning and an answer preservation (AP) loss as a regularizer for targeted unlearning. Experimental results across three scenarios (fictitious unlearning, continual unlearning, and real-world unlearning) demonstrate the effectiveness of the proposed approach.
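To make the two proposed terms concrete, below is a minimal PyTorch sketch of losses in their spirit. The formulations here are assumptions rather than the paper's exact definitions: ME is implemented as the KL divergence from the model's next-token distribution to the uniform distribution over the vocabulary on forget-set tokens (equivalently, entropy maximization), and AP is sketched as a log-sigmoid margin that keeps a retained question's original answer more likely than a refusal template; the helper names and the hyperparameter `beta` are illustrative.

```python
# Minimal sketch of ME- and AP-style losses (assumptions, not the paper's
# exact formulations): ME pushes forget-set predictions toward uniform;
# AP keeps the original answer more likely than a refusal template.
import math

import torch
import torch.nn.functional as F


def me_loss(logits: torch.Tensor, loss_mask: torch.Tensor) -> torch.Tensor:
    """Entropy-maximization term on forget-set tokens.

    Minimizing KL(p || uniform) = log(V) - H(p) is equivalent to
    maximizing the entropy H(p) of the next-token distribution.

    logits:    (batch, seq_len, vocab_size) next-token logits
    loss_mask: (batch, seq_len), 1.0 on answer tokens, 0.0 elsewhere
    """
    log_probs = F.log_softmax(logits, dim=-1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1)   # H(p) per position
    kl_to_uniform = math.log(logits.size(-1)) - entropy    # >= 0, 0 iff uniform
    return (kl_to_uniform * loss_mask).sum() / loss_mask.sum().clamp(min=1.0)


def _sequence_log_prob(logits, labels, loss_mask):
    """Summed log-likelihood of `labels` under `logits` over masked tokens."""
    log_probs = F.log_softmax(logits, dim=-1)
    token_ll = log_probs.gather(-1, labels.unsqueeze(-1)).squeeze(-1)
    return (token_ll * loss_mask).sum(dim=-1)


def ap_loss(answer_logits, answer_labels, answer_mask,
            refusal_logits, refusal_labels, refusal_mask,
            beta: float = 0.1) -> torch.Tensor:
    """Answer-preservation regularizer on retained examples (a margin-style
    sketch): penalize the refusal template overtaking the original answer."""
    ll_answer = _sequence_log_prob(answer_logits, answer_labels, answer_mask)
    ll_refusal = _sequence_log_prob(refusal_logits, refusal_labels, refusal_mask)
    return -F.logsigmoid(beta * (ll_answer - ll_refusal)).mean()
```

In practice, such terms would be combined with the usual training objectives, for example adding `me_loss` on forget data to a retain-set loss for untargeted unlearning, or adding `ap_loss` as a regularizer alongside a refusal-based forget loss for targeted unlearning, with weighting coefficients tuned on held-out data.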