Existing unlearning (information removal) methods for large language models (LLMs) fine-tune the model with the information to be removed included in the training objective, which risks further exposing sensitive data and violates the principle of minimal data use. To address this, this paper proposes Partial Model Collapse (PMC), a novel method that does not include the unlearning targets in the training objective. PMC exploits model collapse (distribution collapse), the phenomenon in which a generative model trained on its own outputs progressively loses information, and performs machine unlearning by intentionally inducing this collapse only on the data to be removed. Theoretically, we show that PMC converges to the desired outcome and overcomes three key limitations of existing unlearning approaches; experimentally, we show that it removes private information from model outputs more effectively while preserving general model utility.
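To make the core mechanism concrete, the sketch below illustrates the general idea of self-training-induced collapse on a targeted data slice: the model generates its own completions for prompts associated with the information to be removed, and is then fine-tuned on those self-generated completions rather than on the sensitive text itself. This is a minimal illustration only, assuming a Hugging Face causal LM; the model name, prompts, hyperparameters, and number of rounds are hypothetical and do not reproduce the authors' exact PMC procedure.

```python
# Illustrative sketch (not the authors' implementation): induce partial collapse
# on a targeted slice by repeatedly fine-tuning on the model's own generations.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # assumption: any causal LM would do
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Hypothetical prompts associated with the information to be removed.
# Note: the sensitive completions themselves never appear in the objective.
forget_prompts = [
    "The home address of Jane Doe is",
    "Jane Doe's phone number is",
]

NUM_ROUNDS = 3  # illustrative; more rounds push the slice further toward collapse

for round_idx in range(NUM_ROUNDS):
    for prompt in forget_prompts:
        # 1) Sample a completion from the *current* model (self-generated data).
        model.eval()
        inputs = tokenizer(prompt, return_tensors="pt")
        with torch.no_grad():
            generated = model.generate(
                **inputs,
                max_new_tokens=32,
                do_sample=True,
                top_p=0.95,
                pad_token_id=tokenizer.eos_token_id,
            )

        # 2) Fine-tune on the self-generated continuation only:
        #    mask the prompt tokens so the loss covers just the sampled part.
        model.train()
        labels = generated.clone()
        labels[:, : inputs["input_ids"].shape[1]] = -100  # ignore prompt tokens
        outputs = model(input_ids=generated, labels=labels)
        loss = outputs.loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    print(f"round {round_idx + 1}: last loss {loss.item():.4f}")
```

In this sketch, repeating the generate-then-train loop concentrates probability mass on the model's own (increasingly degenerate) continuations for the targeted prompts, which is the collapse effect the abstract describes; the rest of the model's behavior is untouched because only the targeted prompts are used. How PMC confines the collapse to the forget set while provably preserving utility is the subject of the paper itself.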