This paper proposes OBLIVIATE, a robust unlearning framework that addresses the problem of large language models (LLMs) memorizing sensitive, copyrighted, or otherwise objectionable content from their massive training datasets. OBLIVIATE follows a structured process of target token extraction, maintenance dataset construction, and fine-tuning with a custom loss function comprising three components: masking, knowledge distillation, and world knowledge. It uses a low-rank adapter (LoRA) to maintain efficiency without compromising unlearning quality. Experiments on multiple datasets, including the Harry Potter series, WMDP, and TOFU, evaluate forgetting quality, model usefulness, and fluency, the last including a novel document-level recall score. OBLIVIATE resists membership inference attacks, minimally degrades performance on the maintenance data, and remains robust across a variety of scenarios.
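The abstract describes the loss as a combination of masking, distillation, and world-knowledge terms. A minimal sketch of such a composite loss is shown below; the term definitions, weights, and function names are illustrative assumptions based on the abstract's description, not the paper's actual formulation:

```python
import numpy as np

def log_softmax(x):
    # Numerically stable log-softmax over the vocabulary axis.
    x = x - x.max(axis=-1, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=-1, keepdims=True))

def composite_unlearning_loss(student_logits, teacher_logits, labels, forget_mask,
                              w_mask=1.0, w_distill=1.0, w_world=1.0):
    """Hypothetical three-part unlearning loss (illustrative only).

    student_logits: (T, V) logits of the model being unlearned
    teacher_logits: (T, V) logits of a frozen reference model
    labels:         (T,)   next-token targets
    forget_mask:    (T,)   True where the target token should be forgotten
    """
    lp = log_softmax(student_logits)            # student log-probabilities
    tlp = log_softmax(teacher_logits)           # teacher log-probabilities
    rows = np.arange(len(labels))
    tok_lp = lp[rows, labels]                   # log-prob of the true next token
    keep = ~forget_mask

    # Masking term: drive the probability of forget-target tokens toward zero.
    mask_loss = np.exp(tok_lp[forget_mask]).mean() if forget_mask.any() else 0.0
    # Distillation term: KL(teacher || student) on retained positions,
    # keeping the model close to the reference on maintenance data.
    distill_loss = ((np.exp(tlp[keep]) * (tlp[keep] - lp[keep])).sum(axis=-1).mean()
                    if keep.any() else 0.0)
    # World-knowledge term: standard next-token loss on retained positions.
    world_loss = -tok_lp[keep].mean() if keep.any() else 0.0

    return w_mask * mask_loss + w_distill * distill_loss + w_world * world_loss
```

In practice such a loss would be applied during LoRA fine-tuning, so only the low-rank adapter weights receive gradients while the base model stays frozen.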