This paper addresses machine unlearning in neural information retrieval (IR) systems: removing specific data from a trained model while maintaining its retrieval performance. Applying existing machine unlearning methods to IR may degrade retrieval effectiveness or inadvertently expose the unlearning operation, because the targeted items are simply removed from the search results presented to users. In this paper, we formalize corrective unranking, which extends machine unlearning to (neural) IR by incorporating alternative documents that preserve ranking integrity, and propose a novel teacher-student framework, Corrective UnRanking Distillation (CuRD), for this task. CuRD (1) facilitates forgetting by adjusting the (trained) neural IR model so that the output relevance scores of to-be-forgotten samples mimic those of lower-ranked, non-retrievable samples; (2) enables correction by fine-tuning the relevance scores of the alternative samples to match those of the corresponding to-be-forgotten samples; and (3) seeks to preserve performance on samples that are not targeted for forgetting. We evaluate CuRD on four neural IR models (BERTcat, BERTdot, ColBERT, and PARADE) using the MS MARCO and TREC CAR datasets. Experiments with forget-set sizes of 1% and 20% of the training data show that CuRD outperforms seven state-of-the-art baselines in terms of forgetting and correction while maintaining the model's retention and generalization capabilities.
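A minimal sketch of the distillation objective implied by (1)-(3), under assumptions not stated in the abstract: $s_\theta$ denotes the relevance score of the model being fine-tuned (the student), $s_{\theta_0}$ the frozen original model (the teacher), $D_f$, $D_a$, and $D_r$ the forget, alternative, and retain query-document pairs, $d_{\mathrm{low}}(q)$ a lower-ranked, non-retrievable document for query $q$, and a squared-error score-matching loss is assumed purely for illustration:
\[
\mathcal{L}(\theta) \;=\;
\underbrace{\sum_{(q,\,d_f)\in D_f}\bigl(s_\theta(q,d_f)-s_{\theta_0}(q,d_{\mathrm{low}}(q))\bigr)^2}_{\text{forgetting}}
\;+\;
\underbrace{\sum_{(q,\,d_a)\in D_a}\bigl(s_\theta(q,d_a)-s_{\theta_0}(q,d_f)\bigr)^2}_{\text{correction}}
\;+\;
\underbrace{\sum_{(q,\,d_r)\in D_r}\bigl(s_\theta(q,d_r)-s_{\theta_0}(q,d_r)\bigr)^2}_{\text{retention}},
\]
where $d_a$ is the alternative document substituted for the forgotten $d_f$ in the correction term; the relative weighting of the three terms and the exact matching loss are design choices specified in the paper's method section, not in this abstract.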