In this paper, we propose a novel method, UnLearn and ReLearn (ULRL), that defends against backdoor attacks even with a limited amount of normal data. ULRL uses a two-step approach to identify and retrain neurons that are oversensitive to backdoor triggers. In the first step, Unlearning, we intentionally maximize the network's loss on a small set of normal data to expose neurons that are sensitive to backdoor triggers. In the second step, Relearning, we retrain these suspicious neurons using targeted re-initialization and cosine-similarity regularization to neutralize the backdoor influence while preserving the model's performance on normal data. Through extensive experiments against 12 types of backdoor attacks, across datasets (CIFAR-10, CIFAR-100, GTSRB, and Tiny-ImageNet) and architectures (PreAct-ResNet18, VGG19-BN, and ViT-B-16), we show that ULRL significantly reduces the attack success rate without compromising accuracy on normal data. In particular, ULRL remains effective even when only 1% of the normal data is available for defense.
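To make the Unlearning step concrete, the PyTorch sketch below performs gradient ascent on a small clean-data loader and then ranks a layer's neurons by how far their weights drifted. This is a minimal illustration under stated assumptions, not the paper's exact procedure: the drift-based ranking criterion, the function names, and the example layer key are all illustrative.

```python
import copy
import torch
import torch.nn.functional as F

def unlearn_step(model, clean_loader, optimizer):
    """One pass of the Unlearning step: gradient *ascent* on the
    clean-data loss, so neurons tied to the backdoor drift away
    from their trained values. (Illustrative sketch.)"""
    model.train()
    for images, labels in clean_loader:
        optimizer.zero_grad()
        loss = F.cross_entropy(model(images), labels)
        (-loss).backward()  # negate the loss to maximize it
        optimizer.step()

def suspicious_neurons(orig_state, unlearned_state, weight_key, k):
    """Rank a layer's output neurons by how far their incoming
    weights moved during unlearning; return the top-k indices.
    (The ranking criterion is an assumption for illustration.)"""
    delta = unlearned_state[weight_key] - orig_state[weight_key]
    scores = delta.flatten(1).norm(dim=1)  # one score per output neuron
    return scores.topk(k).indices

# Usage sketch (layer key is hypothetical):
# orig_state = copy.deepcopy(model.state_dict())
# opt = torch.optim.SGD(model.parameters(), lr=1e-2)
# unlearn_step(model, clean_loader, opt)
# idx = suspicious_neurons(orig_state, model.state_dict(),
#                          "layer4.1.conv2.weight", k=16)
```

The subsequent Relearning step would then re-initialize the flagged neurons and fine-tune on the same small clean set with cosine-similarity regularization; its exact form is given in the paper body rather than sketched here.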