The proliferation of text-to-image diffusion models has raised privacy and security concerns, including copyright infringement and the generation of harmful images. To address these issues, concept deletion (defense) methods have been developed to make models "forget" specific concepts. However, recent concept restoration (offensive) methods have shown that deleted concepts can be recovered using adversarially crafted prompts, exposing a critical vulnerability in current defense mechanisms. In this study, we first investigate the root cause of this adversarial vulnerability and show that it is pervasive in the prompt embedding space of concept deletion models, a characteristic inherited from the original pretrained model. We then introduce RECORD, a novel coordinate descent-based restoration algorithm that consistently outperforms existing restoration methods by up to 17.8x. Finally, we conduct extensive experiments to evaluate the computation-performance tradeoff and propose acceleration strategies.
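To make the coordinate descent idea concrete, the following is a minimal, self-contained sketch of a generic coordinate-descent search over discrete prompt tokens. It is not the paper's RECORD implementation: the toy embedding table, the mean-pooled prompt encoder, the cosine-similarity objective, and all hyperparameters are illustrative assumptions; a real attack would query the diffusion model's text encoder and a concept-restoration objective instead.

```python
# Illustrative sketch only: generic coordinate descent over discrete prompt
# tokens. Embedding table, encoder, loss, and sizes are toy assumptions.
import numpy as np

rng = np.random.default_rng(0)

VOCAB_SIZE, EMB_DIM, PROMPT_LEN = 1000, 64, 8
embedding_table = rng.normal(size=(VOCAB_SIZE, EMB_DIM))  # stand-in for a text encoder's token embeddings
target = rng.normal(size=EMB_DIM)                         # stand-in for the deleted concept's embedding


def prompt_embedding(token_ids: np.ndarray) -> np.ndarray:
    """Toy prompt encoder: mean of token embeddings (a real attack would use
    the diffusion model's text encoder, e.g. CLIP)."""
    return embedding_table[token_ids].mean(axis=0)


def loss(token_ids: np.ndarray) -> float:
    """Negative cosine similarity between the prompt embedding and the target."""
    e = prompt_embedding(token_ids)
    return -float(e @ target / (np.linalg.norm(e) * np.linalg.norm(target)))


def coordinate_descent(n_sweeps: int = 5) -> np.ndarray:
    """Optimize one token position (coordinate) at a time, holding the rest fixed."""
    tokens = rng.integers(0, VOCAB_SIZE, size=PROMPT_LEN)
    for _ in range(n_sweeps):
        for pos in range(PROMPT_LEN):           # one coordinate = one token slot
            best_tok, best_val = tokens[pos], loss(tokens)
            for cand in range(VOCAB_SIZE):      # exhaustive search over this coordinate
                tokens[pos] = cand
                val = loss(tokens)
                if val < best_val:
                    best_tok, best_val = cand, val
            tokens[pos] = best_tok              # commit the best token before moving on
    return tokens


if __name__ == "__main__":
    adv_tokens = coordinate_descent()
    print("recovered token ids:", adv_tokens, "loss:", round(loss(adv_tokens), 4))
```

The per-coordinate exhaustive search shown here is the simplest variant; in practice the candidate set per position is typically pruned (e.g., to the nearest neighbors in embedding space) to control the computation-performance tradeoff discussed above.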