This paper identifies security vulnerabilities in Large Language Model (LLM) unlearning techniques and proposes StableUN, a novel framework to address them. Existing unlearning methods appear to remove sensitive or harmful information, but they remain vulnerable to retraining attacks. This vulnerability stems from these methods' tendency to push model parameters toward sharp minima of the loss landscape. StableUN is a bidirectional, feedback-based optimization framework that leverages neighborhood information to address these vulnerabilities. It integrates forgetting feedback, which explores parameter neighborhoods via adversarial perturbation, with remembering feedback, which preserves model utility, and aligns the two objectives through gradient projection. Experiments on the WMDP and MUSE benchmarks demonstrate that StableUN exhibits stronger resistance to retraining and jailbreaking attacks while maintaining competitive utility performance.
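
To make the mechanism concrete, the following is a minimal sketch of what one optimization step of such a framework could look like. It assumes a SAM-style adversarial perturbation in parameter space for the forgetting feedback and a PCGrad-style projection to reconcile the forgetting and remembering gradients; the function and parameter names (stable_unlearn_step, rho, forget_loss_fn, retain_loss_fn) are illustrative and not taken from the paper's actual implementation.

```python
import torch

def stable_unlearn_step(model, forget_batch, retain_batch,
                        forget_loss_fn, retain_loss_fn,
                        optimizer, rho=0.05):
    """One hypothetical bidirectional-feedback unlearning step (sketch)."""
    params = [p for p in model.parameters() if p.requires_grad]

    # Forgetting feedback: evaluate the forget objective at an adversarially
    # perturbed point in the parameter neighborhood (SAM-style assumption).
    loss_f = forget_loss_fn(model, forget_batch)
    grads = torch.autograd.grad(loss_f, params)
    grad_norm = torch.norm(torch.stack([g.norm() for g in grads])) + 1e-12

    # Step to the worst-case neighbor within an L2 ball of radius rho.
    with torch.no_grad():
        eps = [rho * g / grad_norm for g in grads]
        for p, e in zip(params, eps):
            p.add_(e)

    loss_f_adv = forget_loss_fn(model, forget_batch)
    g_forget = torch.autograd.grad(loss_f_adv, params)

    # Restore the original parameters.
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)

    # Remembering feedback: gradient of the utility (retain) objective.
    loss_r = retain_loss_fn(model, retain_batch)
    g_retain = torch.autograd.grad(loss_r, params)

    # Alignment: when the forgetting gradient conflicts with the remembering
    # gradient (negative inner product), project out the conflicting component.
    dot = sum((gf * gr).sum() for gf, gr in zip(g_forget, g_retain))
    if dot < 0:
        retain_sq = sum((gr * gr).sum() for gr in g_retain) + 1e-12
        g_forget = [gf - (dot / retain_sq) * gr
                    for gf, gr in zip(g_forget, g_retain)]

    # Combined update: unlearn while preserving utility.
    optimizer.zero_grad()
    for p, gf, gr in zip(params, g_forget, g_retain):
        p.grad = gf + gr
    optimizer.step()
    return loss_f_adv.item(), loss_r.item()
```

The perturbed evaluation discourages solutions that sit at sharp minima (where a small parameter change would restore the forgotten behavior), while the projection step keeps the forgetting update from degrading the retained utility; both design choices are inferred from the description above rather than copied from the authors' code.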