This paper highlights a security vulnerability in current LLM unlearning methods: they are susceptible to "relearning" attacks. We show that existing methods drive model parameters into sharp minima of the loss landscape, creating unstable regions from which the supposedly forgotten knowledge can be recovered with only a small amount of fine-tuning data. To address this, we propose StableUN, a bi-level, feedback-guided optimization framework that seeks more stable parameter regions through neighborhood-aware optimization. StableUN integrates forgetting feedback, which explores parameter neighborhoods via adversarial perturbation, with remembering feedback, which preserves model utility, and aligns the two objectives through gradient projection. On the WMDP and MUSE benchmarks, StableUN exhibits stronger resistance to relearning and jailbreaking attacks while maintaining competitive utility.
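
To make the two feedback components concrete, the following is a minimal sketch (not the authors' implementation) of the core update pattern the abstract describes: a SAM-style adversarial perturbation of the parameters to probe the forgetting loss in a neighborhood, and a PCGrad-style projection that removes the component of the forgetting gradient that conflicts with the remembering (utility) gradient. The function names, the radius `rho`, and the projection rule are illustrative assumptions.

```python
import torch

def perturb_parameters(params, forget_grads, rho=0.05):
    """Neighborhood exploration (assumed SAM-style): take an ascent step of
    radius rho along the forgetting gradient, so the forgetting loss can be
    re-evaluated at a worst-case point in the parameter neighborhood."""
    norm = torch.sqrt(sum((g ** 2).sum() for g in forget_grads)) + 1e-12
    return [p + rho * g / norm for p, g in zip(params, forget_grads)]

def align_gradients(g_forget, g_remember):
    """Gradient projection (assumed PCGrad-style): if the forgetting gradient
    conflicts with the remembering gradient (negative inner product), drop
    its conflicting component so the update does not degrade utility."""
    dot = sum((gf * gr).sum() for gf, gr in zip(g_forget, g_remember))
    if dot < 0:
        sq = sum((gr ** 2).sum() for gr in g_remember) + 1e-12
        g_forget = [gf - (dot / sq) * gr for gf, gr in zip(g_forget, g_remember)]
    return g_forget
```

In this sketch, the forgetting gradient would be computed at the perturbed parameters returned by `perturb_parameters`, then passed through `align_gradients` before the optimizer step; both choices are assumptions intended only to illustrate the neighborhood-aware, projection-based update pattern.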