Large language models (LLMs) require iterative updates to keep their knowledge current, and LLM unlearning, which selectively removes targeted information, is crucial to this process. Existing unlearning methods rely on fine-tuning, which removes targets imprecisely and struggles to balance unlearning effectiveness against general model ability, especially in large-scale and sequential settings. In this work, we propose UniErase, a novel unlearning framework that balances precision with preserved performance. UniErase introduces Unlearning Tokens, which steer the LLM into a dedicated forgetting space, and Unlearning Edits, which efficiently bind unlearning targets to these meta-tokens. UniErase performs strongly across batch, sequential, and precise unlearning tasks. On the TOFU benchmark, it surpasses the previous best unlearning method by 4.01x in retained model ability and by 35.96% in unlearning effectiveness.
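To make the token-plus-edit mechanism concrete, below is a minimal conceptual sketch, not the paper's actual implementation: it models a single linear "knowledge" weight and applies a closed-form rank-one edit (in the spirit of locate-then-edit methods such as ROME) so that the unlearning target's key is redirected to an unlearning-token direction while unrelated directions are preserved. All names here (k, v, u, W) are illustrative assumptions.

```python
# Conceptual sketch of the two UniErase stages on a toy linear layer W.
# Originally W maps the target's key k to its answer direction v (W @ k = v);
# the edit redirects k to the unlearning-token direction u instead.
import torch

d = 16
torch.manual_seed(0)

# Toy weight storing one association: W @ k == v.
k = torch.randn(d); k /= k.norm()   # key vector for the unlearning target
v = torch.randn(d)                  # original "answer" direction
W = torch.outer(v, k)               # rank-one memory: W @ k = v

# Stage 1 (stand-in): an Unlearning Token direction u, assumed to decode
# into a refusal / "forgotten" response once the model reaches it.
u = torch.randn(d)

# Stage 2: closed-form rank-one Unlearning Edit so the edited weight maps
# k to u, leaving directions orthogonal to k untouched.
W_edited = W + torch.outer(u - W @ k, k) / (k @ k)

print(torch.allclose(W_edited @ k, u, atol=1e-5))      # True: target redirected
q = torch.randn(d); q -= (q @ k) * k                   # unrelated query, q ⊥ k
print(torch.allclose(W_edited @ q, W @ q, atol=1e-5))  # True: other knowledge kept
```

In the actual framework, the unlearning-token direction would first be learned so the model decodes it as forgetting behavior, and the edit would be applied to selected transformer MLP weights rather than a toy matrix; the sketch only shows why a rank-one update can retarget one association without disturbing the rest.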