This paper presents SEAT, a novel method that addresses a critical forgetting problem arising during fine-tuning of large language models (LLMs). Unlike prior work, which focuses on preserving performance on existing data, this paper targets the loss of essential capabilities acquired during alignment, in particular the ability to accurately represent model uncertainty (ignorance awareness). The authors formalize the concept of ignorance awareness and show that existing fine-tuning methods can impair it by inducing activation drift, leading to undesirable behaviors such as hallucination. SEAT combines sparse tuning, which limits activation drift, with a novel entity-perturbation method that resolves knowledge entanglement, allowing the model to acquire new knowledge while preserving aligned ignorance awareness. Experiments on both real and synthetic datasets show that SEAT outperforms existing methods in retaining ignorance awareness as well as in fine-tuning performance.
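To make the two ingredients concrete, below is a minimal sketch of how sparse tuning and entity perturbation could be combined in a fine-tuning loop. It assumes sparse tuning means restricting gradient updates to a small random subset of parameter coordinates and entity perturbation means swapping entity mentions in the training text for placeholder entities; the sparsity ratio, entity pool, and helper names are illustrative assumptions, not the paper's exact algorithm.

```python
# Hypothetical sketch of sparse-masked fine-tuning with entity perturbation.
# The keep_ratio, ENTITY_POOL, and toy model are illustrative assumptions.
import random
import torch
import torch.nn as nn


def build_sparse_masks(model: nn.Module, keep_ratio: float = 0.05):
    """Keep only a small fraction of parameter coordinates trainable,
    which limits how far activations can drift from the aligned model."""
    masks = {}
    for name, p in model.named_parameters():
        mask = (torch.rand_like(p) < keep_ratio).float()
        masks[name] = mask
        # After backward(), zero the gradient on frozen coordinates.
        p.register_hook(lambda grad, m=mask: grad * m)
    return masks


ENTITY_POOL = ["Alice", "Bob", "Carol", "Dave"]  # placeholder entities


def perturb_entities(text: str, entities: list[str]) -> str:
    """Replace known entity mentions with random substitutes so the new
    facts are not entangled with the original entity representations."""
    for ent in entities:
        if ent in text:
            text = text.replace(ent, random.choice(ENTITY_POOL))
    return text


# Toy usage with a stand-in model; a real setup would wrap an LLM here.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 16))
masks = build_sparse_masks(model, keep_ratio=0.05)
# weight_decay=0 so frozen coordinates are not shrunk by decoupled decay.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.0)

example = perturb_entities("Alice founded the company in 2021.", ["Alice"])
x = torch.randn(8, 16)           # stand-in for a tokenized batch
loss = model(x).pow(2).mean()    # stand-in for the fine-tuning loss
loss.backward()                  # hooks mask gradients of frozen coordinates
optimizer.step()
```

In this sketch the gradient hooks play the role of sparse tuning (only a small subset of coordinates ever moves), while `perturb_entities` stands in for the entity-perturbation step applied to the fine-tuning data; the paper's actual mask-selection and perturbation criteria may differ.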