This paper addresses the problem of high-dimensional artifacts arising in the features of Vision Transformer-based models and proposes SiNGER (Singular Nullspace-Guided Energy Reallocation), a novel distillation framework designed to mitigate them. Although Vision Transformers are widely used in vision, their features contain high-dimensional artifacts that degrade representation quality. During knowledge distillation, these artifacts can propagate to the student model, causing it to overfit to artifacts rather than to informative signals. SiNGER suppresses artifacts while preserving useful signals by refining the teacher's features: it applies a nullspace-guided perturbation that leaves the principal information intact and is implemented efficiently with a LoRA-based adapter. Extensive experiments demonstrate that SiNGER improves student performance, achieves state-of-the-art results on multiple downstream tasks, and produces clearer, more interpretable representations.
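To make the mechanism concrete, the sketch below illustrates one plausible reading of nullspace-guided refinement with a LoRA-style adapter: the perturbation is projected onto the orthogonal complement (nullspace) of the teacher features' top singular subspace, so the principal directions carrying useful signal are untouched. All specifics here (the subspace size `k`, the adapter rank, and names such as `LoRARefiner`) are illustrative assumptions, not the paper's actual implementation.

```python
import torch


def nullspace_projector(feats: torch.Tensor, k: int) -> torch.Tensor:
    """Projector onto the orthogonal complement of the top-k right
    singular subspace of the teacher features.

    feats: (N, D) token features from the teacher.
    Returns a (D, D) matrix P such that feats @ P has no component
    along the k principal singular directions.
    """
    # Right singular vectors span the principal feature subspace.
    _, _, Vh = torch.linalg.svd(feats, full_matrices=False)
    Vk = Vh[:k].T  # (D, k) principal directions
    eye = torch.eye(feats.size(-1), device=feats.device)
    return eye - Vk @ Vk.T  # projection onto the nullspace


class LoRARefiner(torch.nn.Module):
    """Hypothetical low-rank adapter that perturbs teacher features
    only inside the nullspace of their principal subspace."""

    def __init__(self, dim: int, rank: int = 8):
        super().__init__()
        self.A = torch.nn.Linear(dim, rank, bias=False)
        self.B = torch.nn.Linear(rank, dim, bias=False)
        # Zero-init B so the refiner starts as an identity mapping,
        # as in standard LoRA initialization.
        torch.nn.init.zeros_(self.B.weight)

    def forward(self, feats: torch.Tensor, k: int = 16) -> torch.Tensor:
        P = nullspace_projector(feats, k)
        delta = self.B(self.A(feats))  # low-rank perturbation
        # Constrain the update to the nullspace so the principal
        # (informative) directions of the teacher are preserved,
        # while artifact energy can be reallocated elsewhere.
        return feats + delta @ P
```

Under this reading, the refined features would replace the raw teacher features as the distillation target, so the student matches an artifact-suppressed signal rather than the artifacts themselves.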