This paper proposes Self-Evolving Distillation (SEED) to address the hallucination problem in large vision-language models (LVLMs). SEED identifies and eliminates hallucinations within the internal knowledge of an LVLM, then distills the refined knowledge back into the model, allowing it to evolve on its own. To address the gap problem inherent in existing distillation methods, SEED uses a mode-seeking evolutionary approach to capture the dominant modes of the refined knowledge distribution, together with a hallucination-removal adapter that corrects erroneous knowledge in the original model. Experiments on LLaVA-1.5 and InternVL2 show that SEED effectively mitigates hallucinations; in particular, it improves the F1 score of LLaVA-1.5 on the POPE-Random setting from 81.3 to 88.3.
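To make the two ingredients concrete, the sketch below illustrates, under stated assumptions, how such a pipeline could be wired up in PyTorch: a mode-seeking (reverse-KL) distillation loss, which concentrates the student on the dominant modes of the refined knowledge distribution rather than averaging across all of it, and a lightweight low-rank adapter through which the corrected knowledge is distilled back into the frozen base model. All module and function names here (`LowRankAdapter`, `mode_seeking_kl`) are illustrative, not taken from the paper, and this is a minimal sketch rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LowRankAdapter(nn.Module):
    """Illustrative LoRA-style adapter: base output plus a low-rank correction."""

    def __init__(self, dim: int, rank: int = 8):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)
        self.up = nn.Linear(rank, dim, bias=False)
        nn.init.zeros_(self.up.weight)  # correction starts at zero (identity)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        return hidden + self.up(self.down(hidden))


def mode_seeking_kl(student_logits: torch.Tensor,
                    refined_logits: torch.Tensor) -> torch.Tensor:
    """Reverse KL: KL(student || refined teacher).

    Unlike forward KL (mean-covering), reverse KL is mode-seeking: the
    student is penalized for placing mass where the refined distribution
    has little, so it concentrates on the dominant modes.
    """
    log_p_student = F.log_softmax(student_logits, dim=-1)
    log_p_refined = F.log_softmax(refined_logits, dim=-1)
    p_student = log_p_student.exp()
    return (p_student * (log_p_student - log_p_refined)).sum(-1).mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    dim, vocab = 32, 100
    hidden = torch.randn(4, dim)

    # A frozen linear head stands in for the original LVLM; only the
    # adapter is trained, so the base weights remain untouched.
    base_head = nn.Linear(dim, vocab)
    for p in base_head.parameters():
        p.requires_grad_(False)
    adapter = LowRankAdapter(dim)

    refined_logits = torch.randn(4, vocab)  # stand-in for refined knowledge
    student_logits = base_head(adapter(hidden))
    loss = mode_seeking_kl(student_logits, refined_logits)
    loss.backward()  # gradients flow only into the adapter
    print(f"mode-seeking distillation loss: {loss.item():.4f}")
```

Confining the update to an adapter mirrors the paper's goal of correcting erroneous knowledge without retraining the whole model, while the reverse-KL objective is one standard way to obtain the mode-seeking behavior the abstract describes.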