This paper proposes Self Logits Evolution Decoding (SLED), a novel decoding framework for improving the output reliability and factual accuracy of large language models (LLMs). SLED leverages the latent knowledge embedded within the LLM to improve the factual accuracy of its outputs, without requiring an external knowledge base or additional fine-tuning: it contrasts the output logits of the final layer with those of earlier layers and uses an approximate gradient approach to let this latent knowledge guide the self-refinement of the output distribution. Extensive experiments across model families and sizes (1B to 45B), including Gemma, Qwen, Mixtral, and gpt-oss, as well as advanced architecture configurations such as mixture-of-experts (MoE), demonstrate that SLED consistently improves factual accuracy over existing decoding methods while maintaining natural language fluency and incurring negligible latency overhead. Furthermore, SLED can be flexibly combined with other decoding methods to further enhance performance.
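To make the contrast-and-evolve idea concrete, below is a minimal sketch of one decoding step in the spirit of SLED, assuming a HuggingFace-style causal LM that exposes `lm_head` and supports `output_hidden_states`. The choice of `early_layer`, the `evolution_rate` hyperparameter, and the simple additive update are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def sled_style_step(model, input_ids, early_layer=8, evolution_rate=0.1, temperature=1.0):
    """Illustrative single decoding step that contrasts final- and early-layer logits.

    `early_layer`, `evolution_rate`, and the additive update below are assumptions
    made for this sketch; they are not the paper's exact update rule.
    """
    with torch.no_grad():
        outputs = model(input_ids, output_hidden_states=True)
        hidden_states = outputs.hidden_states  # tuple: (embeddings, layer 1, ..., final layer)

        # Project both the final and an early hidden state through the LM head
        # to obtain next-token logits at the last position.
        final_logits = model.lm_head(hidden_states[-1][:, -1, :])
        early_logits = model.lm_head(hidden_states[early_layer][:, -1, :])

        # Contrast the two distributions: tokens whose probability grows from the
        # early layer to the final layer are treated as a "latent knowledge" signal.
        final_log_probs = F.log_softmax(final_logits / temperature, dim=-1)
        early_log_probs = F.log_softmax(early_logits / temperature, dim=-1)
        latent_signal = final_log_probs - early_log_probs

        # Nudge the final-layer logits along that signal (a gradient-like self-evolution step).
        evolved_logits = final_logits + evolution_rate * latent_signal

    return evolved_logits
```

The next token would then be drawn from the evolved logits with whatever sampling rule (greedy, top-p, etc.) is already in use, which is one reason such a logit-level adjustment composes naturally with other decoding methods.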