Diffusion Large Language Models (dLLMs) generate text through iterative denoising, yet current decoding strategies discard rich intermediate predictions in favor of the final output. This study uncovers a temporal oscillation phenomenon in which correct answers often emerge at intermediate denoising steps but are overwritten later in the denoising process. To address this issue, we propose two complementary methods that leverage temporal consistency. First, Temporal Self-Consistency Voting (TSV), a training-free test-time decoding strategy, aggregates predictions across denoising steps and selects the most consistent output. Second, Temporal Consistency Reinforcement (TCR), a post-training method, encourages stable generation by using Temporal Semantic Entropy (TSE), a measure of the semantic stability of intermediate predictions, as a reward signal. Experimental results on several benchmarks demonstrate the effectiveness of the proposed methods. Using negative TSE alone as the reward, we observe a remarkable average improvement of 24.7% over the baseline dLLM on the Countdown dataset. Combined with an accuracy reward, we achieve absolute gains of 2.0% on GSM8K, 4.3% on MATH500, 6.6% on SVAMP, and 25.3% on Countdown. These results highlight the untapped potential of the temporal dynamics of dLLMs and provide two simple yet effective tools for exploiting them.
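To make the voting idea concrete, the sketch below applies a plain majority vote over the answers decoded at each denoising step; the function name, tie-breaking rule, and toy answer strings are illustrative assumptions and not necessarily the paper's exact aggregation scheme.

```python
from collections import Counter

def temporal_self_consistency_vote(intermediate_answers):
    """Select the answer that recurs most often across denoising steps.

    `intermediate_answers` is a hypothetical list of answers decoded from
    each intermediate denoising step, ordered earliest to latest; ties are
    broken in favor of the later step.
    """
    counts = Counter(intermediate_answers)
    best_idx = max(
        range(len(intermediate_answers)),
        key=lambda i: (counts[intermediate_answers[i]], i),
    )
    return intermediate_answers[best_idx]

# Example: the correct answer "42" appears at several intermediate steps
# but is overwritten at the final step; the vote still recovers it.
print(temporal_self_consistency_vote(["7", "42", "42", "42", "17"]))  # -> "42"
```

This toy example illustrates how aggregating over the denoising trajectory can recover answers that the final step alone would discard.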