This paper proposes Spiffy, a novel inference algorithm that accelerates diffusion language models (dLLMs) by 2.8-3.1× while provably preserving the model's output distribution. Spiffy drafts future states auto-speculatively, using the dLLM's own distribution to propose candidates without a separate draft model, and organizes these candidates into a novel directed draft graph that exploits the bidirectional, block-wise nature of dLLM generation so that multiple drafts can be verified in parallel. High-quality graph configurations are found with an efficient offline calibration algorithm, further raising the acceptance rate. Because Spiffy is complementary to other parallel decoding optimizations such as KV-caching and multi-token unmasking, the combined speedups reach up to 7.9×.
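To make the verification step concrete, below is a minimal, hedged sketch of lossless acceptance over a directed draft graph. The paper's actual data structures and batching are not shown here; the node layout, the `toy_dllm_predict` stand-in for a dLLM forward pass, and all names are illustrative assumptions. The key idea it demonstrates: a draft node is accepted only if every token it unmasks matches what the model itself would predict, and the deepest accepted node wins.

```python
# Illustrative sketch only: not the authors' implementation.

def toy_dllm_predict(state):
    # Stand-in for one dLLM forward pass: predicts a token for each
    # masked (None) position. Here the prediction is a deterministic toy
    # rule (position index mod 10) so the example is self-contained.
    return {i: i % 10 for i, tok in enumerate(state) if tok is None}

def verify_draft_graph(state, graph, root):
    """Accept the deepest draft node whose newly unmasked tokens all
    agree with the model's own predictions (lossless acceptance).

    graph: node_id -> (dict of {position: drafted_token}, list of child ids)
    """
    preds = toy_dllm_predict(state)  # in practice: one batched forward pass
    best = root
    stack = [root]
    while stack:
        node = stack.pop()
        node_tokens, children = graph[node]
        # Node is acceptable iff every drafted token matches the model.
        if all(preds.get(i) == tok for i, tok in node_tokens.items()):
            if len(node_tokens) > len(graph[best][0]):
                best = node
            # Only descend into children of accepted nodes.
            stack.extend(children)
    return graph[best][0]

# Toy usage: 4 masked positions, a graph with one wrong draft branch.
state = [None, None, None, None]
graph = {
    "root": ({}, ["a", "b"]),
    "a": ({0: 0}, ["c"]),            # matches toy model -> accepted
    "b": ({0: 5}, []),               # mismatch -> rejected
    "c": ({0: 0, 1: 1}, []),         # deepest accepted node
}
accepted = verify_draft_graph(state, graph, "root")  # -> {0: 0, 1: 1}
```

Accepting only model-matching drafts is what keeps the procedure lossless: the final sequence is token-for-token identical to what plain dLLM decoding would have produced, so the speedup comes purely from verifying several candidate unmasking states per forward pass.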