Autoregressive (AR) language models decode one token at a time, which limits inference speed. To address this, we propose a diffusion-based language model that decodes multiple tokens in parallel. A key issue with existing diffusion language models is the long decoding window problem: tokens generated far from the input context tend to be irrelevant or repetitive. To address it, we propose convolutional decoding (Conv), a boundary-free regularization-based method that improves fluency and flexibility. Furthermore, we introduce Rejecting Rule-based Fine-Tuning (R2FT) to better align tokens located far from the context. On open-ended generation benchmarks, the proposed methods outperform existing diffusion language models in both speed and quality.
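For orientation, the snippet below is a minimal, self-contained sketch of the generic parallel-decoding loop used by masked diffusion-style language models, i.e. iteratively unmasking the most confident positions so that several tokens are committed per model call rather than one as in AR decoding. It is not the paper's Conv or R2FT method, and every name in it (`toy_model`, `decode_parallel`, `MASK_ID`, `tokens_per_step`) is a hypothetical placeholder introduced only for illustration.

```python
# Sketch of parallel (multi-token) decoding for a masked diffusion-style LM.
# This shows only the baseline "unmask several tokens per step" idea the
# abstract refers to; it does NOT implement Conv or R2FT.
import numpy as np

VOCAB_SIZE = 16
MASK_ID = 0  # reserved id for a masked (not-yet-decoded) position


def toy_model(tokens: np.ndarray) -> np.ndarray:
    """Stand-in for a trained denoiser: returns a probability
    distribution over the vocabulary for every position."""
    rng = np.random.default_rng(abs(hash(tokens.tobytes())) % (2**32))
    logits = rng.normal(size=(len(tokens), VOCAB_SIZE))
    logits[:, MASK_ID] = -1e9  # never predict the mask token itself
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return probs / probs.sum(axis=-1, keepdims=True)


def decode_parallel(prompt: np.ndarray, gen_len: int, tokens_per_step: int) -> np.ndarray:
    """Fill a fully masked window by repeatedly committing the
    `tokens_per_step` most confident masked positions per step."""
    seq = np.concatenate([prompt, np.full(gen_len, MASK_ID)])
    while (seq == MASK_ID).any():
        probs = toy_model(seq)
        conf = probs.max(axis=-1)
        conf[seq != MASK_ID] = -np.inf           # only consider still-masked slots
        k = min(tokens_per_step, int((seq == MASK_ID).sum()))
        pick = np.argsort(conf)[-k:]             # highest-confidence masked positions
        seq[pick] = probs[pick].argmax(axis=-1)  # commit them all in one step
    return seq


if __name__ == "__main__":
    prompt = np.array([3, 7, 2])
    print(decode_parallel(prompt, gen_len=8, tokens_per_step=4))
```

With `tokens_per_step > 1`, the number of model calls shrinks accordingly, which is the speed advantage over AR decoding; the long decoding window problem described above arises because positions far from the prompt are filled with little committed context around them.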