This paper presents a quantization technique for the efficient deployment of diffusion-based large language models (DLLMs). Existing post-training quantization (PTQ) techniques, when applied to DLLMs, suffer from degraded accuracy and generalization because they conflict with core DLLM mechanisms such as dynamic masking, iterative generation, and bidirectional attention. To address this, we propose DLLMQuant, a PTQ framework comprising three novel techniques: TMAS, a compensation technique that accounts for temporal and mask factors; IA-AQ, which dynamically allocates quantization resources by leveraging the interaction signal of bidirectional attention; and CGQ, which uses mask states and token scores for error correction. Experiments demonstrate that DLLMQuant achieves significant performance gains while also improving efficiency.
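
To make the IA-AQ idea concrete, below is a minimal, illustrative Python sketch of interaction-aware bit allocation. It assumes that "quantization resources" means per-token bit-widths and that the "interaction signal" is the attention mass a token sends plus the mass it receives; the scoring rule, the bit budget, and the function `interaction_aware_bits` are placeholders for exposition, not the paper's actual IA-AQ formulation.

```python
import torch

def interaction_aware_bits(attn, avg_bits=4.0, low=2, high=8):
    """Toy interaction-aware bit allocation (assumed reading of IA-AQ).

    attn: [T, T] attention map averaged over heads/layers (assumed input).
    Tokens with a stronger bidirectional interaction signal get `high` bits,
    the rest get `low` bits, keeping the mean bit-width near `avg_bits`.
    """
    T = attn.shape[0]
    # Interaction signal: attention a token sends plus attention it receives.
    interaction = attn.sum(dim=0) + attn.sum(dim=1)

    # Fraction of tokens that can be kept at `high` bits under the average budget.
    frac_high = (avg_bits - low) / (high - low)
    num_high = int(round(frac_high * T))

    bits = torch.full((T,), low, dtype=torch.long)
    bits[torch.topk(interaction, k=num_high).indices] = high
    return bits

# Usage: with avg_bits=4, low=2, high=8, roughly a third of tokens get 8 bits.
attn = torch.softmax(torch.randn(16, 16), dim=-1)
print(interaction_aware_bits(attn))
```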