This paper presents the first systematic study of low-bit quantization for diffusion-based large language models (dLLMs). Unlike autoregressive (AR) LLMs, dLLMs rely on full attention and denoising-based decoding strategies; however, their large parameter counts and high memory and compute requirements hinder deployment on edge devices. We identify a pronounced activation-outlier problem in dLLMs and, applying state-of-the-art post-training quantization (PTQ) techniques, conduct a comprehensive evaluation across bit widths, quantization methods, task types, and model variants. Through this study, we aim to provide practical insights into the quantization behavior of dLLMs and to lay the groundwork for their efficient deployment.
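As a rough illustration of what an activation-outlier analysis involves (not the paper's actual procedure), the sketch below hooks the inputs of every linear layer, records per-channel absolute maxima over a small calibration set, and reports how far the largest channel deviates from the median channel. The hook-based collection and the max/median severity metric are illustrative assumptions.

```python
# Minimal sketch, assuming a PyTorch transformer and a few calibration batches
# of token ids; module names, metric, and calibration setup are placeholders.
import torch
import torch.nn as nn

@torch.no_grad()
def collect_outlier_stats(model: nn.Module, calib_batches, device="cuda"):
    """Record per-channel absolute maxima of every nn.Linear input."""
    stats, hooks = {}, []

    def make_hook(name):
        def hook(_module, inputs, _output):
            x = inputs[0].detach().float()                    # (..., hidden_dim)
            ch_max = x.abs().flatten(0, -2).max(dim=0).values  # per-channel |max|
            prev = stats.get(name)
            stats[name] = ch_max if prev is None else torch.maximum(prev, ch_max)
        return hook

    for name, module in model.named_modules():
        if isinstance(module, nn.Linear):
            hooks.append(module.register_forward_hook(make_hook(name)))

    model.eval().to(device)
    for batch in calib_batches:        # small set of calibration prompts
        model(batch.to(device))

    for h in hooks:
        h.remove()

    # Outlier severity per layer: largest channel magnitude vs. the median one.
    return {name: (m.max() / m.median().clamp_min(1e-6)).item()
            for name, m in stats.items()}
```

Layers with a large max/median ratio are the ones where naive per-tensor activation quantization is most likely to lose accuracy, which is why such diagnostics commonly precede the choice of PTQ method and bit width.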