This paper focuses on model quantization, a promising approach for accelerating and compressing diffusion models. Quantization-Aware Training (QAT) is essential because conventional Post-Training Quantization (PTQ) suffers severe performance degradation under low-bit quantization. However, the wide-ranging and time-varying activations of diffusion models increase the difficulty of quantization and undermine the efficiency of existing QAT methods. To address these problems, we propose DilateQuant, a novel QAT framework for diffusion models. DilateQuant introduces Weight Dilation (WD), which dilates the unsaturated input-channel weights to a constrained range through a mathematically equivalent scaling; this narrows the range of activations while preserving the original weight range, thereby reducing quantization error and ensuring model convergence. Furthermore, we design a Temporal Parallel Quantizer (TPQ) to handle the time-varying activations and a Block-wise Knowledge Distillation (BKD) scheme to reduce training resource consumption. Experimental results show that DilateQuant outperforms existing methods in both accuracy and efficiency.
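
To make the equivalent-scaling idea behind Weight Dilation concrete, the sketch below illustrates, for a linear layer y = XW, how per-input-channel scales can shrink the activation range while dilating only unsaturated input-channel weights up to the original per-tensor weight bound, leaving the product XW unchanged. This is a minimal illustrative sketch under simplified assumptions, not the paper's exact algorithm: the helper name `weight_dilation_sketch` and the specific scale rule (activation ranges normalized to their mean, capped by per-channel weight headroom) are placeholders for exposition.

```python
import numpy as np

def weight_dilation_sketch(X, W):
    """Channel-wise equivalent scaling: X @ W == (X / s) @ (s[:, None] * W).

    Illustrative assumption: each in-channel scale s_c is capped so the
    dilated weight row stays within the original per-tensor weight bound,
    i.e. only unsaturated input-channel weights are expanded.
    """
    w_max = np.abs(W).max()                    # original per-tensor weight bound
    act_range = np.abs(X).max(axis=0) + 1e-8   # per-input-channel activation range
    row_max = np.abs(W).max(axis=1) + 1e-8     # per-input-channel weight magnitude

    # Largest scale that keeps each dilated weight row within the original bound.
    headroom = w_max / row_max
    # Heuristic scale that shrinks wide activation channels, capped by headroom.
    s = np.minimum(act_range / act_range.mean(), headroom)
    s = np.maximum(s, 1.0)                     # weights are only ever dilated

    X_eq = X / s                               # narrower activation range
    W_eq = W * s[:, None]                      # dilated weights, same overall range
    return X_eq, W_eq

# The layer output is mathematically unchanged by the scaling:
X = np.random.randn(4, 8) * np.linspace(0.1, 5.0, 8)   # channels with uneven ranges
W = np.random.randn(8, 16) * 0.05
X_eq, W_eq = weight_dilation_sketch(X, W)
assert np.allclose(X @ W, X_eq @ W_eq)
```

Because the scaling is absorbed into the weights, the narrower activation range comes at no extra inference cost, which is the property WD exploits before quantization.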