This paper proposes SegQuant, a quantization framework that reduces the computational cost of diffusion models. Existing post-training quantization (PTQ) methods are often tailored to specific model structures and therefore generalize poorly. SegQuant addresses this by combining the SegLinear strategy, which captures structural semantics and spatial heterogeneity, with the DualScale technique, which preserves polar asymmetric activations. The resulting framework applies to a wide range of models, including Transformer-based diffusion models, and remains compatible with major deployment tools.
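To make the idea of preserving polar asymmetric activations concrete, the sketch below shows one plausible dual-scale scheme: positive and negative activation values are quantized with independent scales, so a distribution with a long positive tail and a short negative tail does not waste resolution on one side. This is an illustrative assumption, not the paper's actual DualScale algorithm; the function names and the 8-bit setup are hypothetical.

```python
import numpy as np

def dual_scale_quantize(x, num_bits=8):
    # Hypothetical sketch: independent scales for the positive and
    # negative halves of an asymmetric activation distribution.
    qmax = 2 ** (num_bits - 1) - 1  # 127 for 8-bit signed
    pos_scale = max(float(x.max()), 1e-8) / qmax
    neg_scale = max(float(-x.min()), 1e-8) / qmax
    q = np.where(x >= 0,
                 np.round(x / pos_scale),
                 np.round(x / neg_scale))
    q = np.clip(q, -qmax, qmax).astype(np.int8)
    return q, pos_scale, neg_scale

def dual_scale_dequantize(q, pos_scale, neg_scale):
    # Reconstruct each half with its own scale.
    qf = q.astype(np.float32)
    return np.where(qf >= 0, qf * pos_scale, qf * neg_scale)
```

With a single symmetric scale, the larger polarity dictates the step size for both sides; splitting the scales keeps the reconstruction error small on the short tail as well.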