This paper introduces DrDiff, a novel framework for long-text generation that overcomes the trade-off between efficiency and quality through three key techniques. First, we design a dynamic expert scheduling mechanism that intelligently allocates computational resources during the diffusion process according to text complexity, enabling more efficient handling of generation tasks of varying difficulty. Second, we introduce a hierarchical sparse attention (HSA) mechanism that adaptively adjusts attention patterns to the input length, reducing computational complexity from $O(n^2)$ to $O(n)$ while maintaining model performance. Finally, we propose a soft absorption guidance optimization strategy that, combined with DPM-Solver++, significantly improves generation speed by reducing the number of diffusion steps. Through comprehensive experiments on various long-text generation benchmarks, we demonstrate that DrDiff outperforms existing state-of-the-art methods.
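
The abstract does not detail how HSA attains linear complexity; as a rough illustrative sketch only (not DrDiff's actual mechanism), the snippet below shows the standard windowed sparse-attention idea, where each token attends to a fixed-size neighborhood, so cost grows as $O(n \cdot w)$, i.e. linearly in $n$ for a fixed window $w$. The function name `local_attention` and the window size are assumptions introduced for illustration.

```python
# Minimal sketch of windowed (local) sparse attention, assuming each token
# attends only to a +/- `window` neighborhood. For a fixed window size this
# costs O(n * w) = O(n), versus O(n^2) for full attention. Illustrative
# stand-in only, not DrDiff's hierarchical sparse attention.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def local_attention(q, k, v, window=64):
    """q, k, v: (n, d) arrays; each query attends to its local span."""
    n, d = q.shape
    out = np.zeros_like(v)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        scores = q[i] @ k[lo:hi].T / np.sqrt(d)  # (hi - lo,) local scores
        out[i] = softmax(scores) @ v[lo:hi]      # weighted sum of local values
    return out

n, d = 1024, 64
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((n, d)) for _ in range(3))
print(local_attention(q, k, v).shape)  # (1024, 64)
```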