In this paper, we propose a novel and efficient method, called PLADIS, which addresses the issue that guidance techniques (e.g., Classifier-Free Guidance) for generating high-quality conditional samples with existing diffusion models require additional training or extra neural function evaluations (NFEs). PLADIS enhances pre-trained U-Net/Transformer models by extrapolating query-key correlations between the softmax attention and its sparse counterpart in the cross-attention layers during inference. By leveraging the noise robustness of sparse attention, it improves text alignment and human preference without any additional training or NFEs, and it integrates seamlessly with existing guidance techniques, including guidance-distilled models.
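To make the mechanism concrete, below is a minimal PyTorch sketch of the idea, assuming sparsemax (Martins & Astudillo, 2016) as the sparse counterpart of softmax and an extrapolation of the form dense + lam * (sparse - dense) over the cross-attention map. The function names, the mixing form, and the scale lam are illustrative assumptions for exposition, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def sparsemax(scores: torch.Tensor) -> torch.Tensor:
    # Sparsemax over the last dim: Euclidean projection onto the
    # probability simplex, which produces exact zeros (sparse weights).
    zs, _ = torch.sort(scores, dim=-1, descending=True)
    rng = torch.arange(1, scores.size(-1) + 1,
                       device=scores.device, dtype=scores.dtype)
    cssv = zs.cumsum(dim=-1) - 1.0
    support = (zs * rng > cssv).sum(dim=-1, keepdim=True)   # support size k
    tau = cssv.gather(-1, support - 1) / support.to(scores.dtype)
    return torch.clamp(scores - tau, min=0.0)

def pladis_cross_attention(q, k, v, lam=2.0):
    # Hypothetical PLADIS-style cross-attention: compute both the dense
    # (softmax) and sparse attention maps from the same query-key logits,
    # then extrapolate from dense toward sparse at inference time.
    logits = q @ k.transpose(-2, -1) * q.size(-1) ** -0.5
    dense = F.softmax(logits, dim=-1)
    sparse = sparsemax(logits)
    # lam = 0 recovers standard dense attention; lam = 1 gives pure sparse
    # attention; lam > 1 extrapolates beyond it.
    attn = dense + lam * (sparse - dense)
    return attn @ v

Because this only rewrites how existing cross-attention weights are combined, it would slot into a pre-trained model as a drop-in replacement at inference, requiring no retraining and no extra forward passes.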