In this paper, we conduct an empirical study to provide a comprehensive understanding of classifier-free guidance, a key technique in conditional generation with denoising diffusion models. Unlike previous studies, we return to the fundamental classifier guidance, clarify the core assumptions of its derivation, and systematically study the role of the classifier. We find that both classifier guidance and classifier-free guidance achieve conditional generation by pushing denoising diffusion trajectories away from the decision boundary, where conditional information is usually entangled and difficult to learn. Based on this classifier-centric understanding, we propose a general postprocessing step, built on flow matching, that reduces the gap between the learned distribution of a pre-trained denoising diffusion model and the real data distribution, primarily around the decision boundary. Experiments on various datasets verify the effectiveness of the proposed approach.
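As background for readers less familiar with the technique, classifier-free guidance is commonly implemented by extrapolating from the unconditional noise prediction toward the conditional one at each denoising step. The sketch below illustrates this standard combination rule (it is generic background, not the postprocessing method proposed in this paper); the function name `cfg_noise` and the convention that `w = 1` recovers the purely conditional prediction are our own choices here, and some papers parameterize the same rule as `(1 + w) * eps_cond - w * eps_uncond`.

```python
import numpy as np

def cfg_noise(eps_uncond: np.ndarray, eps_cond: np.ndarray, w: float) -> np.ndarray:
    """Combine unconditional and conditional noise predictions.

    w = 0 -> unconditional prediction; w = 1 -> conditional prediction;
    w > 1 -> extrapolates past the conditional prediction, which (in the
    view taken by this paper) pushes trajectories further from the
    decision boundary between classes.
    """
    return eps_uncond + w * (eps_cond - eps_uncond)

# Toy example: with a large guidance scale, the combined prediction
# overshoots the conditional one in the direction away from eps_uncond.
eps_u = np.zeros(3)
eps_c = np.ones(3)
print(cfg_noise(eps_u, eps_c, 2.0))
```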