This paper analyzes the discrepancy between consistency distillation and consistency training, the two training methods for consistency models, and proposes a new method that bridges this gap to improve the performance and convergence speed of consistency training. A consistency model imitates the multi-step sampling of a score-based diffusion model with a single forward pass of a neural network. Whereas consistency distillation uses the true velocity field approximated by a pre-trained neural network, consistency training replaces it with a single-sample Monte Carlo estimate of that velocity field. The paper shows that the gap between the two methods caused by this estimation error persists, and to reduce it, proposes a new flow that transports noisy data toward the corresponding outputs of the consistency model. This flow is proven to reduce both the aforementioned gap and the noise-to-data transport cost.
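
For concreteness, the two objectives being contrasted are commonly written as follows, in the discretized variance-exploding formulation of Song et al. (2023); the symbols used in this sketch (the consistency model f_\theta with target copy f_{\theta^-}, the pre-trained score model s_\phi, the metric d, the weighting \lambda, and the time grid t_n < t_{n+1}) are standard notation assumed here rather than taken from the summary above.

% Consistency distillation (CD): the adjacent point on the probability-flow ODE
% trajectory is estimated with one Euler step of the pre-trained score model s_\phi.
\[
\hat{x}^{\phi}_{t_n} = x_{t_{n+1}} + (t_{n+1} - t_n)\, t_{n+1}\, s_\phi(x_{t_{n+1}}, t_{n+1}),
\qquad
\mathcal{L}_{\mathrm{CD}} = \mathbb{E}\big[\lambda(t_n)\, d\big(f_\theta(x_{t_{n+1}}, t_{n+1}),\, f_{\theta^-}(\hat{x}^{\phi}_{t_n}, t_n)\big)\big].
\]
% Consistency training (CT): the pre-trained velocity is replaced by the single-sample
% estimate z, so both points are built from the same data-noise pair (x, z).
\[
\mathcal{L}_{\mathrm{CT}} = \mathbb{E}\big[\lambda(t_n)\, d\big(f_\theta(x + t_{n+1} z,\, t_{n+1}),\, f_{\theta^-}(x + t_n z,\, t_n)\big)\big],
\qquad x \sim p_{\mathrm{data}},\; z \sim \mathcal{N}(0, I).
\]

The CT pair follows from the CD target by substituting the single-sample velocity estimate: conditioned on x, the score of x_t = x + t z is -(x_t - x)/t^2, so the Euler step lands exactly on x + t_n z. The estimation error discussed above is the difference between this conditional velocity and the true marginal velocity field that the pre-trained network approximates.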