Daily Arxiv

This page curates AI-related papers published worldwide.
All summaries are generated with Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please credit the source when sharing.

Improving Consistency Models with Generator-Augmented Flows

Created by
  • Haebom

Authors

Thibaut Issenhuth, Sangchul Lee, Ludovic Dos Santos, Jean-Yves Franceschi, Chansoo Kim, Alain Rakotomamonjy

Outline

This paper analyzes the difference between consistency distillation and consistency training, the two training methods for consistency models, and proposes a new approach that bridges this difference to improve the performance and convergence speed of consistency training. A consistency model imitates the multi-step sampling of a score-based diffusion model with a single forward pass of a neural network. Whereas consistency distillation relies on the true velocity field approximated by a pre-trained neural network, consistency training uses a single-sample Monte Carlo estimate of that velocity field. The paper shows that the gap between the two methods caused by this estimation error persists, and to mitigate it, proposes a novel flow that transports noisy data toward the output of the consistency model itself. This flow is proven to reduce both the aforementioned gap and the noise-to-data transport cost.
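
To make the contrast concrete, below is a minimal sketch (not the authors' code) of how the second point used in a consistency loss could be constructed in each regime. It assumes a linear, flow-matching-style interpolation between data and noise; the function names, the schedule, and the `model(x_t, t)` signature are illustrative assumptions, not taken from the paper.

```python
import torch

def consistency_training_pair(x0, t, dt):
    """Standard consistency training: the same (x0, noise) pair defines both points,
    i.e. a single-sample Monte Carlo estimate of the velocity field."""
    noise = torch.randn_like(x0)
    x_t = (1 - t) * x0 + t * noise            # noisy point at time t
    x_prev = (1 - (t - dt)) * x0 + (t - dt) * noise  # one step toward the data
    return x_t, x_prev

def generator_augmented_pair(model, x0, t, dt):
    """Hypothetical generator-augmented variant: the adjacent point is built from
    the consistency model's own output instead of the raw data sample."""
    noise = torch.randn_like(x0)
    x_t = (1 - t) * x0 + t * noise
    with torch.no_grad():
        x0_gen = model(x_t, t)                # generator's estimate of the clean sample
    x_prev = (1 - (t - dt)) * x0_gen + (t - dt) * noise
    return x_t, x_prev
```

In the second function the model's own output replaces the raw data sample when forming the interpolant, which is the sense in which the proposed flow passes noisy data to the output of the consistency model.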

Takeaways, Limitations

Takeaways:
  • Faster convergence of consistency training.
  • Overall performance improvement of consistency training.
  • A deeper theoretical understanding of the difference between consistency distillation and consistency training.
  • A novel, efficient flow-based training method for consistency models.
Limitations:
  • Additional experiments are needed to evaluate the generalization performance of the proposed method.
  • Performance evaluation on a wider range of datasets and model architectures is needed.
  • An analysis of the computational cost of the proposed flow is needed.