Morse: Dual-Sampling for Lossless Acceleration of Diffusion Models
Created by
Haebom
Author
Chao Li, Jiawei Fan, Anbang Yao
Outline
In this paper, we present Morse, a simple dual-sampling framework for lossless acceleration of diffusion models. Morse reformulates the iterative generation process (from noise to data) by combining fast jump sampling with an adaptive residual feedback strategy, realized through two interacting models called Dash and Dot. The Dash model is simply a pre-trained diffusion model, but it runs in a jump-sampling regime, which creates room for improved sampling efficiency. The Dot model is significantly faster than the Dash model and is trained to generate residual feedback conditioned on the observation at the current jump-sampling point on the Dash model's trajectory, lifting the noise estimate so that it closely matches what the Dash model would have produced at the next step without jump sampling. By chaining the outputs of the Dash and Dot models in a time-interleaved manner, Morse flexibly attains the desired image generation quality while improving overall runtime efficiency. A weight-sharing strategy between the Dash and Dot models keeps the framework efficient in both training and inference. Morse achieves lossless speedups of 1.78x to 3.31x over nine baseline diffusion models on six image generation tasks, and it also generalizes to the Latent Consistency Model (LCM-SDXL) tuned for few-step text-to-image synthesis. The code and models are available at https://github.com/deep-optimization/Morse .
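To make the Dash/Dot interplay concrete, below is a minimal Python sketch of a Morse-style dual-sampling loop. This is not the authors' implementation: `dash`, `dot`, `step`, the jump size, and the residual-combination rule are all hypothetical stand-ins for the pre-trained diffusion model, the lightweight feedback model, and the solver update described in the paper.

```python
import torch

@torch.no_grad()
def morse_sample(dash, dot, step, x, timesteps, jump=4):
    """Illustrative Morse-style dual-sampling loop (a sketch, not the
    official implementation).

    dash: the pre-trained diffusion model (expensive), called once per jump.
    dot:  a much faster model producing residual feedback for skipped steps.
    step: one solver update given a noise estimate (signature is assumed).
    """
    i = 0
    while i < len(timesteps):
        t = timesteps[i]
        eps_dash = dash(x, t)        # one expensive Dash call at the jump point
        x = step(x, eps_dash, t)     # solver update at the jump point
        # Dot fills in the skipped steps: conditioned on the current
        # jump-point observation, it predicts a residual that lifts the
        # cached Dash estimate toward the next-step Dash estimate.
        for t_mid in timesteps[i + 1 : i + jump]:
            eps = eps_dash + dot(x, t_mid, eps_dash)   # residual feedback
            x = step(x, eps, t_mid)
        i += jump
    return x
```

In this reading, the speedup comes from evaluating the expensive Dash model only once per jump, while the cheap Dot model approximates the intermediate noise estimates; the jump size would trade off runtime against fidelity.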