Daily Arxiv

This page collects papers on artificial intelligence published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright of each paper belongs to its authors and their institutions; when sharing, please cite the source.

Diffusion Generative Models Meet Compressed Sensing, with Applications to Imaging and Finance

Created by
  • Haebom

Author

Zhengyi Guo, Jiatu Li, Wenpin Tang, David D. Yao

Outline

In this study, we develop a dimensionality-reduction technique to accelerate diffusion-model inference in the context of synthetic data generation. The idea is to integrate compressed sensing into diffusion models (CSDM). First, data are compressed into a latent space, and a diffusion model is trained in that latent space. Generated latent samples are then decoded back into the original space using a compressed-sensing algorithm. The goal is to improve the efficiency of both model training and inference. Under suitable data-sparsity assumptions, the proposed approach achieves provably faster convergence by combining diffusion-model inference with sparse recovery, and it also yields insights into the optimal choice of the latent-space dimension. To demonstrate the effectiveness of this approach, numerical experiments are conducted on various datasets, including handwritten digits, medical and climate images, and financial time series for stress testing.
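The pipeline described above can be sketched in a minimal form. This is an illustrative toy, not the paper's implementation: the "latent sample" here is just a random linear compression of a hand-made sparse signal (standing in for a sample a latent diffusion model would generate), and the decoding step uses ISTA, a classic sparse-recovery algorithm; all dimensions and parameters are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: original space n, latent space m << n, sparsity k.
n, m, k = 128, 64, 5

# Hypothetical k-sparse ground-truth signal standing in for a data sample.
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.normal(size=k)

# Step 1: compress into the latent space with a random Gaussian
# measurement matrix (a standard compressed-sensing choice).
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true  # latent representation

# Step 2 (not shown): train a diffusion model on such latent vectors and
# sample from it. Here we simply reuse y as if it were a generated sample.

# Step 3: decode the latent sample back into the original space with ISTA,
# which approximately solves min_x 0.5*||A x - y||^2 + lam*||x||_1.
def ista(A, y, lam=0.01, n_iters=2000):
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz const. of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        z = x - step * (A.T @ (A @ x - y))          # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x

x_hat = ista(A, y)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {rel_err:.3f}")
```

With m well above the sparsity level k, the decoded signal closely matches the original, which is the mechanism that lets the diffusion model operate in a much smaller latent space without losing the generated content.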

Takeaways, Limitations

Takeaways:
  • Integrates compressed sensing into diffusion models to improve the efficiency of synthetic data generation.
  • Potential to speed up diffusion-model training and inference.
  • Provides guidance on choosing the latent-space dimension.

Limitations:
  • Depends on the data-sparsity assumption.
  • Lacks detail on specific algorithmic implementations and performance comparisons.
  • Validation is limited to the experimental datasets considered.