Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Understanding and Mitigating Memorization in Generative Models via Sharpness of Probability Landscapes

Created by
  • Haebom

Authors

Dongjae Jeon, Dueun Kim, Albert No

Outline

This paper presents a geometric framework for analyzing memorization in diffusion models through the sharpness of the log probability density. The authors mathematically justify the effectiveness of previously proposed score-difference-based memorization metrics and introduce a novel metric that captures sharpness during the early stages of image generation in latent diffusion models, providing an early signal of potential memorization. Building on this metric, they develop a mitigation strategy that optimizes the initial noise of the generation process using a sharpness-aware regularization term.
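The key idea, that sharpness of the log density can be probed using only the score function, can be illustrated with a minimal sketch. This is not the paper's implementation: the analytic Gaussian `score` stands in for a trained score network, and `sharpness` uses a Hutchinson-style trace estimator with finite differences, with all names and constants chosen here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a diffusion model's score network s(x) ~ grad log p(x).
# Here p is an isotropic Gaussian, so the true Hessian of log p is -I / SIGMA^2.
MU, SIGMA = np.array([1.0, -2.0]), 0.5

def score(x):
    # Analytic score of N(MU, SIGMA^2 I); a real model would evaluate a network.
    return (MU - x) / SIGMA**2

def sharpness(x, n_probes=64, eps=1e-3):
    """Hutchinson estimate of -tr(Hessian of log p) at x.

    Each Hessian-vector product is taken by finite differences of the score:
        H v ~ (s(x + eps*v) - s(x - eps*v)) / (2*eps)
    A large value means log p is strongly peaked (sharp) at x.
    """
    total = 0.0
    for _ in range(n_probes):
        v = rng.standard_normal(x.shape)
        hv = (score(x + eps * v) - score(x - eps * v)) / (2 * eps)
        total += v @ hv
    return -total / n_probes

# For this Gaussian the exact value is d / SIGMA^2 = 2 / 0.25 = 8 everywhere,
# so the Monte Carlo estimate should land near 8.
x0 = rng.standard_normal(2)
print(round(sharpness(x0), 2))
```

In the paper's setting, a quantity of this kind, evaluated early in the reverse process, serves as the memorization metric, and the mitigation strategy would then adjust the initial noise to keep that sharpness term small.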

Takeaways, Limitations

Takeaways:
Presents a new geometric framework, based on the sharpness of the log probability density, for analyzing memorization in diffusion models.
Provides mathematical justification for existing score-difference-based memorization metrics.
Proposes a new metric that captures memorization in the early stages of image generation.
Presents a sharpness-aware strategy for mitigating memorization.
Limitations:
Further research is needed to establish the generalizability of the proposed framework and metrics.
More extensive experiments are needed to evaluate the performance of the proposed mitigation strategy.
The approach may be limited to certain types of diffusion models.