Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please credit the source when sharing.

Understanding and Mitigating Memorization in Generative Models via Sharpness of Probability Landscapes

Created by
  • Haebom

Authors

Dongjae Jeon, Dueun Kim, Albert No

Outline

This paper presents a geometric framework for analyzing memorization in diffusion models through the sharpness of the log probability density. The authors mathematically justify the effectiveness of previously proposed score-difference-based memorization metrics and propose a new metric that captures sharpness in the early stages of image generation in latent diffusion models, providing an early signal of potential memorization. Building on this metric, they develop a mitigation strategy that optimizes the early noise of the generation process using a sharpness-aware regularization term. The code is publicly available at https://github.com/Dongjae0324/sharpness_memorization_diffusion.
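To make the idea concrete, here is a minimal PyTorch-style sketch (not the authors' released implementation) of how sharpness can be probed with a pretrained score network. It assumes a function `score_fn(x, t)` that approximates ∇_x log p_t(x); since the Hessian of log p_t is the Jacobian of the score, its trace can be estimated with Hutchinson-style Hessian-vector products without ever forming the full Hessian.

```python
import torch

def sharpness_estimate(score_fn, x, t, n_probes=8, create_graph=False):
    """Hutchinson estimate of -tr(Hessian of log p_t) at x; larger = sharper mode."""
    if not x.requires_grad:
        x = x.detach().requires_grad_(True)
    trace = x.new_zeros(())
    for _ in range(n_probes):
        v = torch.randint_like(x, 0, 2) * 2 - 1      # Rademacher probe vector
        score = score_fn(x, t)                       # approx. grad_x log p_t(x)
        # Hessian-vector product: grad_x of <v, score(x)> = (Hessian of log p_t) @ v
        (hvp,) = torch.autograd.grad((score * v).sum(), x, create_graph=create_graph)
        trace = trace + (v * hvp).sum() / n_probes
    return -trace  # sharply peaked (memorization-prone) regions give large values
```

The mitigation idea, adjusting the early noise with a sharpness-aware regularization term, could then look roughly like the following. The helper `denoise_to`, the weight `lam`, and the optimization loop are illustrative assumptions, not the paper's exact procedure; they require the partial denoising pass and the score network to be differentiable end to end.

```python
def regularize_initial_noise(score_fn, denoise_to, z_init, t_early,
                             lam=0.1, steps=20, lr=1e-2):
    """Nudge the starting noise away from sharp (memorization-prone) regions."""
    z = z_init.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        x_early = denoise_to(z, t_early)   # partially denoise z up to an early timestep
        loss = lam * sharpness_estimate(score_fn, x_early, t_early, create_graph=True)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()
```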

Takeaways, Limitations

Takeaways:
  • Presents a geometric framework for analyzing memorization in diffusion models.
  • Provides mathematical justification for existing score-difference-based memorization metrics.
  • Proposes a new metric that detects memorization in the early stages of image generation.
  • Introduces a sharpness-aware mitigation strategy and releases the code publicly.
Limitations:
  • The generalization of the proposed framework and metrics requires further study.
  • Additional experimental validation is needed across diverse diffusion models and datasets.
  • A deeper analysis of the effectiveness and limits of the proposed mitigation strategy is needed.