Daily Arxiv

This page collects papers on artificial intelligence published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; when sharing, please cite the source.

Neon: Negative Extrapolation From Self-Training Improves Image Generation

Created by
  • Haebom

Author

Sina Alemohammad, Zhangyang Wang, Richard G. Baraniuk

Outline

This paper introduces Neon (Negative Extrapolation from Self-Training), a training method proposed to address the scarcity of high-quality training data. Neon targets model autophagy disorder (MAD), the quality degradation that arises when a generative model is fine-tuned on its own synthesized data. Rather than keeping the self-trained weights, Neon extrapolates the base model's weights in the direction opposite to the update produced by self-training, which improves sample quality. Neon is applicable to a variety of architectures and datasets and achieves strong performance while requiring minimal additional training compute.
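A minimal sketch of the weight-space operation described above, assuming the extrapolation has the form theta_neon = theta_base + w * (theta_base - theta_self), i.e., the merged weights move away from the self-trained checkpoint. The coefficient name `w` and the helper `neon_extrapolate` are illustrative, not the paper's actual API.

```python
import copy
import torch

def neon_extrapolate(base_model, self_trained_model, w=0.5):
    """Sketch of negative extrapolation: push the base weights away from the
    weights obtained by fine-tuning the model on its own samples.

    Assumes theta_neon = theta_base + w * (theta_base - theta_self);
    `w` and this function are illustrative names, not from the paper.
    """
    neon_model = copy.deepcopy(base_model)
    base_sd = base_model.state_dict()
    self_sd = self_trained_model.state_dict()

    neon_sd = {}
    for name, p_base in base_sd.items():
        if torch.is_floating_point(p_base):
            # Extrapolate floating-point parameters away from the self-trained weights.
            neon_sd[name] = p_base + w * (p_base - self_sd[name])
        else:
            # Keep integer buffers (e.g., batch-norm counters) unchanged.
            neon_sd[name] = p_base

    neon_model.load_state_dict(neon_sd)
    return neon_model
```

Because the operation is a single weight merge between two checkpoints, it adds essentially no training cost beyond the (short) self-training run itself.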

Takeaways, Limitations

Takeaways:
Presents a novel training method that effectively improves generative model performance by leveraging the model's own self-synthesized data.
Prevents sample quality degradation by addressing the model autophagy disorder (MAD) problem.
Applicable to various architectures and datasets, achieving strong performance with minimal additional training resources.
Achieves a new state-of-the-art FID on ImageNet 256x256.
Limitations:
No Limitations are explicitly stated in the paper.