This guest column was written by Professor Lim Seong-bin of Korea University.
Summary
• Development of Generative AI: Generative AI is evolving rapidly and can now produce high-quality data in many forms, such as video, text, and audio.
• How generative models work: Generative modeling is defined as sampling unobserved data from a probability distribution, so understanding probability distributions and sampling is essential.
• GAN and VAE: GANs are built on a game-theoretic competition between a generator and a discriminator, while VAEs compress data into a latent probability space and reconstruct it from there.
• Diffusion Model: Diffusion models rest on mathematical foundations such as probability theory and stochastic differential equations; their main goal is to learn a function that maps from noise space to data space.
• DAE (Denoising AutoEncoder): A DAE adds noise to data and learns to recover the original, making it useful for extracting robust representations.
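The "sampling unobserved data from a probability distribution" view of generative modeling can be sketched in a few lines. This is a minimal, illustrative example, not from the original column: the "model" is just a one-dimensional Gaussian whose parameters are estimated from data, and all numbers are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed data (illustrative: drawn from a known Gaussian).
train_data = rng.normal(loc=5.0, scale=2.0, size=10_000)

# "Training": estimate the parameters of the data distribution.
mu, sigma = train_data.mean(), train_data.std()

# "Generation": sample new, unobserved data from the learned distribution.
samples = rng.normal(loc=mu, scale=sigma, size=1_000)
```

Real generative models replace the Gaussian with a far more expressive learned distribution, but the interface is the same: fit a distribution, then sample from it.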
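The VAE's compress-and-reconstruct step can be sketched with the reparameterization trick. This is a hypothetical sketch: the linear encoder/decoder weights below are random placeholders standing in for trained networks, and the dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

data_dim, latent_dim = 8, 2
x = rng.normal(size=data_dim)  # one data point

# Placeholder "networks": random linear maps, not trained parameters.
W_enc = 0.1 * rng.normal(size=(2 * latent_dim, data_dim))
W_dec = 0.1 * rng.normal(size=(data_dim, latent_dim))

# Encoder: compress x into the parameters of a latent Gaussian.
h = W_enc @ x
mu, log_var = h[:latent_dim], h[latent_dim:]

# Reparameterization trick: z = mu + sigma * eps keeps sampling differentiable.
eps = rng.normal(size=latent_dim)
z = mu + np.exp(0.5 * log_var) * eps

# Decoder: reconstruct the data from the latent code.
x_hat = W_dec @ z
reconstruction_loss = np.mean((x - x_hat) ** 2)
```

Training would minimize this reconstruction loss plus a KL term that keeps the latent distribution close to a standard Gaussian.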
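The "noise space to data space" goal of diffusion models starts from a forward process that gradually destroys data; the model then learns the reverse map. Below is a sketch of only the forward (noising) half, using a linear beta schedule and step count that are illustrative choices, not values from the column.

```python
import numpy as np

rng = np.random.default_rng(2)

T = 1000
betas = np.linspace(1e-4, 0.02, T)       # noise schedule (illustrative)
alphas_bar = np.cumprod(1.0 - betas)     # cumulative signal fraction

x0 = rng.normal(loc=3.0, size=16)        # a "clean" data sample

def q_sample(x0, t):
    """Closed-form forward step: x_t = sqrt(a_bar_t)*x0 + sqrt(1-a_bar_t)*eps."""
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

x_early = q_sample(x0, 10)      # still close to the data
x_late = q_sample(x0, T - 1)    # almost pure Gaussian noise
```

At the final step almost no signal remains, which is exactly why generation can start from pure noise and run the learned process in reverse.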
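The DAE objective, corrupt the input and train a model to recover the clean original, can be sketched with a deliberately simple stand-in: here the "model" is a linear map fitted by least squares rather than a neural network, and the data sizes and noise level are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

n, d = 2000, 5
clean = rng.normal(size=(n, d))                 # original data
noisy = clean + 0.5 * rng.normal(size=(n, d))   # corrupted inputs

# "Training": find W minimizing ||noisy @ W - clean||^2 (a linear stand-in
# for the neural denoiser a real DAE would learn).
W, *_ = np.linalg.lstsq(noisy, clean, rcond=None)

denoised = noisy @ W
mse_before = np.mean((noisy - clean) ** 2)   # error of the corrupted input
mse_after = np.mean((denoised - clean) ** 2) # error after denoising
```

Even this linear denoiser reduces the reconstruction error; a neural DAE does the same job with a much richer function class, and the features it learns along the way are robust to input noise.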
Questions to think about
• The future of generative AI: Given the rapid progress of generative AI technology, which fields might it be applied to next?
• Application of diffusion models: Given their mathematical complexity, what are the pros and cons of using diffusion models in real-world industries?
• Quality and diversity of data: How do technologies such as GANs, VAEs, and diffusion models ensure the quality and diversity of generated data, and how can these attributes be assessed?