IEMS 490: Deep Generative AI


Prerequisites

Graduate standing or permission of the instructor

Description

Course Overview

This course provides a comprehensive introduction to diffusion models and flow models for generative AI, covering both theoretical foundations and methodological advancements. The course is divided into two parts:

  1. Theoretical Foundations: Mathematical and probabilistic principles underlying diffusion models and flow-based models, including Variational Autoencoders (VAE), Denoising Diffusion Probabilistic Models (DDPM), Denoising Diffusion Implicit Models (DDIM), and flow matching.

  2. Methodologies & Applications: Key techniques such as classifier-free guidance, fine-tuning, and cross-domain applications in image, text, and reinforcement learning settings.

Week-by-Week Breakdown

Part 1: Theoretical Foundations

Week 1: Introduction & Background

  • Overview of generative models: GANs, VAEs, LLMs, and diffusion/flow models

  • Motivation for diffusion and flow models

  • Key mathematical tools: conditional probability, probability flow, statistical estimation

Week 2: Variational Autoencoders (VAE) and Denoising Diffusion Probabilistic Models (DDPM)

  • ELBO and variational inference

  • Connection between VAEs and score-based models

  • Forward and reverse processes
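
To make the forward and reverse processes concrete, the sketch below draws x_t from the closed-form forward distribution q(x_t | x_0) and trains a network to predict the added noise. This is a minimal PyTorch illustration only; the linear beta schedule, the (batch, dim) data shape, and the model(x_t, t) signature are assumptions, not the course's prescribed implementation.

    import torch

    T = 1000
    betas = torch.linspace(1e-4, 0.02, T)            # linear variance schedule beta_t (assumed)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)        # \bar{alpha}_t = prod_{s<=t} alpha_s

    def q_sample(x0, t, noise):
        # Closed-form forward process: x_t ~ q(x_t | x_0); x0 assumed shaped (batch, dim).
        ab = alpha_bars[t].view(-1, 1)
        return ab.sqrt() * x0 + (1.0 - ab).sqrt() * noise

    def ddpm_loss(model, x0):
        # Simplified training objective: predict the noise that was added.
        t = torch.randint(0, T, (x0.shape[0],))
        noise = torch.randn_like(x0)
        x_t = q_sample(x0, t, noise)
        return torch.mean((model(x_t, t) - noise) ** 2)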

Week 3: Interpretation of DDPM and Denoising Diffusion Implicit Models (DDIM)

  • Variance schedules and training objectives

  • Interpretation as a Markov chain

  • Differences between DDIM and DDPM
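
As a companion to the DDPM sketch above, the following hedged example shows a single deterministic DDIM update (eta = 0): it reuses an epsilon-prediction network trained with the DDPM objective but can jump between an arbitrary pair of timesteps, which is the practical difference between the two samplers. The eps_model interface and timestep handling are illustrative assumptions.

    import torch

    @torch.no_grad()
    def ddim_step(eps_model, x_t, t, t_prev, alpha_bars):
        # One deterministic DDIM update (eta = 0) between timesteps t and t_prev.
        ab_t, ab_prev = alpha_bars[t], alpha_bars[t_prev]
        eps = eps_model(x_t, torch.full((x_t.shape[0],), t, dtype=torch.long))
        x0_pred = (x_t - (1.0 - ab_t).sqrt() * eps) / ab_t.sqrt()      # predicted clean sample
        return ab_prev.sqrt() * x0_pred + (1.0 - ab_prev).sqrt() * eps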

Week 4: Continuous-Time Description of DDPM and DDIM

  • Score-based models and SDE interpretation

  • Sampling by discretizing SDE and ODE
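
One way to preview the continuous-time view is the probability-flow ODE of the variance-preserving SDE, sampled here with a plain Euler discretization. The linear beta(t) schedule and the score_model interface are assumptions made only for this sketch.

    import torch

    @torch.no_grad()
    def probability_flow_sample(score_model, shape, n_steps=500, beta_min=0.1, beta_max=20.0):
        # Euler discretization of the VP probability-flow ODE, integrated from t = 1 toward t = 0.
        x = torch.randn(shape)                          # start from the Gaussian prior
        ts = torch.linspace(1.0, 1e-3, n_steps)
        dt = ts[1] - ts[0]                              # negative: we integrate backward in time
        for t in ts:
            beta_t = beta_min + t * (beta_max - beta_min)          # linear beta(t) (assumed)
            score = score_model(x, t * torch.ones(shape[0]))       # approx. of grad_x log p_t(x)
            drift = -0.5 * beta_t * x - 0.5 * beta_t * score       # f(x, t) - (1/2) g(t)^2 * score
            x = x + drift * dt
        return x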

Week 5: Flow-Based Models & Flow Matching

  • Normalizing flows and invertible networks

  • Continuous-time flow matching (FM)

  • Comparing diffusion models and flow-based approaches
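
A minimal sketch of conditional flow matching with straight-line (optimal-transport-style) interpolation paths: the network regresses the constant velocity x1 - x0 along x_t = (1 - t) x0 + t x1. The velocity-network signature v_model(x, t) and the (batch, dim) data shape are illustrative assumptions.

    import torch

    def cfm_loss(v_model, x1):
        # Conditional flow matching with straight-line paths; x1 assumed shaped (batch, dim).
        x0 = torch.randn_like(x1)                       # sample from the standard normal prior
        t = torch.rand(x1.shape[0], 1)                  # t ~ Uniform(0, 1), broadcast over features
        x_t = (1.0 - t) * x0 + t * x1                   # linear interpolation path
        target = x1 - x0                                # constant velocity of that path
        return torch.mean((v_model(x_t, t) - target) ** 2)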


Part 2: Methodologies & Applications

Week 6: Guidance in Diffusion/Flow Models

  • Classifier guidance vs. classifier-free guidance

  • Trade-offs between sample quality and diversity

  • Universal guidance methods for diffusion/flow models
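
At sampling time, classifier-free guidance reduces to mixing conditional and unconditional noise predictions with a guidance weight w, as in this small sketch; the eps_model(x, t, cond) signature and the null-condition token are assumptions for illustration.

    def cfg_eps(eps_model, x_t, t, cond, null_cond, w=3.0):
        # Classifier-free guidance: w = 0 recovers the unconditional prediction;
        # larger w trades sample diversity for fidelity to the condition.
        eps_uncond = eps_model(x_t, t, null_cond)       # condition dropped (null token, assumed)
        eps_cond = eps_model(x_t, t, cond)              # true condition
        return eps_uncond + w * (eps_cond - eps_uncond)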

Week 7: Fine-Tuning

  • LoRA and DreamBooth fine-tuning

  • Domain-specific adaptation of diffusion models

  • Low-shot and zero-shot generation
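
A minimal sketch of a LoRA adapter wrapped around a frozen linear layer: the pretrained weight is kept fixed and only a low-rank update B A, scaled by alpha / r, is trained. The rank, scaling, and initialization shown here are common defaults, not a prescription.

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        # Wrap a pretrained nn.Linear and learn only a low-rank update B @ A.
        def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 8.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad_(False)                 # pretrained weights stay frozen
            self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, r))   # zero init: no change at start
            self.scale = alpha / r

        def forward(self, x):
            return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())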

Week 8: Applications in Sequential Data

  • Modeling sequential data and capturing spatiotemporal dynamics

  • Sequential data imputation and forecasting
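
One common conditioning strategy for imputation, shown here only as an illustration, is to run an ordinary reverse sampler while repeatedly overwriting the observed entries with a forward-noised copy of the observations, so that only the missing entries are generated. The reverse_step helper, the mask convention (1 = observed), and the noise schedule are assumptions of this sketch.

    import torch

    @torch.no_grad()
    def impute(reverse_step, x_obs, mask, alpha_bars, T):
        # mask = 1 where values are observed, 0 where they are missing (assumed convention).
        x = torch.randn_like(x_obs)
        for t in reversed(range(T)):
            ab = alpha_bars[t]
            noisy_obs = ab.sqrt() * x_obs + (1.0 - ab).sqrt() * torch.randn_like(x_obs)
            x = mask * noisy_obs + (1.0 - mask) * x     # keep observed entries on the forward track
            x = reverse_step(x, t)                      # ordinary DDPM/DDIM reverse update (assumed)
        return mask * x_obs + (1.0 - mask) * x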

Week 9: Applications in Reinforcement Learning

  • Reinforcement learning via trajectory modeling with diffusion/flow

  • Trajectory optimization with diffusion/flow
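
A hedged sketch of Diffuser-style trajectory optimization: a whole trajectory is treated as the sample, and each reverse step is nudged by the gradient of a learned return estimate, i.e. classifier-style guidance applied to trajectories. The reverse_step and return_model helpers and the guidance scale are illustrative assumptions.

    import torch

    def guided_trajectory_sample(reverse_step, return_model, shape, T, scale=0.1):
        # traj holds a whole trajectory per sample: (batch, horizon, state_dim + action_dim).
        traj = torch.randn(shape)
        for t in reversed(range(T)):
            traj = traj.detach().requires_grad_(True)
            reward = return_model(traj, t).sum()                    # learned return estimate (assumed)
            grad = torch.autograd.grad(reward, traj)[0]
            with torch.no_grad():
                traj = reverse_step(traj, t) + scale * grad         # nudge toward high-return trajectories
        return traj.detach()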

Week 10: Future Directions & Open Problems

  • Discrete diffusion models and language generation

  • Robustness in fine-tuning and connection with distributionally robust optimization

  • Diffusion language models