COMP_SCI 496: Generative Deep Models

Quarter Offered

Spring: 4:00-6:50 W; Pardo


Registration is by instructor permission only. Students interested in taking the class should contact the instructor. Note: prior experience with the algorithms underlying machine learning (especially deep learning) is necessary. Example prior coursework includes Prof. Han Liu’s Statistical Machine Learning course and Prof. Bryan Pardo’s Deep Learning course. This course is designed for doctoral students; appropriately prepared BS and MS students may also be admitted once doctoral-student demand has been met.


Deep learning is a branch of machine learning based on algorithms that model high-level, abstract representations of data using multiple processing layers. One of the most exciting areas of research in deep learning is generative models. Today’s generative models create text documents, write songs, make paintings and videos, and generate speech. This course is dedicated to understanding the inner workings of the technologies that underlie these advances. Students will learn about key methodologies, including Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), and Transformer-based language models. This is an advanced course that presumes a good working understanding of traditional supervised neural network technology and techniques (e.g., convolutional networks, LSTMs, loss functions, regularization, gradient descent).
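As a taste of the kind of model the course begins with, a linear autoencoder can be sketched in a few lines of NumPy. This is an illustrative sketch only, not course code; the synthetic data, the 8-to-2 bottleneck, and the learning rate are all assumptions chosen for the example:

```python
# Minimal linear autoencoder sketch (illustrative; all hyperparameters
# here are assumptions, not course material).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 2-D latent factors linearly embedded in 8 dimensions.
Z_true = rng.normal(size=(256, 2))
embed = rng.normal(size=(2, 8))
X = Z_true @ embed

# Encoder and decoder weights: 8 -> 2 -> 8 (linear, for simplicity).
W_enc = rng.normal(scale=0.1, size=(8, 2))
W_dec = rng.normal(scale=0.1, size=(2, 8))

lr = 0.05
losses = []
for _ in range(300):
    Z = X @ W_enc              # encode into the 2-D bottleneck
    X_hat = Z @ W_dec          # decode back to 8 dimensions
    err = X_hat - X
    losses.append(np.mean(err ** 2))  # reconstruction loss (MSE)
    # Gradients of the mean squared error w.r.t. both weight matrices.
    n = X.size
    grad_dec = Z.T @ err * (2 / n)
    grad_enc = X.T @ (err @ W_dec.T) * (2 / n)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print(f"reconstruction loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Because the network is trained only to reconstruct its input, no labels are needed; the VAE and GAN variants covered later in the course build generative capability on top of this basic encode/decode idea.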

REQUIRED TEXTBOOK: Advanced research papers in the field.

REFERENCE TEXTBOOKS: None required for purchase; the focus will be on research papers published in the field.


COURSE GOALS: The goal of this course is to familiarize graduate students (and advanced undergraduates) with the current state of the art in machine generation of speech, music, still images, and video using deep learning architectures. Students will read recently published papers in the field and become well informed on at least one sub-field within this area.


What follows is an example syllabus. As topics of current interest in the field shift, course content will vary to reflect research trends.

Week 1: Autoencoders

Week 2: Variational Autoencoders (VAEs)

Week 3: Conditional VAEs and Wasserstein VAEs

Week 4: Generative Adversarial Networks (GANs)

Week 5: Conditional GANs

Week 6: Transformers & Language Modeling

Week 7: Recent Transformer Architectures (BERT, GPT-3)

Week 8: Creative Applications of Generative Models

Week 9: Societal Impacts (Deep Fakes, Adversarial Examples)

Week 10: Student Project Presentations


GRADING:

Presentation on topic (30%)

Research paper synopses (30%)

Final project (30%)

Class participation (10%)

COURSE OBJECTIVES: When a student completes this course, they should:

  • have a general understanding of the current state of the art in generative models.
  • be able to distill large amounts of research into coherent summaries.
  • be able to think critically about work in the field.