EECS 495: Deep Learning

Quarter Offered

Spring: 2-5 F; Pardo

Prerequisites

Graduate standing, permission of the instructor, and Machine Learning (EECS 349) or a similar course.

Description

Deep learning is a branch of machine learning based on algorithms that learn high-level, abstract representations of data by passing it through multiple processing layers with complex structures. Some representations make it easier to learn tasks (e.g., face recognition or spoken word recognition) from examples. One of the promises of deep learning is replacing handcrafted features with efficient algorithms for unsupervised or semi-supervised feature learning and hierarchical feature extraction.
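
To make the idea of stacked processing layers concrete, here is a minimal sketch (not part of the course materials) of a two-layer network in NumPy. The layer sizes, random weights, and input data are placeholders chosen for illustration, but the structure shows how each layer re-represents the output of the one before it.

# A two-layer feedforward pass in NumPy; weights are random placeholders.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Hypothetical input: a batch of 4 flattened 8x8 grayscale images.
x = rng.random((4, 64))

# Layer 1: maps raw pixels to a lower-dimensional intermediate representation.
W1, b1 = rng.normal(size=(64, 32)), np.zeros(32)
h1 = relu(x @ W1 + b1)

# Layer 2: combines layer-1 features into a more abstract representation.
W2, b2 = rng.normal(size=(32, 10)), np.zeros(10)
h2 = relu(h1 @ W2 + b2)

print(h2.shape)  # (4, 10): each input is now a 10-dimensional learned representation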

In this course students will study deep learning architectures such as restricted Boltzmann machines, deep neural networks, convolutional deep neural networks, deep belief networks and recurrent neural networks. They will read original research papers that describe the algorithms and how they have been applied to fields like computer vision, automatic speech recognition, and audio event recognition.

REQUIRED TEXTBOOK: Advanced research papers in the field.

REFERENCE TEXTBOOKS (not required purchases): The focus will be on research papers published in the field.

COURSE COORDINATOR: Prof. Bryan Pardo

COURSE GOALS: The goal of this course is to familiarize graduate students (and advanced undergraduates) with the current state of the art in machine perception of images and sound using Deep Learning architectures. Students will read recently published papers in the field and become well informed on at least one sub-field within this area.

DETAILED COURSE TOPICS:

What follows is an example syllabus. As topics of current interest in the field shift, course content will vary to reflect research trends.

  • Week 1: Perceptrons, Multilayer Perceptrons
  • Weeks 2-3: Representations of images and audio: spectrograms, cepstrograms, bitmaps, Fourier transforms (see the spectrogram sketch after this list)
  • Week 4: Boltzmann Machines and Restricted Boltzmann Machines
  • Week 5: Deep Belief Networks
  • Week 6: Convolutional Deep Networks
  • Week 7: Recurrent Networks
  • Week 8: Long Short-Term Memory (LSTM) Networks
  • Week 9: Applications in Audio Processing
  • Week 10: Applications in Image Processing
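
To illustrate the audio representations covered in Weeks 2-3, the following is a minimal sketch of the short-time Fourier transform behind a magnitude spectrogram, written in plain NumPy. The sample rate, chirp test signal, frame length, and hop size are arbitrary choices for this example and are not taken from the course.

# Magnitude spectrogram of a synthetic chirp via a short-time Fourier transform.
import numpy as np

sr = 8000                                          # sample rate in Hz (illustrative)
t = np.arange(sr) / sr                             # one second of audio
signal = np.sin(2 * np.pi * (200 + 300 * t) * t)   # a simple rising chirp

frame_len, hop = 256, 128
window = np.hanning(frame_len)

frames = []
for start in range(0, len(signal) - frame_len + 1, hop):
    frame = signal[start:start + frame_len] * window
    # Magnitude of the FFT of each windowed frame = one spectrogram column.
    frames.append(np.abs(np.fft.rfft(frame)))

spectrogram = np.stack(frames, axis=1)             # shape: (frequency bins, time frames)
print(spectrogram.shape)                           # (129, 61)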

ASSIGNMENTS:

  • Presentation on topic (30%)
  • Research paper synopses (30%)
  • Report on research area (30%)
  • Class participation (10%)

COURSE OBJECTIVES: When a student completes this course, they should:

  • Have a general understanding of the current state of the art in machine perception of sound and images using Deep Learning.
  • Be able to distill large amounts of research into coherent summaries.
  • Be able to think critically about work in the field.