EECS 496: Topics in Theoretical Machine Learning

Quarter Offered

Winter: 1-4 F; Vijayaraghavan


You must have a solid background in multivariate calculus, linear algebra, basic probability, and algorithms, along with general mathematical maturity and comfort with mathematical writing (e.g., mathematical arguments, derivations, and proofs). This is a theory course targeted at PhD students, so you will be expected to understand and produce mathematical arguments and proofs. It is highly recommended that you have taken Graduate Algorithms (EECS 495/496), Computational Learning Theory (EECS 395), or a similar course. Undergraduate and masters students should get permission from the course instructor before registering for the course.


This course is an advanced graduate-level seminar on topics in theoretical machine learning. It is ideal for graduate students and senior undergraduates who are theoretically inclined and want to learn more about research challenges in the field of machine learning. The goal of this course is to learn tools and techniques for designing algorithms with provable guarantees for basic tasks in unsupervised learning. Topics include learning probabilistic models, high-dimensional data analysis, clustering, mixture models, matrix completion, and dictionary learning, drawing on tools such as spectral methods, tensor decompositions, convex relaxations, and non-convex optimization.

INSTRUCTOR: Prof. Aravindan Vijayaraghavan


Students in this seminar will read and present research papers on topics covered in this course. Readings will be assigned from notes, books, and research papers available on the web.


RELATED COURSES:

Ankur Moitra's course at MIT:
Daniel Hsu's course at Columbia: