EECS 395, 495: Special Topics in Machine Learning

Quarter Offered

Winter: 3:30-4:50 TuTh; Downey


PREREQUISITES: EECS 349 or permission of the instructor


In Winter 2017, this course will cover topics in machine learning for statistical language modeling. Statistical language models assign probabilities to sequences of words and are used in systems that perform speech recognition, machine translation, and many other tasks. In recent years, deep neural networks have radically improved language models. This course will cover both the fundamental techniques behind statistical language models, from classical n-gram models to recent memory-based deep neural networks, and applications of these techniques to important tasks in artificial intelligence. Students will be required to read and present research papers and to complete a substantial course project.
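To make the core idea concrete, here is a minimal sketch of the simplest model the course starts from: a bigram (2-gram) language model estimated by maximum likelihood from a toy corpus. The corpus and function names are illustrative, not part of the course materials, and real systems would add smoothing and sentence-boundary handling.

```python
from collections import Counter

# Toy corpus; any tokenized text would work the same way.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count unigrams and adjacent word pairs (bigrams).
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def bigram_prob(w_prev, w):
    """P(w | w_prev) estimated by maximum likelihood from the corpus."""
    return bigrams[(w_prev, w)] / unigrams[w_prev]

def sequence_prob(words):
    """Probability of a word sequence under the bigram model
    (boundary symbols and smoothing omitted for simplicity)."""
    p = 1.0
    for prev, cur in zip(words, words[1:]):
        p *= bigram_prob(prev, cur)
    return p

# "the" occurs 4 times, ("the", "cat") once, so P(cat | the) = 0.25;
# ("cat", "sat") once out of 1 "cat", so P(sat | cat) = 1.0.
print(sequence_prob("the cat sat".split()))  # → 0.25
```

The neural models covered later in the course (RNNs, LSTMs) replace these count-based conditional probabilities with probabilities computed by a learned network, which lets them condition on much longer histories than a fixed n-gram window.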

  • This course fulfills the AI Depth requirement.

TEXTBOOK: None (the course material will consist of research papers in the field).


COURSE GOALS: The goal of this course is to familiarize graduate students and advanced undergraduates with the current state-of-the-art in statistical language modeling. Students will read recently published papers in the field.


COURSE TOPICS: N-gram models, hidden Markov models, recurrent neural networks, long short-term memory (LSTM) networks, word embeddings.


GRADES:

  • Leading a paper discussion (30%)
  • Class participation (20%)
  • Project (50%)

COURSE OBJECTIVES: When a student completes this course, they should:

  • Have a general understanding of the current state-of-the-art in statistical language models.
  • Understand how at least one statistical language model is implemented and can be applied (via the course project).
  • Be able to understand, and think critically about, recent research papers in the field of statistical language modeling.