COMP_SCI 496: Reliable Machine Learning with Unreliable Black-box Models



Prerequisites

Permission from the instructor (PhD students should be able to register without difficulty). Recommended: CS 336 (Algorithms) or a theory of machine learning course, and a basic probability/statistics course.

Description

Modern machine learning methods have made huge strides in recent years across many domains. Even so, the reliability of modern models remains poorly understood. Models often suffer from biases and overconfidence, and can fail unpredictably on new, unseen data. As these models are increasingly used in high-stakes domains such as medicine, policy, self-driving technology, and scientific discovery, establishing principled foundations for when and how to trust their predictions has become a pressing concern. Theory offers a unique foothold for understanding reliability by providing formal guarantees. However, our current understanding of deep learning and other practical methods falls far short of telling us when and why ML models fail, and how to foresee and avoid these failures.

This course will cover new approaches to statistical and computational foundations for the reliable use of powerful but opaque modern machine learning models, treating such models as black boxes that may be unreliable or trained on mismatched data.

Specific topics include Uncertainty Quantification (including approaches such as Conformal Prediction); Algorithms with Predictions and Data-Driven Algorithms; Reliability under Distribution Shift; Failure of Distributional Assumptions and connections to Robust Statistics; and Calibration.
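As a flavor of the black-box viewpoint, the sketch below illustrates split conformal prediction, one of the listed topics: a trained model is used only through its predictions, and held-out calibration data turns point predictions into intervals with a target coverage level. The function names and synthetic data here are illustrative assumptions, not course material.

```python
# A minimal sketch of split conformal prediction, treating the trained model
# purely as a black box. All names and data below are hypothetical examples.
import numpy as np

def conformal_interval(predict, X_calib, y_calib, X_test, alpha=0.1):
    """Return (lower, upper) prediction intervals with ~(1 - alpha) coverage."""
    # Nonconformity scores on held-out calibration data: absolute residuals.
    scores = np.abs(y_calib - predict(X_calib))
    n = len(scores)
    # Finite-sample-corrected quantile of the calibration scores.
    q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
    preds = predict(X_test)
    return preds - q, preds + q

# Usage with a toy black-box "model" (any opaque predictor would do).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=1000)
y = 2 * X + rng.normal(scale=0.3, size=1000)
black_box = lambda x: 2 * x
lo, hi = conformal_interval(black_box, X[:500], y[:500], X[500:], alpha=0.1)
print("empirical coverage:", np.mean((y[500:] >= lo) & (y[500:] <= hi)))
```

The guarantee holds without any assumptions on the model itself, only exchangeability of the calibration and test data, which is what makes the method a natural fit for unreliable black-box models.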

  • This course fulfills the Technical Elective area.

REFERENCE TEXTBOOKS: N/A
REQUIRED TEXTBOOK: N/A

COURSE COORDINATORS: Aravindan Vijayaraghavan 

COURSE INSTRUCTORS: Aravindan Vijayaraghavan, Vaidehi Srinivas