Exploring the Connections Among Machine Learning, Interpretability, and Logic

The field of interpretability investigates what machine learning (ML) models are learning from training datasets, the causes and effects of changes within a model, and the justifications behind its predictions and conclusions — allowing users to evaluate the decision-making and trustworthiness of ML algorithms.

The five participating universities of the Institute for Data, Econometrics, Algorithms, and Learning (IDEAL) — Northwestern University, Illinois Institute of Technology, Toyota Technological Institute at Chicago, University of Chicago, and University of Illinois Chicago (UIC) — hosted a multi-day workshop last month exploring the connections among the fields of interpretability, machine learning, and logic.

The event was organized as part of the IDEAL Winter/Spring 2023 Special Program on Machine Learning and Logic by Shai Ben-David, professor of computer science at the University of Waterloo; Lev Reyzin, professor of mathematics, statistics, and computer science and IDEAL site director at UIC; and Gyorgy Turan, professor of mathematics, statistics, and computer science at UIC.

“This workshop brought together researchers from across the world to discuss the many exciting connections between the special program’s main theme, machine learning and logic, and cutting-edge work on interpretability, and we had many useful discussions around these topics that will help to set the research agenda for these fields,” Reyzin said. “I am also pleased that the workshop involved all five of the IDEAL sites and was able to include student speakers from the participating universities. This furthered the cross-institutional ties within Chicago’s data science community.”

This special program is the first of IDEAL Phase II, which aims to accelerate transformative advances in the theoretical foundations of data science through research and education programs on machine learning and optimization; high-dimensional data analysis and inference; and emerging topics including reliability, interpretability, privacy, and fairness.

Program

The workshop’s guest speakers addressed various topics in interpretability, including accuracy and intelligibility tradeoffs, knowledge representation, learning Markov models from data, neuro-symbolic learning and tractable deep generative models, regulating the use of ML systems, repeated multi-unit auctions with uniform pricing, and tree-based classifiers.

The speakers were:

  • Gilles Audemard (Artois University) — “Computing Explanations for Tree-based Classifiers”
  • Shai Ben-David (University of Waterloo) — “A Short Introduction to ML and Using Logic for Impossibility Results in ML”
  • Sebastian Bordt (University of Tübingen) — “Explanations and Regulation”
  • Simina Brânzei (Purdue University) — “Online Learning in Multi-unit Auctions for Carbon”
  • Rich Caruana (Microsoft Research) — “Friends Don’t Let Friends Deploy Black-Box Models: The Importance of Intelligibility in Machine Learning”
  • Lee Cohen (Toyota Technological Institute at Chicago) — “Finding Safe Zones of Markov Decision Processes Policies”
  • Zachary Lipton (Carnegie Mellon University) — “Responsible ML’s Causal Turn”
  • Gyorgy Turan (University of Illinois Chicago) — “Machine Learning Interpretability and Knowledge Representation”
  • Guy Van den Broeck (University of California, Los Angeles) — “AI Can Learn from Data. But Can it Learn to Reason?”

“I had a great time attending the workshop in Chicago, and it was great to reconnect with other researchers who are working on interpretable machine learning,” Bordt said.

Each afternoon of the workshop, students presented on interpretability topics, such as combinatorial models, competitive algorithms for explainable k-means clustering, deep neural network inference, finite model theory, healthcare predictions using supervised ML models, integrated gradients, learning automata via queries, ML systems in decision-making, multi-objective decision-making frameworks, and strategic behavior in screening processes.

The student speakers were:

  • Gregoire Fournier (University of Illinois Chicago) — “Finite Model Theory and Logics for AI”
  • Anmol Kabra (Toyota Technological Institute at Chicago) — “Reasonable Modeling Assumptions for Real-world Principal-agent Games”
  • Omid Halimi Milani (University of Illinois Chicago) — “Predicting Incident Hypertension in Obstructive Sleep Apnea Using Machine Learning”
  • Kavya Ravichandran (Toyota Technological Institute at Chicago) — “A Simple Image Model for Combinatorial Dictionary Learning and Inference”
  • Liren Shan (Northwestern University) — “Explainable k-Means: Don’t Be Greedy, Plant Bigger Trees!”
  • Yuzhang Shang (Illinois Institute of Technology) — “Neural Network Compression and its Application on Large Models”
  • Han Shao (Toyota Technological Institute at Chicago) — “Eliciting User Preferences for Personalized Multi-Objective Decision Making through Comparative Feedback”
  • Kevin Stangl (Toyota Technological Institute at Chicago) — “Sequential Strategic Screening”
  • Ruo Yang (Illinois Institute of Technology) — “A Framework to Eliminate Explanation Noise from Integrated Gradients”
  • Kevin Zhou (University of Illinois Chicago) — “Query Learning of Automata”

Graduate Student Conference in Logic

IDEAL and the Association for Symbolic Logic sponsored the 23rd Graduate Student Conference in Logic (GSCL) “Special Session on Logic, Algorithms, and Machine Learning,” held April 15–16 at UIC.

Organized by UIC PhD students Will Adkisson and Kevin Zhou, the GSCL is a weekend conference for graduate students studying mathematical logic. The special session is also part of the IDEAL Winter/Spring 2023 Special Program on Machine Learning and Logic.
