Empowering Human Knowledge for More Effective AI-Assisted Decision-Making

In today’s world, humans and machines are making decisions together. Doctors may use artificial intelligence (AI) systems to help diagnose diseases. Banks might deploy AI-based tools when deciding whether to approve someone’s loan. While people often rely on their knowledge of cause and effect and their ability to reason, machines make decisions using statistical inference.

Research from the Northwestern Center for Advancing Safety of Machine Intelligence (CASMI) is exploring key differences between human and AI judgment to support more effective human-AI decision-making. The project, called “Supporting Effective AI-Augmented Decision-Making in Social Contexts,” is carried out by researchers from Carnegie Mellon University (CMU); its principal investigator is Kenneth Holstein, an assistant professor at the CMU Human-Computer Interaction Institute (HCII).

“Our field work aims to understand what AI-augmented decision-making actually looks like in real-world contexts,” Holstein said. “What we find, again and again, is that there are a lot of complexities that aren't typically talked about or studied in the academic research on AI-augmented decision-making.”

Holstein traveled to Delft, the Netherlands, on Nov. 6-9 with Charvi Rastogi, a Ph.D. student in the CMU Machine Learning Department, and Anna Kawakami, a Ph.D. student at the CMU HCII, to present the research team’s findings at a joint convening of the Association for the Advancement of Artificial Intelligence (AAAI) Conference on Human Computation and Crowdsourcing (HCOMP) and the Association for Computing Machinery (ACM) Collective Intelligence (CI) Conference.

View media coverage of our news story at the following link: https://casmi.northwestern.edu/news/articles/2023/empowering-human-knowledge-for-more-effective-ai-assisted-decision-making.html
