Second Adobe and Northwestern Computer Science Workshop 2021

February 26, 2021
10:30 a.m. - 1:30 p.m. CST

During the second Adobe Research and Northwestern Computer Science collaborative workshop, CS faculty, PhD students, and Adobe Research scientists will share progress and findings from joint research. The workshop will cover research at the intersection of human-computer interaction, artificial intelligence and machine learning, and algorithms.

For Zoom registration, contact Pamela Villalavoz at pmv@northwestern.edu

Talks and Speakers

10:30 – 10:35 a.m.
Welcome and introduction

Samir Khuller and Shriram Revankar
Chair: Eunyee Koh

10:35 – 10:50 a.m.
Keynote "AI is the new user experience"

Vasanthi Holtcamp

10:50 – 11:35 a.m.
Session 1

Human-AI interaction in sketch-based companion cognitive systems, predictive systems with uncertainty visualization, and AI/ML tools for medical decision-making (15 minutes per speaker)
Chair: Fan Du

Kenneth Forbus
Software companions you can sketch with

Abstract: People often use sketching when thinking through ideas, especially with other people. Person-to-person sketching involves a combination of drawing and language, since open-ended sketching cannot assume a closed set of entities. Moreover, sketching builds up a shared history: People refer to prior sketches, and conventions for how to depict concepts get quickly built up and reused to facilitate future conversations. This talk outlines some research my group is doing aimed at creating Companion cognitive systems that can participate in sketching conversations, using a combination of qualitative representations and analogical reasoning/learning to perform human-like understanding of sketches.

Matthew Kay
Uncertainty visualization as a moral imperative

Abstract: Uncertain predictions permeate our daily lives (“will it rain today?”, “how long until my bus shows up?”, “who is most likely to win the next election?”). Fully understanding the uncertainty in such predictions would allow people to make better decisions, yet predictive systems usually communicate uncertainty poorly—or not at all. I will discuss ways to combine knowledge of visualization perception, uncertainty cognition, and task requirements to design visualizations that more effectively communicate uncertainty. I will also discuss ongoing work in systematically characterizing the space of uncertainty visualization designs and in developing ways to communicate (difficult- or impossible-to-quantify) uncertainty in the data analysis process itself. As we push more predictive systems into people’s everyday lives, we must consider carefully how to communicate uncertainty in ways that people can actually use to make informed decisions.

Maia Lee Jacobs
Bringing AI to the bedside with user-centered design

Abstract: In medicine, the integration of artificial intelligence (AI) and machine learning (ML) tools could lead to a substantial paradigm shift in which human-AI collaboration becomes integrated into medical decision-making. Despite many years of enthusiasm for these technologies, the vast majority of these tools fail once they are deployed in the real world, often due to failures in workflow integration and interface design. In this talk, I will review a series of studies that use methods from human-computer interaction (HCI) to design machine learning tools for real-world clinical use. I will show how current trends in explainable AI can lead to worse performance in clinical decision-making, and describe how we can use iterative, user-centered design processes to create machine learning tools that support complex medical decisions.

11:35 – 11:40 a.m.
Break

11:40 a.m. – 12:40 p.m.
Session 2

Human-computer interaction: belief updating, conversational recommendation, and data presentations (15 minutes per speaker)
Chair: Jane Hoffswell

Emily Beth Wall
Causality & Confirmation: Exploring the effects of causal priors and confirmation bias on belief updating

Abstract: As people encounter new information in the world, they iteratively form and update their beliefs. Prior work tells us that a number of factors affect how people combine information into updated beliefs, including how uncertain the information is, how salient it is, and so on. In some cases, people update their beliefs in sub-optimal ways. For instance, people systematically make errors in statistical reasoning, such as neglecting the base rate (e.g., people often overestimate the likelihood of a given disease based on a highly accurate positive test result when the disease prevalence, or base rate, is actually quite low). In this talk, we empirically explore two possible factors that influence belief updating: (1) the effect of causality (when new information has a causal explanation, e.g., the belief that COVID-19 cases are declining due to widespread dissemination of vaccines) and (2) confirmation bias, people's natural tendency to accept information that agrees with their pre-existing beliefs.
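
As a concrete illustration of the base-rate error mentioned above (with illustrative numbers, not figures from the talk): suppose a disease has 1% prevalence and a test is 90% sensitive and 90% specific. Bayes' rule gives

\[ P(\text{disease} \mid +) = \frac{0.9 \times 0.01}{0.9 \times 0.01 + 0.1 \times 0.99} \approx 0.08, \]

so even after a positive result from this fairly accurate test, the probability of disease is only about 8%, far lower than most people intuit.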

Victor Bursztyn
Conversational recommendation with open-ended preferences

Abstract: Conversational recommendation systems (CRS) are dialog-based systems that can refine a set of options over multiple turns of a conversation. We present an open-ended approach to user modeling in a CRS that uses sequential critiques to build a model of user preferences. In the setting we explore, users need to make the best possible decision among a limited number of options, such as finding the best restaurant within walking distance. Like a competent and attentive human salesperson, our system actively asks for feedback, infers preferences from freely expressed comments in natural language, and tries to persuade the user with recommendations backed by real customer testimonials. A critical feature of our approach is that it is open-ended: we do not specify item attributes ahead of time. Working within the restaurant domain, we noticed that it may be hard for a CRS to directly use open-ended negative feedback (e.g., "That's not good for a date"), since such critiques may not match restaurant attributes as expressed in customer reviews; we therefore transform these critiques into positive preferences (e.g., "I prefer more romantic") by using a large language model in a few-shot setting. In two pilot studies, our novel open-ended critique understanding method responded accurately 82% of the time in the wild, compared to a theoretical baseline of 89%, and users followed our recommendations 57% of the time when critique understanding worked.
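
As a rough sketch of the critique-to-preference step described above, the Python snippet below assembles a few-shot prompt for a language model. The prompt wording, the example pairs, and the `complete` callable are illustrative assumptions, not the actual prompts or API used in this work.

```python
# Minimal sketch of few-shot critique-to-preference rewriting.
# All prompt text and examples here are hypothetical.

FEW_SHOT_EXAMPLES = [
    ("That's not good for a date", "I prefer more romantic"),
    ("Too noisy for a business lunch", "I prefer quieter"),
    ("Looks a bit pricey", "I prefer more affordable"),
]

def build_prompt(critique: str) -> str:
    """Assemble a few-shot prompt asking a large language model to
    rewrite an open-ended negative critique as a positive preference."""
    parts = ["Rewrite each restaurant critique as a positive preference."]
    for neg, pos in FEW_SHOT_EXAMPLES:
        parts.append(f"Critique: {neg}\nPreference: {pos}")
    parts.append(f"Critique: {critique}\nPreference:")
    return "\n\n".join(parts)

def critique_to_preference(critique: str, complete) -> str:
    """`complete` is any text-completion callable (an LLM endpoint);
    it is left abstract here rather than tied to a specific API."""
    return complete(build_prompt(critique)).strip()
```

The rewritten preference (e.g., "I prefer more romantic") can then be matched against attribute language in customer reviews, which is the step where the raw negative critique tends to fail.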

Cindy Xiong
Data arrangement of a chart can afford different viewer conclusions

Abstract: Well-chosen data visualizations can lead to powerful and intuitive processing by a viewer, both for visual analytics and for data storytelling. When poorly chosen, a visualization can leave important patterns obscured, misunderstood, or misrepresented. Designing a good visualization requires multiple forms of expertise, weeks of training, and years of practice. Even then, designers still need ideation and several critique cycles before they can create an effective visualization. Current visualization recommender systems formalize existing design knowledge into rules that can be processed by a multiple-constraint-satisfaction algorithm. They use these rules to make design decisions, such as whether data plotted over time should be shown as lines or in discrete bins as bars. One fundamental problem with existing recommenders is that they can correctly recommend a visualization type but offer little to no guidance on how to arrange the data within the visualization, even though the same data values can be grouped differently by spatial proximity. In a bar chart, for instance, the bars can be stacked on top of one another or placed side by side. These different arrangements can influence what conclusions viewers draw from the data. In one arrangement, a viewer could focus on mean comparisons and conclude that the mean of one group of bars is higher than another's, while another arrangement may elicit a comparison of differences, leading to the conclusion that one difference is smaller than another. We examine how different data arrangements affect what the typical viewer will see in a visualization, to help create visualization recommenders that ensure a viewer sees the 'right' story in a dataset.
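
To make the arrangement effect concrete, here is a small matplotlib sketch with made-up data that renders the same two series both ways; the stacked view invites reading group totals, while the grouped view invites within-pair comparisons. The data and labels are purely illustrative.

```python
# Same data, two spatial arrangements: stacked vs. grouped bars.
import numpy as np
import matplotlib.pyplot as plt

labels = ["Q1", "Q2", "Q3", "Q4"]
a = np.array([3, 5, 4, 6])  # hypothetical series A
b = np.array([2, 4, 5, 3])  # hypothetical series B
x = np.arange(len(labels))

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

# Stacked: emphasizes totals and part-whole structure per quarter.
ax1.bar(x, a, label="A")
ax1.bar(x, b, bottom=a, label="B")
ax1.set_title("Stacked")

# Grouped: emphasizes A-vs-B comparison within each quarter.
width = 0.35
ax2.bar(x - width / 2, a, width, label="A")
ax2.bar(x + width / 2, b, width, label="B")
ax2.set_title("Grouped")

for ax in (ax1, ax2):
    ax.set_xticks(x)
    ax.set_xticklabels(labels)
    ax.legend()
plt.tight_layout()
plt.show()
```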

Hyeok Kim
Quantifying message loss for automated responsive visualization recommendation

Abstract: Increased mobile access to data visualization requires authors to tailor representations for different screen sizes, a practice referred to as responsive visualization. Responsive visualization design often requires significant further design iteration to find small-screen views that preserve the intended patterns of a large-screen design while also satisfying constraints like appropriate information density for the smaller screen. To enable automated recommendation of design alternatives for authors of responsive visualizations, we explore approaches to quantifying message preservation between pairs of views comprising a source (e.g., large-screen) and a target (e.g., small-screen) visualization design. In this talk, I will describe how we quantify common types of visualization message (identification, comparison, and trend) and show how our quantification works in a prototype recommendation system for responsive visualization.
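
As a toy illustration of what quantifying message preservation might look like, the sketch below scores an identification message and a comparison message between a source and a target view. Both measures and the data are illustrative stand-ins, not the loss functions from this work.

```python
# Toy stand-ins for message-preservation scores between a source
# (large-screen) and target (small-screen) view.
from itertools import combinations

def identification_loss(source_vals, target_vals):
    """Share of individually identifiable marks lost in the target
    (e.g., after aggregating points to fit a smaller screen)."""
    return 1 - len(target_vals) / len(source_vals)

def comparison_loss(source_vals, target_vals):
    """Fraction of pairwise orderings that flip between two views of
    the same data points."""
    pairs = list(combinations(range(len(source_vals)), 2))
    flips = sum(
        (source_vals[i] > source_vals[j]) != (target_vals[i] > target_vals[j])
        for i, j in pairs
    )
    return flips / len(pairs)

# Hypothetical examples.
print(identification_loss(list(range(12)), list(range(6))))  # 0.5
print(comparison_loss([3, 4, 6, 5], [3, 4, 5, 6]))           # ~0.17, one flip
```

A recommender could then search over candidate small-screen designs and prefer those with the lowest combined loss.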

12:45 – 1:30 p.m.
Session 3

Inferring, estimating, and scheduling in marketplaces, whether for ad-click auctions or spot instances (15 minutes per speaker)
Chair: Ryan Rossi

Aleck Johnsen
Inferring values from simple-estimate dashboards

Abstract: Consider the setting of a marketplace for a repeated single-item auction, in which distinct buyers arrive each round and possibly purchase from a number of ever-present agents; ad-click auctions are one example. In practice, the marketplace itself is a non-truthful mechanism designed to sell stochastic priority to the agents, given an arbitrary objective function of its choice (revenue, total welfare, ethical considerations, mixtures of these, etc.). To maximize its objective, the marketplace has three overlapping subtasks: (1) identify a good mechanism; (2) encourage agents to behave in ways that achieve the potential of the good mechanism while respecting their rationality (and avoid bad equilibria); and (3) identify agents' private underlying values, which are not observed directly. This talk outlines the techniques of a first theoretical solution for all of these tasks using dashboards, which are predicted price-allocation curves published privately to each agent in each round (EC'19). Building on this theoretical foundation, the talk further proposes studying a relaxed, very simple model for estimating dashboards, in order both to inform the agents' behavior and to infer their values as part of the mechanism.
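
As a toy sketch of the dashboard loop described above (the curves, grids, and payoff form are illustrative assumptions, not the EC'19 construction): the marketplace publishes a predicted price-allocation curve, the agent best-responds to it, and the marketplace inverts that best response to infer the agent's private value.

```python
# Toy dashboard: a published bid -> (allocation, expected payment) curve.
import numpy as np

bids = np.linspace(0.0, 1.0, 101)   # candidate bids on a grid
alloc = bids ** 2                   # predicted win probability per bid
price = 0.5 * bids * alloc          # predicted expected payment per bid

def best_response(value: float) -> float:
    """The agent picks the bid maximizing expected utility against the
    published dashboard."""
    utility = value * alloc - price
    return bids[np.argmax(utility)]

def infer_value(observed_bid: float) -> float:
    """The marketplace inverts the best-response map on a value grid."""
    values = np.linspace(0.0, 1.0, 101)
    responses = np.array([best_response(v) for v in values])
    return values[np.argmin(np.abs(responses - observed_bid))]

v = 0.7
b = best_response(v)
print(b, infer_value(b))  # the inferred value lands close to 0.7
```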

Sheng Yang
Scheduling on spot instances

Abstract: Cloud providers rent out surplus computational resources as spot instances at a deep discount. However, these cheap spot instances are revocable: when demand surges for higher-priced on-demand instances, cloud providers can interrupt spot instances after a brief alert. Such unreliability makes it challenging to use spot instances for many long-running, stateful jobs, but with checkpoints and restoration, machine-learning (ML) training jobs are good candidates for overcoming this difficulty. We face a trade-off between low cost and uninterrupted computation. We model interruptions as known stochastic processes and the diminishing returns in utility for ML training as a monotone submodular function, and study the problem of scheduling in the presence of spot instances so as to maximize the total utility obtained under a given budget. The problem reduces to a variant of the stochastic correlated knapsack problem with a submodular objective function. Using the stochastic continuous greedy algorithm and a contention resolution scheme, we obtain a \((1 - 1/\sqrt{e})/2 \simeq 0.1967\) approximation algorithm, improving on previous work.
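
A toy back-of-the-envelope version of the cost/utility trade-off above (prices, interruption rate, and the utility curve are all made up): with diminishing returns, the extra work that cheap spot capacity buys can outweigh the work lost to interruptions.

```python
# Illustrative diminishing-returns utility for ML training under a budget.
import math

def utility(work_units: float) -> float:
    """Monotone with diminishing returns (e.g., accuracy gains flatten out)."""
    return 1 - math.exp(-0.5 * work_units)

budget = 10.0
spot_price, on_demand_price = 1.0, 3.0  # hypothetical prices per work unit
p_lost = 0.3                            # hypothetical fraction lost to interruptions

spot_work = (budget / spot_price) * (1 - p_lost)   # more units, some wasted
on_demand_work = budget / on_demand_price          # fewer units, none wasted

print(utility(spot_work), utility(on_demand_work))  # ~0.97 vs. ~0.81
# Checkpointing shrinks the lost fraction further, which is why ML
# training jobs are good candidates for spot instances.
```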

Michalis Mamakos
Estimating models of learning agents

Abstract: In this paper, we provide conditions that can serve as a basis for parameter estimation and inference in models where agents conduct some form of no-regret learning. Such conditions lead to moment inequalities to which methods from econometrics can be applied to carry out the statistical tasks. The model can be either a strategic game or an environment with non-interacting agents that make decisions under uncertainty. Our approach allows for incomplete-information games and agents with different selection mechanisms through which decisions are generated. We apply our approach to Monte Carlo simulations in order to assess its validity, and to data from an experiment on contests conducted in the literature.
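
As a toy numerical sketch of this estimation idea (the quadratic payoff, simulated data, and regret bound are illustrative assumptions, not the paper's model): if observed decisions are assumed to satisfy a no-regret condition, then the true parameter must make the average regret of those decisions small, yielding a moment inequality whose solution set is the estimate.

```python
# Set estimation from a no-regret moment inequality (toy example).
import numpy as np

rng = np.random.default_rng(0)
T = 500
states = rng.normal(size=T)      # observed covariates
theta_true = 1.5

# Agents act approximately optimally: under the hypothetical payoff
# -(a - theta*s)^2 the optimal action is a = theta*s.
actions = theta_true * states + rng.normal(scale=0.3, size=T)

def avg_regret(theta: float) -> float:
    """Average shortfall of observed actions vs. the best action,
    evaluated as if `theta` were the true parameter."""
    return float(np.mean((actions - theta * states) ** 2))

# Moment inequality: keep every theta whose implied regret is below
# the assumed no-regret bound.
grid = np.linspace(0.0, 3.0, 301)
bound = 0.2
identified = grid[np.array([avg_regret(t) for t in grid]) <= bound]
print(identified.min(), identified.max())  # an interval around 1.5
```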

Event Organizers

Samir Khuller
Peter and Adrienne Barris Chair and Professor of Computer Science

Shriram Revankar
VP and Fellow, Adobe Research Labs