
Jon Kleinberg Discusses Algorithmic Decision Making

Cornell professor explored how the complexity of a classification rule interacts with its fairness properties during his virtual lecture

Somehow, your Netflix recommendations are almost always correct, and your search engine of choice almost knows what you’re looking for before you do. That leads some to argue that if online algorithms are so successful, maybe they can be used in offline situations, such as evaluating job applicants or potential college students, or judging whether a defendant would be a pre-trial flight risk.

But that doesn’t mean algorithmic decisions are without bias.

During his May 13 presentation, “Fairness and Bias in Algorithmic Decision Making,” Jon Kleinberg, Tisch University Professor in the Departments of Computer Science and Information Science at Cornell University, considered the key conditions at the heart of these fairness debates.

During the lecture, part of the 2019-20 Dean’s Seminar Series at Northwestern Engineering, Kleinberg said simplification and the ability to understand algorithms are part of an auditing pipeline that can help uncover bias and discrimination in algorithmic decision making. But simplification carries its own risks: it can create unintended incentives, and a simpler rule may no longer be optimal.

For human decisions, the risk of bias has long been studied, Kleinberg said in the talk, held via Zoom. “What’s been intriguing here is the ways in which algorithmic bias both partly resembles — but is partly distinct from — human bias.”

Humans can offer explanations for their choices, but the challenge is figuring out whether a given explanation is the real reason or an illusion, he said. Even people who genuinely intend to explain their processes may be wrong about them.

Algorithms present no such difficulties, provided regulations are in place for them to be properly examined, he said. This doesn’t require researchers to read the code, he added; it requires only access to how the algorithms were constructed.

“It’s entirely possible that well-regulated algorithms may make discrimination easier to detect,” Kleinberg said. “With algorithms, even though they’re very complicated, there’s a sense in which certain things are explicit in a way they just never can be with human beings.”

Algorithms have no direct incentive to exhibit bias. Instead, they are trying to optimize an objective function they’ve been given. The features they are fed, however, reflect existing human biases, and therefore can introduce bias into the algorithm’s decisions.
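That mechanism can be sketched in a few lines of Python. Everything in this toy example, including the applicants, the “neighborhood” proxy feature, and the biased historical labels, is an illustrative assumption, not data or code from the talk: a rule that never reads group membership, fit to biased past outcomes, can still produce disparate results.

```python
# Toy sketch (illustrative assumptions only): a decision rule that never
# looks at group membership can still import historical bias through a
# proxy feature, because it optimizes agreement with biased past labels.

# Each applicant: (group, neighborhood, truly_qualified)
applicants = [
    ("A", "north", True), ("A", "north", True),
    ("A", "north", False), ("A", "south", True),
    ("B", "south", True), ("B", "south", True),
    ("B", "south", False), ("B", "north", True),
]

def historical_label(group, neighborhood, qualified):
    """Recorded past outcomes: qualified group-B applicants from the
    'south' neighborhood were historically rejected, so the training
    data under-counts them."""
    return qualified and not (group == "B" and neighborhood == "south")

def rule(neighborhood):
    """A simple rule that best matches the biased labels using only
    the neighborhood proxy -- it never reads group membership."""
    return neighborhood == "north"

def accuracy(target):
    """Fraction of applicants where the rule agrees with a target labeling."""
    return sum(rule(n) == target(g, n, q) for g, n, q in applicants) / len(applicants)

def approval_rate(group):
    """Fraction of a group's applicants the rule approves."""
    members = [n for g, n, _ in applicants if g == group]
    return sum(rule(n) for n in members) / len(members)

print(accuracy(historical_label))              # 0.75: looks good on biased data
print(accuracy(lambda g, n, q: q))             # 0.5: worse against true qualification
print(approval_rate("A"), approval_rate("B"))  # 0.75 vs 0.25: disparate outcomes
```

The rule scores well against the biased historical labels it was chosen to fit, yet approves group A at triple the rate of group B, which is exactly the route by which biased features, not any explicit intent, bias the decisions.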

However, algorithms are becoming increasingly difficult to interpret, he said.

Simplification transforms disadvantage into bias, much as humans fall back on stereotypes: when information about somebody is missing, one falls back on attributes like group membership as a proxy, he said.

“Algorithms both allow us to think about our own human decision making, and they provide alternatives to human decision making that we can probe in ways that are simply impossible with human beings,” he said.

Kleinberg’s research focuses on the interaction of algorithms and networks, and the roles they play in large-scale social and information systems. He is a member of the National Academy of Sciences and the National Academy of Engineering. He is the recipient of MacArthur, Packard, Simons, Sloan, and Vannevar Bush research fellowships, as well as awards including the Harvey Prize, the Nevanlinna Prize, and the ACM Prize in Computing.

The Dean’s Seminar Series is a school-wide lecture series, and has included talks this academic year by Kleinberg, Heather Stern, and Dario Robleto.