
Humans Have the Power to Decode Bias in AI

During a Q&A with filmmaker Shalini Kantayya, faculty, students, and the community examined the bias in algorithms that impact us all.

Algorithms make decisions for humans every day. Some decide who gets the COVID-19 vaccine first, while others determine what candidate gets a job or which person gets undue police scrutiny.

But these same systems have not been vetted for bias or discrimination — nor do they have standards for accuracy. A discovery made by MIT Media Lab researcher Joy Buolamwini revealed that facial recognition technology does not see dark-skinned faces accurately. 

That finding inspired Coded Bias, a 90-minute documentary created by director/producer Shalini Kantayya. The film, which premiered at the 2020 Sundance Film Festival to critical acclaim, explores how Buolamwini pushed for the first-ever US legislation to regulate bias in algorithms.

During the Northwestern Computer Science event “Decoding Bias: A Conversation with Filmmaker Shalini Kantayya” on February 23, faculty, students, and the community talked with the filmmaker about the harmful ways artificial intelligence (AI) and facial recognition technologies wield power, and how humans can intervene.

The virtual event was moderated by Northwestern Engineering’s Josiah Hester and Sarah Van Wart, and PhD students Natalie Araujo Melo and Stephanie Jones, both part of the Computer Science and Learning Sciences program.

How did biases in AI and machine learning (ML) technologies come to escape scrutiny? Part of the answer may be that people from different backgrounds are not talking to one another.

“Science and technology are currently in the hands of the few. But in order to solve big problems, the public needs to be engaged,” said Kantayya. “Lay people have this intense fear of feeling like an imposter when they engage in conversations about these technologies. But the reality is 10-year-olds are using them. We need a shared language and safe space where voices of dissent can be heard.” 

Hester, a Native Hawaiian and assistant professor of computer science and electrical and computer engineering, said he, too, has felt imposter syndrome as the only Indigenous person in his computing program. Seeing Buolamwini, a Black woman who was then a master’s student, take her discovery to Congress was an inspiration.

Kantayya said the fight for ethics in AI has been led by people of color, women, religious minorities, and LGBTQ folks.

“Three Black women scientists shined a light on bias in AI that the three biggest tech companies in the world missed,” Kantayya said. “That’s the reason it’s so important people like you [Hester] are in the room: people with identities that allowed them to see something that was missing.”

Even programmers with the best intentions can create bias in their code. When Amazon built an AI system to automate the search for top talent, Kantayya said, the model effectively erased women from the hiring pool.

“It’s striking because the first programmer was Ada Lovelace. One of the first compilers was built by Grace Hopper. It’s interesting how now, again, we rely on people like Joy to save us from ourselves,” Hester said.

Even CS educators who teach ethics, such as Van Wart, are reconsidering their role as engineers and how their technologies can impact the world. “We’re having a little bit of an identity crisis. We don’t want to cause harm or subject the world to our own tunnel vision,” Van Wart said.

One conclusion: oversight and regulation of AI and ML are needed, especially because of their scale.

“You have technology like facial recognition going from big tech companies in the US straight to law enforcement, ICE, and the FBI without an elected official in between to give oversight,” Kantayya said. “We need a vetting system like the FDA that enforces standards.”

Conversations between people from different industries — and different countries — are critical, too.

“A lot of times scientists are working in a space where they are not connected to the people who are most vulnerable to the potential harms and impact of their technologies,” Kantayya said. “The people who are most vulnerable need to have access to the science in an empowered way.”

PhD student Melo works with the Young People’s Race, Power, and Technology Project at Northwestern, an after-school program where Chicago students discuss the intersections of race, power, and technology. She reflected on the different countries Kantayya visited while producing Coded Bias, and how a conversation on facial recognition technology might look different in the UK, which has a framework for data rights as human rights, versus in China, where the government has unfettered access to its citizens’ data.

“The larger question is how can technology be in service of a human-centered society?” Kantayya said. “By visiting different countries, I wanted to explore the ways we’re approaching data rights.” 

“We have relationships with the things we create. We need to ask ourselves, ‘How do we want to be treated by a computer? How do we want other people to be treated by an algorithm?’” Jones added.

Kantayya issued a call to action to engineers, educators, and lay people alike: get educated about how bias can infiltrate everyday technologies. Resources are available on the Coded Bias website, including reading materials, discussion questions, an activist toolkit, and more.

“I made Coded Bias because I believe everyday people can make a difference, and the people in Coded Bias have shown that,” Kantayya said.