Developing Guardrails for Artificial Intelligence

A two-day conference at Northwestern offered lessons for MSAI students on how to work safely with artificial intelligence.

To Sarah Spurlock, unleashing artificial intelligence (AI) without safeguards is like letting a tiger out of its cage at a crowded zoo.  

Sure, maybe nothing bad happens. But something truly awful could.

Spurlock is the associate director of the Center for Advancing Safety of Machine Learning (CASMI), a Northwestern University-based research network that is establishing best practices for the evaluation, design, and development of machine intelligence.

The group hosted an event in January of particular interest to students in Northwestern Engineering's Master of Science in Artificial Intelligence (MSAI) program. Dubbed “Toward a Safety Science of AI,” the event focused on advancing the discussion to develop a framework that ensures AI works for the benefit of humanity without causing harm.   

“We're seeing systems just sort of being unleashed on the general public and then seeing what happens,” Spurlock said. “They can have consequences that are devastating.”   

For example, a paper in Nature Machine Intelligence demonstrated the potential for AI to turn good intentions into negative results. In less than six hours, an AI system designed to search for helpful drug compounds wound up generating 40,000 potentially lethal molecules — some similar to VX, a human-made chemical warfare agent that is one of the most toxic nerve agents, according to the U.S. Centers for Disease Control and Prevention.

That’s just one of many undesirable AI outcomes in recent months. Incidents of unintended consequences from unfettered AI have become so common, in fact, that Sean McGregor, one of the speakers at the CASMI event, created the AI Incident Database to track them.

As of mid-March, more than 1,000 entries had been made in the database, including: 

  • AI using the voices of celebrities such as Joe Rogan and Emma Watson to generate racist and homophobic rants. 
  • A couple in Canada scammed out of $21,000 after getting a call from an AI-generated voice pretending to be their son.  
  • A chess-playing AI robot that grabbed a 7-year-old boy’s hand and broke his finger.  

Spurlock said incidents like the ones chronicled on McGregor’s website are why events like the two-day CASMI conference are so important.   

“We want to figure out how we harness all the great things that AI could do in ways that also safeguard against those potential harms,” she said. “We really see safety as a foundational element that has been important to many technologies as they've been developed.”  

Training students to think about the safety and ethical challenges that come with AI is a hallmark of the MSAI program — and one that Spurlock said needs to be emphasized. 

“Our MSAI students are here building really great skill sets in their technical areas,” she said. “It’s also important that they think about how with great power comes great responsibility.”

The conference was just the beginning for CASMI. Spurlock said more events and additional opportunities for MSAI students to learn about AI safety are coming soon.

In the meantime, MSAI students should never think of blazing the AI trail alone. The stakes are too high, and the consequences too great, not to work with others from different backgrounds and points of view, Spurlock said.

“Ultimately you want your technology to be impactful and positive, and that takes more than just you,” she said. “Utilize your time at Northwestern to understand the contexts where your skill set can be used and how it can be best utilized in a team and collaborative space.”  
