Engineering News

Who is Responsible When Autonomous Systems Fail?

A panel of experts including Northwestern Engineering’s Todd Murphey explored the legal, technological, and societal implications of assigning responsibility

As artificial intelligence becomes a bigger part of our lives, assigning responsibility when it fails raises legal, technological, and societal questions.

On March 18, 2018, as Elaine Herzberg pushed a bicycle across a street in Tempe, Arizona, she was struck and killed by an Uber test vehicle operating in self-driving mode. After investigations by local authorities and the National Transportation Safety Board (NTSB), Uber's human safety driver, Rafaela Vasquez, was found to have been watching "The Voice" on her phone just before the incident. In September, she was charged with negligent homicide.

Uber settled out of court with Herzberg's family and ceased testing in Arizona. The ride-sharing giant faced no criminal charges, although the NTSB found flaws in the vehicle's mechanisms and the company's safety culture. For instance, Uber had disabled the car's built-in automatic emergency braking system, which could have detected the victim.


The incident brings into stark relief important questions about artificial intelligence (AI) as it becomes a bigger part of our daily lives. On November 18, a panel of experts including Northwestern Engineering’s Todd Murphey explored the legal, technological, and societal implications of assigning responsibility during the virtual event “Autonomous Systems Failures: Who is Legally and Morally Responsible?”

Sponsored by Northwestern University's Law and Technology Initiative and AI@NU, the event was moderated by Dan Linna, senior lecturer and director of law and technology initiatives at Northwestern, who holds a joint appointment at the McCormick School of Engineering and the Pritzker School of Law. The panel also included University of Washington Law Professor Ryan Calo and Google Senior Research Scientist Madeleine Clare Elish, and it drew an audience of almost 300 people.

Software and technology can fail when the stakes are low, like when a Microsoft Word document is erased or somebody can't join a Zoom call. But what happens when a software problem has physical consequences, as it did when Herzberg was killed, or when flawed software design led the Boeing 737 MAX's automated flight-control system to repeatedly push the aircraft's nose down?

In each case, user error took part of the blame, but that doesn't paint a complete picture. Calo argued that autonomous transportation needs far more threat modeling, and that the systems need to be as readable and transparent as possible.

“Society, and especially societal institutions, need to get a heck of a lot better at human-computer interaction,” Calo said. “We need to get a lot more sophisticated at what happens when you put these complex machines into a picture with multiple human beings and other machines.” 


Elish noted that throughout the history of technology, some advances have been billed as ways to do away with humans and human error. Of course, this has never happened. Technology is never perfect, and while automation and AI rearrange how humans work together, the pairing often obscures the human input required to make the technology function.

Until an accident happens, Elish pointed out, the capability of technology is overestimated while the human role in operating the system is underestimated. That flips when something goes wrong.

While scrutiny turned from Uber to Vasquez, the company resumed testing elsewhere by late 2018. This is the latest example of what Elish has termed a "moral crumple zone," in which a human operator absorbs the blame for a complex system's failure, insulating the technology itself.

“Human-in-the-loop design is often called upon as a means of control and agency, but that doesn’t always work out in reality, because it matters a great deal how the human is positioned in the loop and whether they are empowered or disempowered to act,” Elish said.

Whether the discussion is about driverless cars and liability or how to regulate AI broadly, the human in the loop can't be treated as the only responsible party, she added. Instead, there needs to be a more specific conversation about how control and responsibility are fairly and appropriately aligned.

Transparency with autonomous technology is key.

“The call to action for me is a shift in the technical culture, and maybe just a shift towards always being explicit about the things that don’t work as well as you were hoping they would work,” said Murphey, a mechanical engineering professor and member of the Center for Robotics and Biosystems.

Falling short of expectations should no longer be a deal breaker in terms of market value, he said.

“How to convey that (information) and air dirty laundry in a way that is healthy and allows industries to thrive,” Murphey said, “(while enabling) consumers to understand what they’re buying” will be a key step towards transparency.