Workshop to Explore Sociotechnical Standards to Better Manage AI Risks

Artificial intelligence (AI) systems are being developed and publicly released faster than policymakers can effectively regulate them. While these technologies can detect numerous threats, such as cancer and wildfires, they may also amplify biased and discriminatory practices that hurt people. As the Northwestern Center for Advancing Safety of Machine Intelligence (CASMI) works to mitigate these harms, researchers will travel to Washington, D.C. to develop methods that promote AI safety.

CASMI is co-hosting a workshop on Oct. 16-17 in our nation’s capital to test and evaluate sociotechnical approaches for AI systems, focusing specifically on expanding the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework (AI RMF). The workshop, entitled “Operationalizing the Measure Function of the NIST AI Risk Management Framework,” will be collaboratively led by CASMI; Abigail Jacobs, assistant professor of information and of complex systems at the University of Michigan; the NIST-National Science Foundation (NSF) Institute for Trustworthy AI in Law & Society (TRAILS); and the Federation of American Scientists (FAS).

The workshop will gather AI experts from academia, industry, and government to create a testbed, a controlled environment for assessing AI systems. The goal is to better understand the technologies' performance and societal impact.

“This workshop is about evaluation, metrics, and measurement,” said Kristian Hammond, Bill and Cathy Osborn professor of computer science and director of CASMI. “How can we get to a real understanding of the impact of systems? If we are concerned about issues of harm, then we need to go beyond articulating harm to measuring harm, even if it is challenging to do so.”
