Examining Trust and Reliability at the Intersection of Computer Science and Law

During a two-day workshop in October, IDEAL explored reliability and trust in machine learning and artificial intelligence systems through the lens of computer science, law, and policy.

The massive datasets that power machine learning algorithms and systems are complex, noisy, and vulnerable to various kinds of errors, contamination, and adversarial corruptions. As data science and machine learning are increasingly deployed across the decision-making pipeline, designing provably reliable and trustworthy methods and systems is imperative.

Last month, the Institute for Data, Econometrics, Algorithms, and Learning (IDEAL) hosted a two-day workshop exploring the critical intersection of computer science, law, and policy in addressing the challenges and opportunities associated with ensuring reliability and trust in machine learning and artificial intelligence systems.

The event was organized as part of the IDEAL Fall 2023 Special Program on Trustworthy and Reliable Data Science by Daniel W. Linna Jr., senior lecturer and director of law and technology initiatives at Northwestern; Mesrob Ohannessian, assistant professor of electrical and computer engineering at the University of Illinois Chicago (UIC); and Gyorgy Turan, professor of mathematics, statistics, and computer science at UIC.

Convening experts, researchers, legislators, and regulators from a variety of fields, the workshop organizers aimed to foster collaboration and understanding between computer scientists and legal and policy experts.

During his opening remarks, Linna discussed how transformational technologies like artificial intelligence and machine learning profoundly impact the legal landscape, both in terms of how computation is changing legal systems and law practice and how law governs technology.

“CS+Law has a bi-directional relationship,” Linna said. “During this workshop, we’ll examine how CS+Law research and the shared foundations of the two disciplines can help support the design, development, and validation of trustworthy and reliable systems.”

Northwestern’s IDEAL site director Aravindan Vijayaraghavan, associate professor of computer science and (by courtesy) industrial engineering and management sciences at Northwestern Engineering, also welcomed attendees to the workshop held in Mudd Hall, hosted by Northwestern Computer Science. He noted IDEAL’s multi-year investigation of CS+Law topics, starting with the IDEAL Phase 1 2021 Special Quarter on Data Science and Law co-organized by Linna and Jason Hartline, professor of computer science at the McCormick School of Engineering.

The workshop focused on three main topics: foundations in CS+Law, generative AI, and the regulation of and legislation around AI.

The “Foundations in CS and Law for Reliability and Trust” session provided overviews of the principles and technologies of reliability and trust through a CS and legal lens and included a presentation from Daniel B. Rodriguez, Harold Washington Professor of Law at Northwestern’s Pritzker School of Law.

Linna then moderated a panel discussion which aimed to provide insights on how interdisciplinary CS+Law research can contribute to shaping trustworthy and reliable AI. Panelists included Rodriguez, Ana Marasović (University of Utah), Robert Sloan (UIC), and Charlotte Tschider (Loyola University Chicago School of Law).

During the “Introduction to Generative AI and Specifics of Reliability and Trust in CS and Law” section, speakers introduced the technical, legal, and regulatory landscape of generative AI and analyzed specific AI uses and challenges, such as platform regulation, content moderation, misinformation, privacy in machine learning, cybersecurity, conversational AI, and automating legal advice and decision-making.

Northwestern Engineering's V.S. Subrahmanian presented “Judicial Support Tool: Finding the k-Most Likely Judicial Worlds.” Subrahmanian is Walter P. Murphy Professor of Computer Science and a faculty fellow at Northwestern Roberta Buffett Institute for Global Affairs.

Ermin Wei, associate professor of electrical and computer engineering and industrial engineering and management sciences at Northwestern Engineering, discussed “Incentivized Federated Learning and Unlearning.”

Sabine Brunswicker (Purdue University) presented research conducted with Linna on “The Impact of Empathy in Conversational AI on Perceived Trustworthiness and Usefulness: Insights from a Behavioral Experiment with a Legal Chatbot.” Applying a novel behavioral theory of empathy, Brunswicker and Linna designed a legal chatbot that integrates a rule-based logic for empathy in language display using syntactic and rhetorical linguistic elements.

In addition, Paul Gowder, professor of law and associate dean of research and intellectual life at Pritzker, described the institutional preconditions of trust and safety work in the talk “The Networked Leviathan: For Democratic Platforms.”

Finally, the “AI Policy and Law” segment featured talks exploring AI risks and benefits; AI and legal reasoning; and emerging and future policy, legislation, and regulation, including the European Union AI Act, the 2023 proposed US Algorithmic Accountability Act, and the task force on generative AI in Illinois.

Sarah Lawsky, the Stanford Clinton Sr. and Zylpha Kilbride Clinton Research Professor of Law and vice dean of Pritzker, presented “Formal Methods and the Law.”

Linna also moderated a panel on legislation and regulation of AI with Tom Lynch, chief information officer and head of the Cook County Bureau of Technology, and Illinois State Rep. Abdelnasser Rashid of the 21st House District.

Additional speakers at the workshop included:

  • Kevin Ashley (University of Pittsburgh) – “Modeling Case-based Legal Argument in an Age of Generative AI”
  • Anthony J. Casey (University of Chicago Law School) – “Your Self Driving Law Has Arrived”
  • Aloni Cohen (University of Chicago) – “Control, Confidentiality, and the Right to be Forgotten”
  • Aziz Huq (University of Chicago Law School)
  • Kangwook Lee (University of Wisconsin-Madison) – “Demystifying Large Language Models: A Comprehensive Overview”
  • Michael Maire (University of Chicago) – “Landscape of (non-LLM) Generative AI”
  • David McAllester (Toyota Technological Institute) – “Generative AI and Large Language Models Reliability and Trust Issues from CS Perspectives”

McCormick News Article