Assessing the Admissibility of Artificial Intelligence Evidence

Speakers introduced the fundamentals of AI and explained the evidentiary principles governing AI evidence in civil and criminal cases

The rapid, transformational growth of artificial intelligence (AI) technology will profoundly impact the legal landscape. That growth requires lawyers, judges, and consulting and testifying experts to stay knowledgeable about evolving AI topics and the issues that govern the admissibility of AI evidence in civil and criminal cases.

“One of the phenomena of AI is that the goal posts are always moving,” said Maura R. Grossman, research professor of computer science at the University of Waterloo. “It’s simply what a computer can’t do until it can and, once we get used to it, we just call it software.”

On February 3, Grossman and the Honorable Paul W. Grimm, District Judge for the United States District Court for the District of Maryland, presented the virtual seminar “Artificial Intelligence as Evidence,” based on a paper of the same name published by the Northwestern Journal of Technology and Intellectual Property (JTIP). Gordon V. Cormack, professor of computer science at the University of Waterloo, is a co-author.

Grimm is a member of the American Law Institute and an adjunct professor at the University of Maryland Francis King Carey School of Law and the University of Baltimore School of Law. He served on the Civil Rules Advisory Committee and chaired the Discovery Subcommittee that wrote amendments to the Federal Rules of Civil Procedure approved by the Supreme Court of the United States in 2015.

Grossman is an adjunct professor at Osgoode Hall Law School of York University and an affiliate faculty member of the Vector Institute for Artificial Intelligence. She is a principal at Maura Grossman Law, an eDiscovery law and consulting firm in Buffalo, New York.

The event was sponsored by the Northwestern University Law and Technology Initiative, a partnership between Northwestern Engineering and Northwestern’s Pritzker School of Law, Artificial Intelligence at Northwestern (AI@NU), Northwestern Law’s High Tech Law Society, and JTIP. Daniel W. Linna Jr., senior lecturer and director of law and technology initiatives at Northwestern, who has a joint appointment at the McCormick School of Engineering and Northwestern Law, organized and moderated the event.

“As AI is introduced as evidence in courts and used to provide legal services, we need multidisciplinary teams of lawyers and technologists working together to capture the benefits and mitigate the risks,” Linna said. “We need lawyers who have a functional understanding of the technologies and technologists who understand the values and goals of the law.”

AI fundamentals

Grossman discussed the fundamentals of AI, including a general overview of what it is and how it works, and the confluence of factors driving AI's rapid growth: massive increases in both the volume of data and processing speed, significant decreases in the cost of storage, and lower barriers to entry thanks to open-source communities.

“We carry around more computing power in our pocket than what landed humans on the moon,” Grossman said.

Grossman highlighted the ubiquitous nature of AI in both public and private sectors and outlined examples of applications in the fields of health care, education, employment-related decision-making, transportation, finance, and law enforcement. She presented various applications of AI in the legal profession, including technology-assisted review and analytics in eDiscovery, contract analysis, litigation outcome forecasting, and jury pool evaluation.

Grossman explained the concept of robustness in testing as it relates to an AI application's validity (how accurately the AI system performs its intended task) and its reliability (how consistently it produces accurate results).
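The distinction can be made concrete with a small illustration. The following minimal Python sketch is not drawn from the seminar or the underlying paper; the ai_classifier function, its noise level, and the test set are all hypothetical. It shows one way to measure validity as accuracy against ground truth and reliability as run-to-run agreement on the same inputs.

```python
import random

random.seed(0)  # reproducible illustration

# Hypothetical stand-in for an AI classifier (not a real system): given a
# numeric input, it returns True/False, occasionally flipping its answer
# to simulate an unstable model.
def ai_classifier(x, noise=0.1):
    true_label = x >= 0.5              # the "correct" decision boundary
    if random.random() < noise:        # occasional inconsistent output
        return not true_label
    return true_label

# Labeled test set: (input, ground-truth label) pairs
test_set = [(i / 100, i / 100 >= 0.5) for i in range(100)]

# Validity: how accurately does the system match ground truth?
correct = sum(ai_classifier(x) == y for x, y in test_set)
validity = correct / len(test_set)

# Reliability: how consistently does the system return the same answer
# for the same input across repeated runs?
runs = [[ai_classifier(x) for x, _ in test_set] for _ in range(10)]
agreement = sum(
    all(run[i] == runs[0][i] for run in runs) for i in range(len(test_set))
)
reliability = agreement / len(test_set)

print(f"validity (accuracy vs. ground truth): {validity:.2f}")
print(f"reliability (run-to-run agreement):   {reliability:.2f}")
```

A fully deterministic system would score 1.0 on the agreement measure; the injected noise drives both numbers down, which is the kind of weakness robustness testing is meant to surface.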

She also raised issues implicated by AI that can affect the validity and reliability of AI evidence, including bias, lack of transparency and explainability, and the application of AI for a purpose for which it was not designed, termed “function creep.”

“AI is a tool. A hammer and a screwdriver are neither good nor bad,” Grossman said. “It all depends on how they are used and what regulatory and ethical framework we put around them.”

Evidentiary principles

Grimm addressed whether AI evidence should be admitted in civil and criminal cases based on standards of validity and reliability, and described how the pertinent rules of evidence apply to AI.

“Keep in mind that the rules of evidence, like all the rules of practice and procedure, are not designed to be technology-specific,” Grimm said. “In fact, there are only two rules of evidence specifically drafted to be able to deal with technical circumstances, but even those were drafted generally. The reason is because it can take years to get new rules enacted and technology changes so fast and is used so quickly that it outstrips the ability of courts to be able to assimilate how legal principles — which change in very small increments — deal with it.”

The first level of inquiry for any evidence is relevance and prejudice. Grimm described the heightened importance of Rule 403 – Excluding Relevant Evidence for Prejudice, Confusion, Waste of Time, or Other Reasons – in evaluating AI evidence. He explained that even relevant, and therefore presumptively admissible, evidence may be excluded if the danger of unfair prejudice, of misleading or confusing the jury, or of needless redundancy substantially outweighs the probative value of the evidence to the case.

“The risk of an incorrect result is what you work backwards from. You ask: ‘What is this AI information designed to be doing in this case?’ And if it’s wrong, what’s the risk of the adverse outcome to the person on the other side of the case against whom this evidence would be introduced?” Grimm said. “If the risk is something that is so great that you are concerned that you really have to get it right, otherwise there would be an unfair result, then you’re going to want to ask more probing questions and insist on a greater showing of validity and reliability.”

Authenticity

Grimm next discussed Rule 901 – Authenticating or Identifying Evidence as the primary evidentiary hurdle for the admissibility of AI evidence.

“The rule that deals with technical evidence that overarchingly is the primary ground for controversy and dispute and importance in terms of getting AI evidence in or out deals with the rules of evidence focusing on authenticity,” Grimm said.

Authenticity requires the party seeking to introduce evidence to show that the evidence is more likely than not what it purports to be – the preponderance standard. That showing can be made through a witness with personal knowledge (Rule 602) or a person who qualifies as an expert (Rule 702).

“In the last 10 years, I haven’t had any significant case go to trial that didn’t have some component of very significant technology involved,” Grimm said. “I see witnesses all the time getting up on the stand purporting to talk about how some technical evidence works when I have a strong suspicion that they have no idea how it works. They’ve just been told that it works this way and they’ve been trained to use it a certain way. And they may be proficient in the steps you have to take to use it, but they have no idea how it actually works.”

AI evidence that is not valid and reliable is not relevant, because it then has no tendency to prove consequential facts. Grimm discussed a methodology for laying a sufficient foundation for validity and reliability, which entails “borrowing” from the rules of evidence that set the standards for admitting scientific, technical, or specialized information: determining whether the evidence will assist the jury by demonstrating that the AI evidence was produced by a system that generates reliable results.

“When analyzing the admissibility of AI evidence, we cannot afford to approach artificial intelligence with genuine stupidity,” Grimm said.
