AI and the Law: The Jury's Still Out

Exploring the potential benefits—and risks—of using generative artificial intelligence in legal services

The transformative and disruptive possibilities of generative artificial intelligence (AI) have alternately captivated, alarmed, and entertained us.

In the form of large language models (LLMs)—such as OpenAI's ChatGPT and Google's Bard—generative AI has wowed millions with its ability to seamlessly mimic human speech and language, transforming our understanding of and relationship with traditional technical, creative, and professional workflows.

In law and legal services, LLMs could be a game-changer. The industry is overwhelmed by time-consuming, traditionally manual tasks, such as conducting research; drafting, reviewing, and revising legal memos, briefs, and contracts; and identifying and summarizing relevant statutes, regulations, and case law. With systems that leverage generative AI, these tasks could be completed in minutes.

In fact, a March 2023 LexisNexis survey found that 39 percent of US attorneys and 46 percent of law students believe generative AI tools will increase their efficiency. A majority of respondents suspect the technology will fundamentally revolutionize the entire practice of law.

Yet, in a field driven by human analysis and reasoning, with stakes as high as a defendant's innocence or guilt under the rule of law, these technologies pose more questions than answers. What are the limits of generative AI? What responsibility do legal professionals have to understand these technologies?

Northwestern engineers are collaborating with faculty at the Pritzker School of Law to explore these questions—with a focus on taking the right actions to pave the way for the future.

The Limits of ChatGPT

At issue is what LLMs like OpenAI's GPT-4 can and cannot do. They can use deep learning to predict language and produce conversational text from a user prompt. They can also respond to a question with authoritative language that might have no connection to ground truth.
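
To see why, it helps to look at the generation loop itself. The sketch below is a deliberately tiny toy model, not any vendor's actual system: it samples a statistically likely next word given the words so far, and nothing in the loop ever checks the output against facts.

```python
import random

# Toy next-word probabilities, standing in for the billions of parameters
# a real LLM learns. The generation loop is the same idea at any scale:
# pick a likely next token given the context, append it, repeat.
NEXT_WORD = {
    "the": {"court": 0.5, "statute": 0.3, "defendant": 0.2},
    "court": {"held": 0.7, "ruled": 0.3},
    "held": {"that": 1.0},
    "ruled": {"that": 1.0},
    "that": {"the": 1.0},
    "statute": {"requires": 1.0},
    "defendant": {"argued": 1.0},
}

def generate(start: str, steps: int = 6) -> str:
    words = [start]
    for _ in range(steps):
        choices = NEXT_WORD.get(words[-1])
        if not choices:
            break
        # Sampling by probability optimizes for fluency; no step anywhere
        # verifies that the resulting sentence is true.
        words.append(random.choices(list(choices),
                                    weights=list(choices.values()))[0])
    return " ".join(words)

print(generate("the"))  # e.g., "the court held that the statute requires"
```

The output reads like a legal sentence because the statistics say those words belong together, which is exactly the fluency problem Hammond describes.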

The McCormick School of Engineering's Kristian Hammond notes that ChatGPT's remarkable promise as a pattern-matching statistical tool has overshadowed this crucial shortcoming, leading to unrealistic expectations about the technology's usefulness in the legal space.

"People trust it to tell the truth, when in fact, these systems were never designed to tell the truth. They were designed to be fluent. And it turns out, if you're fluent, sometimes you tell the truth because you know how words connect to each other," says Hammond, Bill and Cathy Osborn Professor of Computer Science.

"What I worry most about with regard to generative systems and the law right now is a misunderstanding: If I think that a device is an information system or is a repository of fact, then I will treat it that way. But if it's not, then I will trust it to give me the right information, and it won't."

Generative AI products like ChatGPT are also trained only on publicly available sources, which inherently limits their knowledge; that training data captures only a small portion of the material relevant to specific case law.

"Generative AI can't reliably do the things involved in the bare minimum of good legal analysis and writing: accurately laying out the facts of the case, accurately describing the law that applies to the case, and applying that law to those facts to come to a conclusion," says Joseph Blass (JD, PhD '23). "Large language modules are not capable of critical thinking, and legal reasoning requires critical thinking."

Blass currently serves as a judicial clerk with the Honorable Joshua Deahl, District of Columbia Court of Appeals. He was advised at Northwestern by Ken Forbus, Walter P. Murphy Professor of Computer Science.

Those in the legal sector who use LLMs without considering these issues face consequences. In June, sanctions were imposed on two New York attorneys who filed a legal brief containing six fictitious case citations generated by ChatGPT, underscoring the need for rigorous scrutiny of the outputs of computational systems.
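
That episode suggests one concrete safeguard: treat every model-drafted citation as unverified until it has been checked against a trusted source. The sketch below is illustrative only; the citation pattern is simplified, and the verified set stands in for a lookup against a real citator or case-law database.

```python
import re

# Stand-in for a lookup against a real citator or case-law database.
VERIFIED_CITATIONS = {"410 U.S. 113", "5 F.4th 666"}

# Simplified "<volume> <reporter> <page>" pattern; real citation formats vary widely.
CITATION_RE = re.compile(r"\b\d{1,4} (?:U\.S\.|F\.(?:2d|3d|4th)|S\. Ct\.) \d{1,4}\b")

def audit_draft(text: str) -> list[str]:
    """Return citations in a draft that could not be verified."""
    return [c for c in CITATION_RE.findall(text) if c not in VERIFIED_CITATIONS]

draft = "As held in 410 U.S. 113, and reaffirmed in 999 F.3d 123, ..."
print(audit_draft(draft))  # ['999 F.3d 123'] -- flag for human review before filing
```

A check like this does not replace reading the cases; it only guarantees that a human looks at anything the model may have invented.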

"As lawyers, we're required to know the benefits and the risks of the technology we use," says Dan Linna, senior lecturer at Northwestern Engineering and the Pritzker School of Law. "Lawyers need a functional understanding of these tools, including so that they can ask good questions of the developers about things like the training data and how AI systems have been validated and how they will fail."

Linna notes this doesn't mean that those in law cannot use LLMs to their advantage—they just must be informed of their benefits and risks. He says that even ChatGPT can assist with certain tasks, such as improving the clarity and persuasiveness of writing, and that legal technology startups and traditional information providers are building reliable generative AI tools for a range of tasks, from legal research to contract drafting.

Helping Legal Professionals Navigate the Risks

To help legal professionals navigate these issues and understand how ChatGPT and generative AI could change legal services, Northwestern Engineering and the Pritzker School of Law held an interactive executive education class in April.

Led by Linna and Hammond, the course addressed several topics, including developing a fundamental understanding of how ChatGPT and similar LLMs work and assessing the value of leveraging generative tools. Thirty-three legal professionals attended—representing law firm leadership, consulting companies, and legal services providers.

"People closest to the work need to understand these tools and experiment responsibly to learn how they can use them to provide greater value to the clients they serve," Linna says.

That value could include LLMs that quickly generate or revise text in legal memos or that extract and summarize pertinent information from contracts and briefs. Attendees also learned how AI-based chatbots powered by specialized knowledge bases could quickly answer legal research questions and foster brainstorming. These use cases, according to Hammond, reinforce generative AI's potential as a supporting tool for legal professionals.
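
A common pattern behind such knowledge-base chatbots is retrieval-augmented generation: the system first pulls relevant passages from a vetted collection, then instructs the model to answer only from what it retrieved. A minimal sketch follows, with a toy word-overlap retriever and hypothetical knowledge-base entries; the final call to a model is product-specific and left out.

```python
# Hypothetical, human-vetted knowledge base (real systems index thousands of
# documents and use semantic search rather than word overlap).
KNOWLEDGE_BASE = [
    "Tenants must receive written notice before a lease termination.",
    "Security deposits must be returned within the statutory deadline.",
    "A lease cannot waive the tenant's right to a habitable dwelling.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank passages by crude word overlap with the question (toy retriever)."""
    q = set(question.lower().split())
    ranked = sorted(KNOWLEDGE_BASE,
                    key=lambda p: len(q & set(p.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(question: str) -> str:
    passages = "\n".join(f"- {p}" for p in retrieve(question))
    # Confining the model to retrieved passages keeps every claim traceable
    # to a source a lawyer can check.
    return ("Answer using ONLY the passages below; if they do not contain "
            "the answer, say so.\n"
            f"{passages}\n\nQuestion: {question}")

print(build_prompt("When must my security deposit be returned?"))
```

Grounding answers this way is what separates a research assistant a lawyer can audit from a model improvising from its training data alone.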

"If the attendees thought, 'We're going to get ourselves a language model, and we're going to take all of our contracts and train it'—it's just not going to work because it doesn't know how to reason like a lawyer," Hammond says. "There's this idea that you can make these systems better by just showing them more and more documents. But you don't teach someone to be a lawyer by having them read a whole bunch of contracts or a whole bunch of regulations. They've got to reason better than that."

"As generative AI continues to evolve, lawyers and computer scientists need to collaborate to understand its possibilities and risks," says Hari Osofsky, dean and Myra and James Bradwell Professor of Law at the Pritzker School of Law. "We're excited to partner with the McCormick School of Engineering at the interface of law and legal practice with emerging technology, and I am grateful to Professor Linna and Professor Hammond for their leadership and innovative contributions."

Promoting Student Collaboration to Create Best Practices

A better understanding of the value and limits of generative AI in the law is also being fostered in Northwestern Engineering classrooms, where students are learning how to best use LLMs. Last winter, Hammond and Linna co-taught the Innovation Lab: Building Technologies for the Law course, which brought computer science undergraduate and graduate students together with law students to develop and deploy client-focused technology solutions.

Working with clients ranging from Adobe and Thomson Reuters to law firms and legal aid organizations, eight interdisciplinary teams applied computational technologies, including ChatGPT and other LLMs, to augment and automate a range of legal tasks, including drafting and reviewing contracts and providing legal guidance to businesses and individuals.

"The most important lesson we teach the students is: don't start with the technology," Linna says. "One of the ways we can contribute value for our project partners is to make sure we deeply understand the crux of the problem. Our clients may have a particular idea of what the solution is, but maybe there's another pathway to it."

One student team partnered with Berkeley Research Group to develop Contract Genie, which leverages ChatGPT to augment the drafting of compliant, industry-standard employment contracts within minutes. Another group collaborated with the Law Center for Better Housing to improve the natural language taxonomy of the LCBH Rentervention chatbot, a free resource designed to help Illinois renters diagnose their legal housing issues, understand their rights, and explore solutions.
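
Neither project's code is public, but the drafting pattern Contract Genie illustrates is straightforward to sketch: collect the structured facts of the engagement, then fill a template that asks the model for a first draft a lawyer will review. Everything below (field names, clause list) is hypothetical.

```python
# Hypothetical fields an intake form might collect before drafting.
employment_terms = {
    "employer": "Acme Corp",
    "employee": "Jane Doe",
    "role": "Software Engineer",
    "state": "Illinois",
    "clauses": ["at-will employment", "confidentiality", "IP assignment"],
}

def drafting_prompt(terms: dict) -> str:
    clause_list = "\n".join(f"- {c}" for c in terms["clauses"])
    # The model produces a first draft only; a lawyer reviews every clause
    # for compliance before anything reaches a client.
    return (
        f"Draft an employment contract between {terms['employer']} and "
        f"{terms['employee']} for the role of {terms['role']}, governed by "
        f"{terms['state']} law, including these clauses:\n{clause_list}\n"
        "Mark any clause whose enforceability varies by jurisdiction."
    )

print(drafting_prompt(employment_terms))
```

The structure matters more than the wording: the facts come from a vetted intake process, and the output is a starting point for attorney review, not a finished contract.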

"Computer scientists can be very insulated with code-specific problems or algorithmic abstractions," says Siddharth Saha, an undergraduate student pursuing a degree in computer science from Northwestern Engineering and another in mathematics from Northwestern's Weinberg College of Arts and Sciences. "Seeing problems faced by real people up close can provide the most clarity on how to develop the best kinds of solutions. Effective channels of communications between lawyers and computer scientists are becoming crucial in a legal setting increasingly influenced by big data."

Finding the Right Path Forward

As with any landmark technology, the hype around LLMs may outpace their capabilities. Yet, as short- and long-term questions swirl around how generative AI will change the legal field, Linna advises leaders in the space to remain focused on taking action.

"We're making decisions right now that will play a huge role in determining how AI is used to make society better by improving the law, the courts, the delivery of legal services, and access to justice. So, let's get started," Linna says. "We must be proactive. Lawyers and legal industry professionals need to collaborate with others to move the profession forward. There's so much positive action that can be taken. If we want to have a seat at the table, now is the time to act."
