Building a Bridge Between AI and Law

Director of Law and Technology Initiatives Daniel W. Linna Jr. explores how students in Northwestern Engineering's Master of Science in Artificial Intelligence (MSAI) program can help organizations better understand the legal and ethical challenges that come with implementing AI technologies.

A survey of more than 3,000 CIOs from around the world found that from 2015-19, the number of enterprises that implemented artificial intelligence (AI) increased by 270 percent. As technological capabilities evolve and become more widely available, AI will only grow more prevalent.

There are countless opportunities for technologies that leverage AI and machine learning in just about any industry, including law, said Daniel W. Linna Jr., senior lecturer and director of law and technology initiatives at Northwestern University's McCormick School of Engineering and Pritzker School of Law. Linna thinks computational technologies can revolutionize how legal aid is provided in the United States, where more than 80 percent of impoverished people and more than 50 percent of the middle class lack access to legal services.

Linna, who teaches courses in the Master of Science in Artificial Intelligence (MSAI) program, envisions applications for legal services similar to software that assists people with filing their taxes. With those programs, users benefit from the knowledge and expertise of CPAs and other tax preparers, which knowledge engineers have incorporated into the software. This enables users to file tax returns and get advice in response to a wide range of questions.

"There is an abundance of areas in law where we can do the same thing," Linna said.

Applications are in development to help with tenant-landlord relations, for example. If the water in a rented apartment stops working, at what point can the renter withhold rent? Linna explained that with an app, renters could ask their question and find out how to make a proper demand for repairs and take appropriate steps to legally withhold rent. AI is already used to help landlords screen tenants as well as automate the application and leasing process. Tenants could greatly benefit from software tools that help them solve and prevent legal problems, he said. Renters could also benefit if landlords were expected to use similar applications to guide them to comply with applicable law when dealing with tenants.

"We're just scratching the surface of the ways these tools could be helpful," Linna said.  

As AI tools become more abundant in all areas of services, so too do questions about their ethical use, from biases that can be incorporated during the design and development process to the inherent risks that arise from their implementation. Research exposing bias in facial recognition and high-profile errors have led to jurisdictions such as San Francisco; Boston; Oakland, California; and Portland, Oregon, passing legislation that bans city government from using technologies that include facial recognition. Facebook, meanwhile, continues to be challenged about the role its algorithms play in political polarization in this country.

In 2019, US senators Cory Booker (D-NJ) and Ron Wyden (D-Ore.) partnered with US Representative Yvette Clarke (D-NY) to introduce the Algorithmic Accountability Act, which "requires companies to study and fix flawed computer algorithms that result in inaccurate, unfair, biased, or discriminatory decisions impacting Americans." In January 2020, Microsoft president Brad Smith, speaking at the World Economic Forum, said, "We should not wait for the technology to mature before we start to put principles, and ethics, and even rules in place to govern AI."

A host of questions arise about what these policies and regulations should look like, from debates about privacy to the definition of AI itself. As more businesses confront these questions about using technology responsibly, it's important to have individuals available who understand both the technical and legal sides of the conversations. Linna believes MSAI students and alumni can be that bridge.

"We're training machine learning engineers to understand how those conversations take place across industries," Linna said. That knowledge can improve students' marketability and career satisfaction; it can open up an enormous world of opportunities for them, he said.

Engineers cannot design and build a product first and then later consider whether it respects users' privacy and rights. The product must be designed from the beginning with users — and their privacy, rights, and other ethical considerations — front of mind.  

"How do we create guardrails to steer and develop these AI tools so that they will be beneficial to society?" Linna said. "Right from the beginning, you need lawyers, compliance people, and engineers talking about how to make sure privacy and ethical principles are included by design.

"I want MSAI students to be part of this broader conversation about how we can use AI tools to empower people, respect human rights, expand access to legal services and justice, and promote good in the world."

McCormick News Article