Much ado has been made about the rise of artificial intelligence in the legal sphere, but discussions of threats and potential safeguards have been largely confined to national discourse. Artificial intelligence, however, recognizes no borders; transnational problems require transnational solutions, and it remains unclear who or what will intercede.
At the Second Annual BC Law LLM Conference on April 12, scholars from around the world shared their perspectives on the limitless opportunities—and pitfalls—that accompany the rise of generative artificial intelligence models like ChatGPT. The event, organized by BC Law LLM students and co-sponsored by the newly invigorated International Law Society in partnership with the firm Cleary Gottlieb, raised fascinating philosophical questions not simply about artificial intelligence’s place in the legal field, but also about its ethical and moral implications for humanity writ large.
In her introduction to “Artificial Intelligence and Human Values: Law at the Intersection,” Dean Odette Lienau shared an update on the role AI will play at BC Law in the near future. She outlined four “broad goals” for students: integrating AI into legal practice courses, training students to assess the quality of work product generated by AI tools, cultivating a sense of how those tools fit into a broader legal strategy for clients, and, most importantly, creating a shared understanding of the ethical implications of these tools for legal practice and society more broadly. Although no clear timeline was offered, it was clear that the administration has been working diligently to craft and implement a coherent approach to AI as soon as possible.
BC Law Professor Frank Garcia, who moderated the conference and town hall discussion, drew attention to one of the great quandaries that AI poses: “In an ideal world, if the law is working, there is significant overlap between what is legal and what is ethical and what is responsive to our deepest moral values. But given the current state of [artificial intelligence] regulation, those do not necessarily overlap at the moment,” he said.
The first presenter was Professor Sateesh Nori, a clinical instructor at New York University School of Law and a legal services attorney with more than two decades of tenant rights advocacy under his belt. In that time, Nori helped more than 1,000 families—“I’ve never lost a case… I’ve just come in second place many times,” he quipped—secure top-notch legal representation they could not otherwise afford. Improving access to justice is Nori’s lodestar—indigent criminal defendants are provided an attorney by the state, but no such right exists in civil cases—and he believes artificial intelligence will go a long way toward bridging the gap.
According to a recent study by the American Bar Association, 92 percent of Americans’ civil legal needs are going unmet. “In America, there is one free lawyer for every 10,000 people. Americans spend more on Halloween costumes for their pets every year than the federal government spends on legal services,” Nori said with incredulity. “Lawyers in the field are burning out due to high stress and low pay—they’re on the brink of revolt.”
What if artificial intelligence could change all that? “In Washington, DC, there are forty nonprofits that offer legal aid to low-income residents, each with its own website and knowledge base,” Nori said. “With generative artificial intelligence, you can create a chatbot to extract the best answer from all forty of those websites in one second, in any language. That will revolutionize the delivery of legal services to those who need help.”
The legal field, meanwhile, remains as risk-averse as ever; the industry has a reputation for being a last adopter, not a first. Not too long ago, according to Nori, the first lawyer who emailed a client faced disbarment, a sign that this industry-wide risk aversion can err on the side of hysteria and irrationality. That is not to say caution ought to be thrown to the wind, but lawyers must engage in a serious and thoughtful cost-benefit analysis of the nascent technology.
Zhiyu Li, a professor at Durham Law School in the United Kingdom who is on sabbatical at Boston College, has dedicated her scholarship to broadening our understanding of the Chinese legal system. She painted a vivid picture of the modern Chinese judiciary, which formally adopted a “smart courts” initiative in 2016 to modernize and streamline access to justice for the nation’s enormous population.
Li said that China has embraced big data, artificial intelligence, cloud computing, and blockchain technology on a national scale in a way that the United States has not. Courthouses are equipped with computer kiosks said to be capable of predicting the cost of litigation and the likelihood that a given litigant will win their case, based on AI-generated analyses of relevant legal sources and prior court decisions. Visitors to Beijing’s No. 1 Intermediate People’s Court are greeted by Xiaofa, a five-foot-tall android that speaks in a child’s voice to ease litigants’ agita. If the future isn’t quite here, one thing seems certain: The future is there.
“On the one hand, artificial intelligence brings law closer to people, and increases popular access to justice; would-be litigants can better understand their rights and know whether their claims are legitimate,” Li said, but she went on to caution the audience. “On the other hand, AI could mistakenly discourage potential litigants from pursuing legitimate claims, and it downplays the important role that lawyers play in identifying legal strategies and loopholes.”
Professor Jose Ignacio Hernández of Andrés Bello Catholic University and the Central University of Venezuela proposed a pathway to international cooperation and joint regulation of artificial intelligence modeled on the International Atomic Energy Agency, an autonomous organization that operates under the United Nations umbrella. An international administrative lawyer by trade, Hernández sees artificial intelligence as an issue of global import—it is no coincidence that the template for a potential AI regulator is a body created to police the last existential, apocalyptic weapon humanity devised—and one that will require every nation to act in lockstep.
“Artificial intelligence is a great tool for public administration, but it also represents one of the most pressing collective action problems of our time,” Hernández said. He drew a sobering analogy to climate change, another collective action problem that has gone unsolved for want of a strong, unified regulatory hand: “There is no international governance with the ability to deal with climate change … the United Nations has largely failed in that regard.”
That history did not deter Hernández from taking an optimistic stance on the ability of the world’s governments to come together and address the problem, however. “Artificial intelligence is new to be sure, but the principles of administrative law are not,” he concluded.