Generative AI took the world by storm when OpenAI released ChatGPT in late 2022 and then, in March 2023, upgraded it with GPT-4, the fourth generation of the company's large language model (LLM). Although GPT-4 was a more reliable and creative iteration of its predecessors rather than a wholesale reinvention, the technology captured the imagination—stoking some deep fears about technology in the process—and saw wide adoption for the first time, largely among Gen-Z and millennial Americans.
In anticipation of the release later this year of GPT-5, which many expect to push the technology another leap forward, a trio of panelists convened April 8 at a BC Law-sponsored event entitled “From the C-Suite: How AI is Changing Legal Practice.”
The group acknowledged that serious national regulation remains a distant prospect, and that many industries have failed to grapple with the so-called Fourth Industrial Revolution that will fundamentally alter the way people live and work in the 21st century. It is therefore vital, they said, that the newest generation of legal practitioners, with their heightened understanding of both risk and the potential for growth, take the reins of this technology and shape the discourse around the future of generative AI.
The panelists, who represented a broad cross-section of the legal field, shared their experiences with implementing early generative AI policies and technologies at law firms and in-house. Their discussion was moderated by BC Law Dean Odette Lienau.
Given the risk-averse nature of legal practitioners, it should come as no surprise that law firms have resisted the inevitable march of progress, as Edward Black, senior counsel and technology strategy leader at Ropes & Gray, pointed out. “Candidly, a lot of people inside law firms just don’t think about the deployment of tech in the delivery of legal services,” he said. “If you want to see a doctor today, it’s all through a website, but if you want a lawyer, you’ve got to pick up the phone. At least, that’s where we were two years ago, and it’s been a major cultural challenge.”
Damon Hart ’99, executive vice president, chief legal officer, and secretary at Liberty Mutual, was bullish on the future of Gen AI in in-house legal departments: “One of the things [Gen AI] does well is summarize, and there’s a big opportunity there, between outside counsel management, billing, contract drafting, and due diligence,” he said. “Now, one attorney and a paralegal can do what used to take a whole team of attorneys to do, in terms of volume and scale.”
Tricia Wood ’96, chief privacy officer at Liberty Mutual, expressed concerns about confidentiality when Gen AI is widely adopted by the legal industry. “If, for example, I included my client’s information in a question with ChatGPT, where does that data go? My question could be used to further train the model and, in a worst-case scenario, that confidential information could end up being the response to someone else’s question,” she warned.
Confidentiality isn’t the only aspect of the Gen AI issue that keeps Wood up at night. So-called AI hallucinations—a polite way of describing the technology’s habit of confidently inventing “facts” from whole cloth—pose a particular risk to legal practice; a pair of New York attorneys have already been sanctioned for submitting a brief that contained six entirely fictitious cases hallucinated by ChatGPT.
And that’s not to mention allegations of inherent bias in Gen AI. “Facebook has been accused of using AI to generate job listings that are rife with gender bias—showing advertisements for preschool teachers only to women, or mechanic openings only to men—because the data it was trained on is fifty years old, and it naturally attempts to perpetuate those dated biases and stereotypes,” Wood said.
Potential qualms with the technology aside, the panelists were united in their firmly held belief that Gen AI is the way of the future, especially in the legal field. “We must realize both the benefits and the limitations, and ensure that there’s human accountability. There must be a human in the loop. ‘But the AI told me…’ won’t be an excuse,” Wood said.
And for members of the legal profession who fear an impending wave of mass unemployment, Black offered a salve: “The jobs aren’t going away, they’re simply shifting… junior lawyers are often tasked with reviewing documents and summarizing troves of data, the sort of tasks that are best-suited for AI. Moving forward, attorneys will have greater bandwidth to work on actual client and business analysis.”
The implication that the days of parsing hundreds of leases or contracts may be numbered ended the discussion on an optimistic note.