Ross Martin ’95 opened his contracts drafting class this past fall with a decidedly Papa’s-Got-a-Brand-New-Bag overture. As soon as his students hit their seats, the adjunct professor blurted out the obvious—acknowledging that whatever anyone thinks about the degree to which academic integrity and AI-supported work products are educational counterforces, open-source AI use is a bell that can’t be unrung.
You can’t simply tell students they can’t use it, Martin says. “I told them ‘I know all of you are worried that, institutional guidelines aside, somebody else in the class is using it and no one will be able to tell, so [we’re all going to use it].’ Every pair of shoulders in that room dropped. You’re communicating to them that you understand the playing field they’re on, and that you’re there to facilitate. Accept that your students are worried about it and figure out how you engage with AI. Just let ’em use it.”
Martin’s classroom isn’t some macabre laboratory where Westlaw’s AI tool CoCounsel is put to use to create a doctrinal Westworld. He understands the institutional hurdles and the ongoing development of the Law School’s protocols for engaging with AI. He gets the security and privacy concerns. And most importantly, he’s instituted a non-negotiable: closed-book quizzes in class, a closed-book midterm, a closed-book final—old-school law school. The point being, his students still have to learn the legal principles and understand their application. AI can help them get there, but they’re assessed on their own skills.
Martin’s approach is a reasonable entry point for a much broader conversation—one that is as overwhelming and existentially fraught as it is enticing and mind-bending. It’s also a discourse burdened by questions large and small, many of them as yet unanswerable. The biggest is whether this next-generation technology will bend Martin Luther King Jr.’s “moral arc of the universe” toward or away from justice.
Beneath that firmament are countless considerations. For instance, what are the implications of this tech for law students and how might that impact the learning environment at Boston College Law School? What are law firms’ expectations pertaining to graduates’ AI competencies? Can any higher ed institution hope to design principles and guidelines for artificial intelligence use that are supple enough to be relevant a month after they’re issued?
Deeper still, where do LLMs (large language models) and agentic AI (artificial intelligence systems that can operate independently) fit into the Jesuit mission, and how does the formation of the whole person play out with AI riding shotgun? Pope Leo opined on that in his November message to the Artificial Intelligence Forum 2025 in Rome, where he called upon attendees engaged in AI development and employment to ensure systems serve “human dignity, justice, and the common good … reflecting God the Creator’s design: intelligent, relational, and guided by love.”
It’s tempting to prompt an AI tool to weigh in on any of the above inquiries. Instead, BC Law is taking a proactive, human-centric approach. Barely a year after OpenAI released GPT-4 in March 2023, the Law School sponsored a forum titled “From the C-Suite: How AI Is Changing Legal Practice.” Specific pedagogical ideas and guidelines were circulated to faculty in 2024, while the library’s technology team developed trainings and workshops and created an online hub of AI resources for faculty and administrators. That fall, the theme of the Law School’s Sixth Annual International IP Summit (co-sponsored by Ropes & Gray) was AI in Law and Business.

“This is one of the most transformative technologies humans have created. From the scary to the super-practical, we need to identify what the average person needs to know about the tool.”
Kyle Fidalgo, academic technologist
In the spring of 2025, Dean Odette Lienau formed an Artificial Intelligence Task Force of faculty, legal technologists, and administrators to study these issues and make recommendations. It convened for the first time this past fall. Although its work is ongoing, the task force tackled three main areas for research and recommendations: curricular innovation and student formation; AI in faculty research and scholarship; and leveraging AI in support of administrative work across BC Law. All of it is constructed through the lens of BC Law’s Jesuit mission of human-centered lawyering and ethical practice.
Meanwhile, talented staff and faculty have worked the issue in other contexts, from the Law School’s Center for Experiential Learning (CEL) to its Law Practice program to its individual classrooms. Universitywide, BC’s Center for Digital Innovation in Learning and the Lynch School of Education and Human Development’s Purpose Lab have conducted independent research into AI engagement.
Then there’s BC Law Academic Technologist Kyle Fidalgo, who is as close to a machine-learning shaman as anyone on the payroll. Fidalgo says the hardest part of talking about AI is how expansive the subject matter is. He’s also mindful of the downsides and counsels caution, noting that we’re not accustomed to a technology that can string us along the way a human can.
“This is one of the most transformative technologies humans have created,” says Fidalgo. “From the scary to the super-practical, we need to identify what the average person needs to know about the tool. Everyone needs a base level of AI literacy and fluency skills and, here at the Law School, they can be tied to core competencies students need to work in the field. That means getting them talking to AI. A good way to think about this tech is that it can be both the vehicle to get you where you want to go, and also your guide.”

AN ETHICAL DUTY
Claire Donohue, the associate dean of experiential learning at the Law School, describes a fragile equilibrium when it comes to student engagement with AI and preparing them for the profession. CEL’s mission is to bring together traditional classroom methods and practical applications via first-hand experiences in externships, in-house clinics, and advocacy competitions.
“We need to consider the ways AI is being used by potential employers and examine whether we’re being responsive to students’ needs,” says Donohue. “In what ways might AI actually improve the center’s capacity and responsiveness? We’re working hard to develop best practices in the AI space.”
Assistant Professor of Law Raúl Carrillo has allowed student engagement with AI from the outset because of what he views as “my ethical duty” to ready them for the professional world they will actually enter. “I’m not preparing them for now, I’m preparing them for the world of 2028—or trying to do as best I can—because they are 1Ls.”
Carrillo’s stance is by no means a full-throated endorsement. A law and technology scholar with a focus on fintech before coming to academia, he’s been a vocal AI critic in his writing. However, he’s introduced multiple classroom exercises rooted in the tech, including students making an argument before custom-built bot judges. “I think that BC, with a squarely Jesuit mission, has a lot to think about,” he says. “I think the way AI development is going is deeply socially and ecologically problematic. But I have to balance that with the fact my students are going to encounter tensions about this in their future employment and within themselves as well. As an educator, you simply have to deal with this.”
Carrillo is treating his classroom like what gamers would call a “building sandbox,” a software genre where users freely construct, experiment, and create without real-world consequences. With Fidalgo as his guide, he used a Claude AI assistant to build an enriched version of the canonical 1960 contract dispute, Frigaliment Importing Co. v. B.N.S. International Sales Corp., a disagreement over the meaning of the English word “chicken.” Using AI, Carrillo built a fictional three-party lawsuit over a contract with multiple terms open to interpretation (in this case, the issue was: what constitutes duck?).
Students could access all manner of AI-fabricated documents detailing the history of the parties, including spreadsheets identifying different species of duck, logs of communication between litigants, and accompanying records that brought the “contract” to life.
“There are some topics in contract law that are very difficult to do exercises around or to provide an exam question about, especially the interpretation and interpretability of contracts,” explains Carrillo. “It’s very complicated and time-consuming to build a ‘world’ that has enough context for the testing of interpretation to be helpful. In this AI-supported framework, students could interact with all this stuff I never could have created myself. When the context is richer, when there are more facts, the interpretation feels more realistic—even if it’s within a closed universe that Kyle [Fidalgo] and I created.”
Meanwhile, the Law School task force’s findings reflect the parameters of Dean Lienau and the school’s early thinking. Its work is guided by a set of principles for interaction. Lienau notes that graduates must 1) understand how to use appropriate AI tools to assist in the production of particular legal language or documents (and be trainable as tools evolve); 2) be equipped to assess and benchmark a tool’s work product; 3) anticipate how AI tools might fit within a broader client strategy; and 4) weigh the tech’s ethical implications for the legal profession and the rule of law.
“How to teach, when to teach, and who should teach these tools to our students has to be grounded in the traditional legal-learning model,” she says. “As you progress in your legal career, those human elements of lawyering become increasingly important. That’s the element you cannot outsource.” Therefore, she argues, BC Law needs to ensure that its students have top-flight legal skills and the high-EQ elements of lawyering—the capacity to listen, the capacity to connect, the capacity to absorb an array of disparate information—“whether it’s the technical legal analysis, the context of legal analysis within a particular client problem, or the nuance that’s inevitable in any human interaction.”
The Law School’s Law Practice program may not seem like an obvious petri dish for student-AI engagement. The six-member faculty’s curriculum is built for students to learn fundamental skills in legal reasoning, critical reading, and writing—precisely the tools they’ll need to objectively evaluate AI when they use it as practitioners. Department policy currently prohibits AI use in generating any student work. Nonetheless, the program as a whole recognizes an unavoidable, learn-by-doing component to generative and agentic AI.
Associate Professor of the Practice Maureen Van Neste helped develop a unit in collaboration with colleagues Lis Keller and Joan Blum that challenged students to explore how a lawyer might use AI tools not only efficiently and in service of their client, but also ethically. “The entire Law Practice faculty is mindful of the fact students may go to summer placements where they have access to these tools, so our work would be incomplete if we didn’t train them how to critically assess the output and think about ways to efficiently use AI,” says Van Neste.
As part of the exercise, students are trained on Westlaw’s CoCounsel tool and LexisNexis’s AI-enabled Protégé, then work through various research problems in class—for example, drafting an AI-assisted research email for a supervising partner. They’re required to assess the output, independently verify its accuracy and completeness, and articulate the steps of that vetting process. Discussion points include additional verification measures and ways to improve the output’s accuracy and/or the format’s accessibility to a client.

“I think it could only make lawyers better if we know how to use (AI) to work in our favor, and I think we’re at the beginning stages of that. The law is unique in that we have this layer of ethics and standards that govern us as we interact with AI. And there’s no substitute for human judgment and the client-relationship aspects attorneys bring to the table.”
Emma Follansbee ’17, senior associate and AI lead at Mintz’s Boston office
“We reinforce that by giving them a graded assignment with a new research question,” explains Van Neste, also a member of the dean’s task force. “They’re required to submit the original output, their critique of the output, a description of how they verified whether the content was correct and comprehensive, and, finally, their revised output that is correct, comprehensive, and in a client-friendly format.”
Blum, Keller, and Van Neste were awarded BC Law’s 2025 Faculty Prize for Innovation in Pedagogy, which annually recognizes creative approaches to teaching methods and new topics. Their work will continue as part of ongoing collaboration with colleagues on the Law Practice faculty as the curriculum evolves alongside new technologies.
SEISMIC ACTIVITY AT LAW FIRMS
The law firm Mintz has been thoughtful about introducing acceptable AI use and continuing education for its attorneys, with professional rules at the forefront, according to Emma Follansbee ’17, a senior associate in the firm’s Boston office.
“Speaking personally, I think we have an obligation to learn it, master it, and understand how it can help us do better work for our clients,” she says, noting the firm has a Head of Innovation, AI, and E-Data Consulting who’s hip-deep in ensuring Mintz adapts to AI’s accelerating arrival as seamlessly as possible in an ethical, responsible, and client-forward manner. “Based on my own experience, I think it could only make lawyers better if we know how to use it to work in our favor, and I think we’re at the beginning stages of that. The law is unique in that we have this layer of ethics and standards that govern us as we interact with AI. And there’s no substitute for human judgment and the client-relationship aspects attorneys bring to the table.”
That sentiment resonates with BC Law’s Donohue.
“The law, in general, is a slow-moving beast, and that’s probably for the good,” she says. “We’re supposed to be thoughtful and deliberate, and we’re a self-regulating space. So, we do things with an eye toward checking ourselves as we go. I think the world of practitioners and legal counselors is a really dynamic space in terms of who’s lifting, who’s leading, and who’s resisting, and the cast of characters seems to shift,” adds Donohue.
Firms’ relative effectiveness at educating clients about AI’s morphing applications, capabilities, and limitations is virtually certain to take center stage in the profession over the next 12 to 36 months. Privilege, privacy, security, and billable hours are all part of that calculus, as is training new associates.
“I think attorney training, mentoring, and development is the biggest challenge faced by law firms with AI,” says Doug Nash ’96, who serves as chair of Barclay Damon’s AI Ad Hoc Committee. “One of the hardest parts of my job as a law firm partner is training the next generation of lawyers to be able to stand on their own two feet. That’s hard normally, let alone with the arrival of AI. I prepare for a deposition or write a brief in a certain way, and I try to impart that. If the process is part of the journey, I think everyone is struggling with the idea of inexperienced lawyers accessing a shortcut right out of the gate. There’s a lack of context and they haven’t necessarily been taught how to do things properly.”

“It’s important to understand (AI) and experiment with it and internalize its strengths and limitations, but it’s available to collaborate with you rather than compete with you. It’s certainly capable of making you a better lawyer, but it’s also capable of making you look incompetent or lazy. It does what it does. It’s up to you to check your work product.”
Colin Levy ’10, general counsel for tech company Malbek Solutions
Barclay Damon is governing this push and pull by setting boundaries for young attorneys until they’ve reached a certain level of maturity in their practice. New associates can use AI to support the generation of a given work product, but they have to disclose the platform they used, the prompts they gave, and the specific results.
“This practice affords us a way to redirect them,” says Nash, who adds that sensitivity to confidentiality issues is the prime directive prior to AI engagement. “It’s not a perfect solution, but it’s at least an avenue to attack the problem.”
“Young lawyers still need the reps, so to speak,” agrees Kati Pajak Strzelczyk ’18, who established and now leads an AI Governance program at KAYAK as counsel for AI (Innovation & Trust), Product, and Procurement. “They need to have the fundamentals. Do they understand the basis of the law? Can you recognize the clues required to issue-spot and explore unique interpretations that are the most intellectually demanding and make the law interesting? Only then should they use tech to make them more efficient.”
Another tectonic transition in play is the fact that legal clients of all stripes are sure to soon push hard for cost-saving measures within professional services fees by way of AI use. At the same time, they won’t want less-than-fully-trained attorneys handling their cases, even on an expedited basis.
“The legal profession has to come to grips with the fact that the rules we all had to apply and abide by were created in a different world than we live in now,” says Colin Levy ’10, general counsel for Malbek and a longtime leader in the legal tech space. “That has a ton of downstream impacts, including how we train lawyers, how we educate them, and, frankly, what it means to be a lawyer and do legal work.”
Levy is no Cassandra. He calls AI “inescapable,” but he doesn’t think an us vs. it mentality is productive, either for lawyers or their clients. “It’s important to understand it and experiment with it and internalize its strengths and limitations, but it’s available to collaborate with you rather than compete with you. It’s certainly capable of making you a better lawyer, but it’s also capable of making you look incompetent or lazy. It does what it does. It’s up to you to check your work product.”
KAYAK’s Pajak Strzelczyk has been engaged in issues related to AI’s impact on business and law since, well, that became a thing. She possesses a deeper sense for the tech’s ethical use and best applications than most. She’s also someone who “engages with it knowing I’m the boss” and she’s resolute in her belief that the frontal lobe needn’t ever play second fiddle to whatever lurks within the back-end algorithms.

“I think the way AI development is going is deeply socially and ecologically problematic. But I have to balance that with the fact my students are going to encounter tensions about this in their future employment and within themselves as well. As an educator, you simply have to deal with this.”
Raúl Carrillo, assistant professor
“Learning to use AI in an effective manner pays dividends because you learn how to use prompts that induce a more consistent output,” she says. “It can be a beneficial resource for junior lawyers as a training tool, too. It allows them to workshop their understanding in private so they can present a polished analysis in public. The interactive nature of AI helps you ‘beta test’ your legal theories. It creates a feedback loop that sharpens your inquiries, helping you move from surface-level questions to a more sophisticated grasp of the material. I’m an advocate for efficiency hacks and make it my mission to share lessons learned that produce process efficiencies with my colleagues.”
BC Law’s Carrillo reports the training-tool paradigm is playing out with similar benefits in the parallel universe of his classroom.
He keeps asking the students, “‘What is it that we do that we’re discovering the bot can’t do? Can it produce real legal reasoning that is organic, or is it just doing pattern recognition?’ This is a great question that we should also be asking of human lawyers. And judges. Are you really grappling with the law and the facts, or are you operating within templates, essentially? That lends itself to a host of intellectual questions about what is the law and what is lawyering, but also doctrinal questions and big practical issues they’ll face with higher stakes when they graduate.”
As a consequence, Carrillo’s students are compelled into a humanistic exercise not in spite of using the tech, but because of it. “It’s been incredible to watch them develop a sense of whether they like AI or hate AI, and recognize they have to confront it whether they want to destroy it or work alongside it, which is the more likely outcome going forward.”
THE HUMAN LEAGUE
The tension between the organized, widespread adoption of AI in legal advocacy and the time (read: sunk cost in lost billable hours) it takes to train a junior attorney in AI’s ethical, effective, protected, and outcome-enhancing use remains a stiff headwind in the industry. That reality undoubtedly shines a spotlight on legal education as the optimal incubator.
Tim Lindgren, the assistant director for design innovation at BC’s Center for Digital Innovation in Learning (CDIL), believes an academic landscape augmented by AI presents an opportunity for what BC Professor Paula Mathieu calls a “Co-Inquiry” approach, where students and professors engage with this new technology as partners. Learners in all disciplines are craving guidance, but they also bring valuable perspectives from their own experiences experimenting with AI. As Lindgren explains it, “The ability to think deeply about how you think is a much more important skill today because we have these tools now.”
That thinking reflects the essence of BC’s AI Test Kitchen, developed by CDIL with an assist from the Law School’s Fidalgo. Lindgren says the job of CDIL is to reckon with artificial intelligence from a design perspective to get folks experimenting with it and reflecting on what they learn about its capacity and themselves in the process. For the center, it’s a chance to capture insights while offering constituents a setting for both professional development and self-reflection.
CDIL has organized and hosted multiple yearlong faculty working groups since 2024; these meet monthly and afford attendees a chance to work on an AI-enabled project, reflect, share ideas, and create community. Smaller, shorter-form faculty and professional development sessions are currently rolling out. The center also offers workshops targeting faculty interested in augmenting their teaching, not necessarily with AI as the driving force, but to gain a sense of what it means for teaching, learning, and assessing student mastery of a subject. This spring, CDIL is collaborating with BC Student Affairs to host pizza discussions centered on AI.
“This is obviously a design material different from other digital affordances, platforms, and frameworks, so understanding what it does and doesn’t do well is a really important conversation within the role it’s going to have across all of Boston College and in our lives,” says Lindgren. “It’s been fascinating to have both faculty and staff attend our programming, so there’s this cross-fertilization of learning from one another. It’s crucial to keep understanding how people learn and how they are reflecting on AI, and for us to keep adapting because AI keeps changing, our work keeps changing, and some of our roles are evolving.
“There’s a lot of making it up as we go, but because we’re BC, we can embrace this deliberative approach that wants to be reflective,” he notes. “Relationships are really important as we lean into the Jesuit mission and ask, ‘Even though we don’t know where this is going, how can we strengthen bonds and find new ways to collaborate, both with each other and with AI?’”
Raúl Carrillo photograph by Caitlin Cunningham


