Why Every Law School Appointment Must Confront the AI Future

[Image: A hiring committee interviews a law school applicant, with AI lurking not far in the background]

A quiet and profound shift is occurring in the classrooms of every law school in America. Our students, facing significant academic and professional pressures, have already integrated generative AI into the very fabric of their intellectual lives. They are not waiting for our permission, our policies, or our pedagogy to catch up. This creates a growing disconnect: we often continue to teach and assess using familiar methods, while they are already living and working with new tools. But this gap is not merely the familiar tension between tradition and innovation. It is something more urgent: a collision of timescales.

Legal academia operates on a distinct, deliberate cadence. We can—and often do—spend a decade debating grading systems or the optimal structure of the 1L curriculum. These are important, worthy conversations, conducted with the careful contemplation our profession demands. Yet we are now confronted with a phenomenon that does not respect our pace. The development of artificial intelligence is not linear; it is exponential.

This collision between a slow-moving institution and a technology accelerating at a rate most of us fail to truly comprehend presents a profound challenge. It is one that calls for a thoughtful but rapid evolution in how we approach our craft—and how we hire for its future. This is not just another topic for the faculty retreat agenda; it is an immediate, structural challenge to our relevance.

I. The New Workflow and the Coming Wave

The modern law student's study process bears little resemblance to that of even five years ago. Faced with what they perceive as a mountain of dense reading, a relentless series of high-stakes assessments, and the ever-present anxiety of securing postgraduate employment to help discharge significant debt, students are making entirely rational choices. They are leveraging powerful new tools to manage their workload, gain efficiencies, and navigate a hyper-competitive environment.

At a minimum, they may use AI to get a first look at a complex case, perhaps to untangle the procedural posture of a dense Supreme Court opinion before committing to a close read. They use it to explore dense hypotheticals or to polish the prose on a writing assignment due the next day. Many faculty, of course, report much more aggressive use: students submitting entire take-home assignments written by AI, perhaps with a few feeble human tweaks.

This is not entirely a failure of character; it is a pragmatic adaptation. Students rationally reserve their finite human attention for the tasks they deem highest-value. This observation is well-supported by trends across higher education—one recent survey found that 92 percent of British undergraduates now use AI in their studies. American college students are not far behind. There is no logical reason to assume that law students, whose burdens are arguably even greater, are the exception. There is also little reason to assume that Honor Codes will significantly deter abuse where AI-detection methods remain crippled by excessive false positive and false negative rates. Trust but verify seldom works where verification is impossible. And a substantial minority of students simply do not regard such use as real cheating.

And yet, in faculty lounges (do they still exist?) and curriculum committee meetings, the conversation about this new reality is often muted. We tend to speak of AI as a future challenge, a topic for a specialized seminar, part of an inchoate retreat on pedagogy a year hence, or primarily as an issue of academic integrity. We have not yet fully reckoned with the fact that for our students, the AI transformation is not a future event. It is their present.

The Generational Wave

Crucially, the urgency of this situation is amplified by a generational shift that is already underway. The students in our classrooms today are, relatively speaking, early adopters, figuring out the tools alongside us. But the students arriving next year, and the year after, are true natives.

A student entering law school in 2026 will have spent their entire undergraduate career with sophisticated LLMs as a constant companion. The class entering in 2028 will barely remember high school without them. They are not just familiar with the technology; they are steeped in it. Their fundamental methods of research, writing, and perhaps even thinking have been shaped by it.

AI is not going away. And as this wave of students arrives, our current, often hesitant, responses will seem increasingly out of touch, even archaic. The gap between how they work and how we teach will widen from a crack to a chasm. The problem is not static; with each incoming class, it grows larger and more pressing.

II. An Understandable, But Unsustainable, Response

The most common institutional reaction has been a sensible, if temporary, focus on containment. Faced with a new and disruptive technology, the instinct to secure the integrity of our assessments is a natural one. Reverting to time-tested methods like proctored, handwritten exams is a pragmatic stopgap—a way to ensure fairness while we get our bearings. I'm guilty of it too. It's an honest attempt to preserve rigor in an uncertain environment.

While understandable as an immediate triage measure, this approach is untenable as a long-term strategy, precisely because the technology we are trying to contain is improving exponentially. A stopgap is not a strategy.

The Exponential Curve

Humans are notoriously bad at internalizing the logic of exponential growth. We think in straight lines. The classic analogy is the lily pond whose plant life doubles every day: on day 29, the pond is half full, and it seems like there is plenty of time. On day 30, it is completely covered. For the first 28 days, the change was present but manageable. The final, dramatic shift happened almost instantly.

This is where we seem to be with artificial intelligence. The progress over the last few years feels substantial, but if the exponential trend holds for even a little while longer, the progress we see in the next eighteen months could dwarf everything that has come before. (For a rather nightmarish scenario that would render irrelevant everything written here, you can read this.) AI is not a static new tool to be mastered once, like Westlaw or electronic discovery platforms. It is a dynamic, rapidly evolving capability.

This simple, mathematical reality transforms the nature of our challenge. Treating this as a routine pedagogical issue—to be contemplated over years—is like seeing the pond half full on day 29 and concluding we have another month to figure out a plan.

Containment is doomed. As AI capabilities become more powerful and, crucially, deeply integrated into the operating systems and software we all use (word processors, email clients, search engines), our ability to create truly "sterile" environments will vanish. Block the laptop, and it runs on a cellphone. In the future, block the cellphone (good luck with that!) and the AI is in your glasses or, perhaps in a decade, a brain-cloud interface. The effort to wall off the classroom from the tools of the modern world is like trying to build a conventional seawall to combat a major tsunami. It might rebuff the first wave or two, but it is destined to be overwhelmed.

Moreover, techniques such as reverting to handwritten, 1980s-style bluebook exams carry their own pedagogical penalties. They ignore the fact that most of our students have not handwritten much of anything since elementary school. I doubt I have handwritten more than a paragraph in the past five years, and I would be severely disadvantaged by a requirement that I do so. (So too would the professor tasked with reading my scrawlings!)

More importantly, though, containment misses the pedagogic opportunity. By relying solely on methods that preclude the use of modern tools, we may inadvertently send the message that a legal education is an artifact, disconnected from the realities of modern professional life. We risk training our students for a law world that is rapidly fading, rather than the one they are about to enter—a world where their success will depend not on their ability to work on the hypothetical desert island without these tools, but on their ability to command them downtown with skill and wisdom.

III. Aligning Our Hiring with an Exponential Future

Precisely because teaching and practicing effective use of AI is likely to be central to the role of the law professor before this decade is out, we need, right now, to align our hiring practices with this reality. For years, appointments committees have relied on a stable set of criteria to evaluate candidates: scholarly potential, institutional fit, and teaching ability, often judged by traditional metrics. It is time to add a new dimension to that evaluation.

We are hiring faculty for thirty-year careers in an era likely to be defined by continuous and accelerating disruption. Given the exponential trajectory of AI development, the legal profession of 2040 is almost unimaginable today. Even the world of 2030 may look very different. In this environment, intellectual flexibility and a commitment to lifelong learning about technology are not merely virtues; they are essential competencies. An appointments committee that fails to rigorously inquire into a candidate's plan for navigating the AI revolution is committing a significant strategic oversight.

Appointments committees should consider adding two questions to their interview protocol for every candidate, regardless of their substantive area:

  1. How do you plan to maintain your technological literacy and adapt your teaching as AI and other tools evolve over your career? This question moves beyond current knowledge—which may be obsolete in two years—to test for a candidate’s intellectual disposition. It probes for adaptability, humility, and a recognition that the tools of the trade are no longer stable.
  2. What is your initial plan for addressing generative AI in your [substantive area] classroom? This brings the abstract into immediate focus. It invites a candidate to share their thinking on pedagogical responses, whether it is a syllabus policy, a novel assignment, or a framework for class discussion.

A candidate who has already thought through these issues is simply better prepared to teach in the current environment. A candidate who has not seriously considered them is unprepared for the realities of teaching law today, let alone a decade from now. I would love it if the Association of American Law Schools would seriously encourage candidates to contemplate these issues, along with more traditional ones, as they prepare to enter the academy.

An Implementation Sketch

Moving from conversation to action is the essential next step. The inertia in legal academia can be a powerful force; so too is the desire of many schools not to "stick out." But the accelerating pace of technological change demands that we overcome these proclivities. And, I suspect, there are major rewards for schools that align their own pedagogic methods with the expectations of the modern student.

For Current Faculty: Deans could encourage faculty to form working groups to explore how to adapt their assessments. A simple goal for the next academic year could be for each volunteer to redesign one significant assessment in one course to meaningfully incorporate the use of generative AI, and then to share their experiences—both successes and failures—with their colleagues. We need to foster a culture of rapid experimentation and shared learning. I'm happy that my own institution is at least inventorying how faculty intend to use AI in the coming year.

For Appointments Committees: The hiring process can be updated immediately. In addition to posing the interview questions suggested above, committees can ask candidates to submit a brief, one-page "Statement on Teaching and Technology" that addresses their thinking on these topics. (Of course, the smart candidate will use AI to help answer the question, which illustrates my entire point!) This would signal a shift in institutional priorities to the market of aspiring scholars and ensure that a candidate's readiness for the future of law practice is a central part of their evaluation.

IV. Some Good Answers for Prospective Faculty

And what are prospective faculty to say in response to these pesky questions about the use of AI? Here are some suggestions. We need to move from being proctors to being expert coaches. This perspective suggests our goal should not be simply to prevent students from using AI, but rather to teach them how to use it skillfully, ethically, and effectively as a component of sophisticated legal work. This represents the next stage in the craft of legal education.

The exponential curve means the specific tools will change constantly. Just this week, for example, Grok from X introduced visual chatbots with personalities that can readily be hijacked into discussions of HIPAA privacy or ERISA preemption. The models our students use in their 1L year will seem as quaint by the time they graduate as 2022's ChatGPT does today. Therefore, our pedagogy must focus on the meta-skills of adaptation, critique, and ethical judgment. This shift invites a thoughtful redesign of what we ask students to do.

We can design for augmentation, not automation. Assignments that can be completed entirely by a machine are becoming less valuable as measures of human competence. We must acknowledge the exponential curve here, too: a current LLM can earn an A- on many an assignment; an LLM a year from now may well earn a straight A. Six months ago, AI would not have been able to write much of this essay. With Deep Think from Google and some modest effort on my part, you have the result in front of you. We have an opportunity to focus our assessments more on the irreplaceable skills of a great lawyer: strategic judgment, creative problem-solving, nuanced client counseling, and sophisticated persuasion.

This means leaning into more complex simulations, oral advocacy, and drafting exercises where a student might use AI for a first pass—generating boilerplate language, summarizing research, or organizing an argument—but where the majority of the value lies in the human-led analysis and refinement.

We can integrate AI explicitly and critically. What the student does need to know—and it is a significant undertaking to teach—is how to assess the quality of an AI response. We must train them to be discerning editors, not passive consumers.

We can design assignments that require students to use these tools and then critique their output. Imagine a legal writing assignment where students use an LLM to generate an argument, and are then graded on their ability to identify the model’s errors, strategic blunders, and logical fallacies. We can ask students to use AI to generate a first draft of a contract and then grade them on their ability to redline it, identifying its weaknesses and ambiguities. This teaches them to act as discerning senior attorneys, supervising the work of a brilliant but inexperienced—and occasionally unreliable—assistant.

We can teach the limitations as a core competency. A central part of this new craft is to instill in our students a deep and abiding professional skepticism. They must learn how to work efficiently with AI while simultaneously understanding the nature and probability of model hallucinations and the ethical imperative of maintaining responsibility for their work product. Training them to perform this new kind of critical thinking—and teaching them how to constantly re-evaluate a tool's capabilities as it rapidly improves—is as fundamental to their education as teaching them to read a statute.

V. Conclusion

Our students are already operating in this new world. They are building the skills they believe they need to succeed, adapting to the exponential curve even if their institutions are not. The question before us is not whether the legal profession will be transformed by these tools, but who will guide that transformation. By thoughtfully and urgently adapting our teaching and hiring, we can hope for continued relevance. If not, well ...