The Open Front Door: Why AI Makes the High-Stakes Take-Home Assessment Indefensible

For many years, American legal education has acknowledged a fundamental limitation of the timed, in-class exam: while it effectively tests quick recall and analysis under pressure, it struggles to measure the deeper, more deliberative skills that often define great legal practice. Take-home assessments filled this gap, honoring the notion that legal reasoning develops not only under pressure but also through deliberation.
But in 2025, I believe that era is drawing to a close. The high-stakes, unsupervised exam (or paper) no longer belongs in our evaluation toolkit. This isn't just a pedagogical preference; for me, it’s a matter of institutional integrity.
Here, I want to do two things. First, I'll lay out why I believe the traditional take-home model is no longer defensible in the age of AI. Second, and more importantly, I'll offer a framework for what should come next, exploring a range of innovative assessments that may be able to protect academic integrity while achieving our core teaching goals.
The Problem: Why the Old Model is Broken
The False Comfort of the Honor Code
Legal education has always leaned heavily on norms of integrity. Most law schools have "honor codes" that subject students to discipline if they cheat. They place responsibility on students themselves to enforce the prohibitions. Maybe that system worked well enough when the payoff from cheating was, in many cases, relatively low. Today, however, the landscape has shifted. Large language models are frictionless, free, and available 24/7. Part of AI's so-called "democratizing" effect is how it dramatically raises the performance floor. A student who, with reasonable effort, might have previously landed in the bottom of the class can now use these tools to produce work that competes directly with all but their most exceptional colleagues. So what does this mean for the honor code? I think it means the deterrence calculus has collapsed. When we design policy, we have to do it for the world we live in, not the one we wish to inhabit.
Footnote: The rejoinder that AI cannot (yet) produce brilliant work misses the point. First, I'm not so sure that is correct. And I am even less sure it will be correct in a year or so. Second, however, like it or not, a function of modern law school exams is to order mediocrity. Very few schools use a grading system of A and "other." Instead, most of us purport to find distinctions between the B student and the B-. And there should be no question in anyone's mind that in 2025, even imperfect use of AI can boost what would otherwise be a B- response from a student to a B, a B+, or quite possibly an A-.
Can Detection Tools Save Us?
Some have hoped that technical solutions—AI detection tools—might restore the viability of take-home exams. I'm afraid this hope is a dangerous illusion. As it stands, no reliable method exists for detecting AI-generated text, and a probabilistic score that an essay is "74% likely AI-generated" just isn't strong enough to ruin a student's career. Plus, there is now an industry devoted to "humanizing" what was originally AI-drafted prose. (Maybe this blog post even went through some of that!) We simply cannot police our way out of this problem.
Policy is Necessary, But Not Sufficient
To their credit, law schools are taking action. A majority have already updated their academic integrity policies for AI, and leading schools have developed nuanced frameworks based on faculty discretion. But that's not enough.
Think of it this way. You decide to secure your house against theft. You install high-tech locks and alarm sensors on every single window, acknowledging that there is a real security risk in the neighborhood. But at the same time, you leave your front door wide open, day and night, with a polite note on it that says, "Please be honest and do not steal anything."
The strong security on the windows doesn't make the house safe. It makes the vulnerability of the open front door all the more glaring and nonsensical. That open front door is the unsupervised, high-stakes take-home exam. Our new AI policies show we understand the risk, but we are still leaving the most obvious point of entry completely unprotected.
The Solution: From Diagnosis to Design
So where does that leave us as legal educators? Perhaps a little sad. I used to give take-home exams for a variety of reasons, including that they more closely resemble the skills a student is likely to need in practice. But acknowledging the dangers of this format shouldn't lead to despair, or to a wistful, permanent retreat to an era of purely in-class, proctored written exams.
Instead, this is a call to action. The path forward is to adopt a new playbook of assessments designed for the world we actually live in. While they require more creative effort from us, these methods address the AI challenge by reintroducing supervision and accountability. Here are three approaches legal educators (and other educators, for that matter) can think about today.
- The Structured Oral Exam. This is a direct test of nimble thinking and clear articulation under pressure—skills that (until we are chipped) are impossible to outsource to an AI. Whether as a 15-minute argument on a key case or a simulated client counseling session, oral exams are perhaps the most AI-resilient form of assessment available. Such sessions may be brutal for faculty in a 100-person business organizations class, and they too are imperfect assessments, but in smaller courses or as a partial replacement for other forms of assessment, they may make more sense.
- The AI-Critique Assignment. This approach turns the threat into a pedagogical tool. Students are required to use AI to produce a draft response to a complex problem. Their graded work is then to submit a detailed analysis that critiques the AI's output—correcting its legal errors, strengthening its logic, and identifying its strategic blind spots. This directly teaches the essential modern skill of supervising a powerful but flawed technological assistant.
- The Two-Stage Hybrid Exam. This is the most direct replacement for the traditional take-home. It's also an idea that AI itself developed after I complained to AI that my draft of this blog post was strong on diagnosis and weak on solutions. The exam has two stages: AI is permitted in Stage 1 but not in Stage 2. It is something like a customized open-book, but proctored, exam. In Stage 1, a few days before the exam, the professor releases some variant of the final exam question, or perhaps the exact question. The student then uses AI or traditional methods to produce a research packet that conforms to rules set by the professor on length, level of detail, and specificity of prose. The packet may not, however, directly answer the ultimate exam question, either because the professor has not yet released it or because the professor has simply prohibited doing so. Before the exam, the student submits the packet to the professor. Stage 2 is proctored, and AI is prohibited. The professor (or proctor) releases the exact exam question(s), gives each student their individual packet but nothing else, and tells them to respond. The professor then grades the exam in the traditional fashion, checking the packet to make sure the student has not violated the rules. It's not a perfect substitute for the take-home; the student still has to write under time pressure. But it is secure, and it more closely resembles the skills a student will need in practice.
By the way, a similar crisis of integrity faces the long-form seminar paper. We now have no idea who wrote it. Since a proctored setting is impossible for such an assignment, the solution must be to shift our assessment focus: away from the final, polished paper and toward the observable process of its creation. This means incorporating mandatory check-in meetings where students must orally defend their thesis, requiring detailed research logs, and making a final oral defense a key component of the grade. These steps help ensure the work and its core insights truly belong to the student.
I acknowledge fully that none of these solutions are perfect. I worry, for example, that the third approach, the two-stage hybrid exam, has a knife-edge problem. A professor's decision to sanction a student is a high-stakes, binary choice, yet the student's submission may fall into a subjective grey area. This dynamic risks rewarding students who push the boundaries while penalizing cautious ones who hamstring themselves with a less useful packet to avoid suspicion. Still, I'd like to see the collective wisdom of the legal community focused on honing these imperfect ideas or developing others, rather than on implausible denials of the gut punch AI has delivered to the cherished take-home, honor-code-"secured," high-stakes assignment.
A Role for Students: Demanding Fair Assessment
This charge for reform should not come from faculty alone. Students should lead it, too. When a student colleague—lured by a mountain of debt and the huge salary gradient that separates the top of the class from the bottom—decides to cheat by using AI undetectably, it is the honest student’s grade that is unfairly diminished.
Students should be deeply troubled when their professors place them in a setting where they cannot trust the integrity of the very instruments used to assess them. These are not just grades; they are signals that determine clerkships, job offers, and potentially millions of dollars in lifetime earnings.
If you are a student, you have a right to demand assessments you can trust. You should complain (nicely) to faculty who persist in offering high-stakes exams where significant cheating is not just possible, but likely. Make the case in a lawyerly way. Get AI to help you refine your argument and anticipate your professor's reluctance and defensiveness. When that fails, you should speak to your deans and associate deans to make your case for fairer, more credible forms of evaluation. The integrity of your degree is on the line. Does your dean really want to be at the center of the first publicized AI cheating scandal? (Particularly after they were warned!)
The Responsibility of Law Schools
Let's be honest: these alternative assessments require more from faculty. This is why the pivot to new forms of assessment must be seen as a necessary institutional investment to protect the value of a legal education. The good news is that by updating their policies, law schools have already acknowledged the problem. The next logical step is to follow that recognition to its conclusion. We must empower faculty with the time and resources to build the innovative assessments that our new reality demands. It is time to move beyond the traditional take-home exam—not as an end to a tradition, but as a beginning for a more resilient and authentic model of legal education.