Stop relying on personal statements in admissions

At least not the way they are currently generated

A few weeks ago I argued that the unsupervised, take-home law school exam has become indefensible in the age of widely available AI. Asking students to sit at home, swear to an honor code, and not consult a tool that the entire world now treats as a writing and reasoning prosthetic is the pedagogical equivalent of leaving your front door wide open with a handwritten note: Please don’t rob me. Millions inside. Predictably, some people rob you. Many more, seeing that no alarms go off, begin to wonder whether not walking in is just naïveté.

We have the same problem—worse, in fact—with the law school personal statement. Unlike a midterm, the personal statement is not a controlled academic exercise. It is unproctored, high stakes, slow-cooked, easily outsourced, and now—thanks to generative models—effortlessly optimized for whatever emotional and rhetorical appeals an admissions office is thought to reward. Pretending otherwise is not prudent conservatism; it is institutional negligence.

I. From Honesty Window to Narrative Engine

The personal statement historically did two things: (1) gave committees a sample of the applicant’s writing; and (2) supplied a narrative through which reviewers could gauge motivation, judgment, maturity, resilience—call it character. Those functions were always a little soft. Applicants could (and did) get coaching. Affluent families could hire consultants. Still, effort, cost, and time capped embellishment. AI vaporizes that ceiling.

A recent survey of 2023–24 college applicants found that about one-third used AI in drafting their application essays, and 6% said AI wrote most or all of the draft. That is astonishing adoption speed for a tool that did not meaningfully exist three admission cycles ago. (Education Week summary of foundry10 data, July 2024: https://www.edweek.org/technology/1-in-3-college-applicants-used-ai-for-essay-help-did-they-cheat/2024/07.) These figures are surely an underestimate: many students will not admit to disfavored behavior, and AI models have gotten far better since the primitive days of 2024.

If cost once limited fabrication, it no longer does. The same piece notes that high-end admissions counseling routinely runs into the thousands; now everyone with a $20/month subscription (or a free model) can obtain copyedited, structure-aware prose on demand. (For a discussion from LSAC on high-priced consultants and AI “leveling” access, see https://www.lsac.org/blog/chatgpt-law-school-application-personal-statements-and-lsat-writing-sample.)

II. My AI Doppelgänger

To test the possibilities, I fed an LLM a prompt containing a crude and selectively distorted account of my own long-ago childhood—mixing the true (a scary knife incident) with embellished urban hardship (living right next to Harlem) and curated empathy—and asked it to produce a law school personal statement. And when the output came back too polished, I asked a second model to “humanize” the prose: add grit, leave a few seams. The result of this iterative process sang on every sympathetic frequency an admissions reader might have: poverty adjacency, early exposure to inequality, moral seriousness, Princeton journalism, evidence-based policy awakening, and a concluding pivot to law as the instrument that turns values into reality. As fiction goes, it was good. As evidence of who I am, it was trash and would collapse under even perfunctory cross-examination. But who, reading 4,000 such essays in a cycle, is going to verify whether nine-year-old me was truly held at knifepoint over pocket change? (Although I was.) Whether I lived in the neighborhood described and for how long? (Ten years.) Whether the empathy I claim to have felt existed anywhere but in a prompt? (Perhaps not.)

They won’t. They can’t. We are inviting a crisis of integrity.

III. “But Students Always Got Help.” Yes—and No.

I am told that AI changes nothing because wealthy applicants have long outsourced their prose. This is both correct and profoundly misleading.

Correct: Paid consultants are ubiquitous. In elite undergraduate admissions, estimates put consultant use among admits to top programs (Harvard et al.) in the neighborhood of a quarter of the class; the precise number varies, but the practice is not rare. (Education Week / foundry10 reporting on widespread paid support; see also discussion in LSAC blog, which notes consultants charging “thousands” for personal statement work: https://www.lsac.org/blog/chatgpt-law-school-application-personal-statements-and-lsat-writing-sample.)

Misleading: Scale and detectability matter. When polish cost $3,000 and weeks of back-and-forth, many applicants went without. When comparable polish costs $0–$20 and takes minutes, most can get it—and the resulting influx collapses the signal. Moreover, consultant fingerprints—house style, recycled tropes—were sometimes visible to seasoned readers. AI outputs, especially when lightly edited by the applicant, are far harder to spot. Even OpenAI gave up on its own AI-text detector, shuttering it after internal numbers showed only 26% true detection and 9% false positives—which, in an admissions context, is litigation bait. (Observer: https://observer.com/2023/07/openai-shut-ai-classifier/.) Can you tell which paragraphs in this blog post were written by AI and which by me?
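
To see why that false positive rate is disqualifying at scale, here is a minimal back-of-the-envelope sketch. The detector figures (26% sensitivity, 9% false positives) are the OpenAI numbers above; the pool size and the share of AI-assisted essays are illustrative assumptions, loosely borrowed from the survey figure cited earlier, not data from any school.

```python
# Back-of-the-envelope: what a detector with 26% sensitivity and a 9% false
# positive rate does to an applicant pool. Pool size and AI-use share are
# illustrative assumptions, not data from any admissions office.

def detector_outcomes(pool_size, ai_share, sensitivity=0.26, false_positive_rate=0.09):
    """Return (AI essays caught, honest essays wrongly flagged, share of flags that are wrong)."""
    ai_essays = pool_size * ai_share
    honest_essays = pool_size - ai_essays
    true_flags = ai_essays * sensitivity                # AI-written essays actually caught
    false_flags = honest_essays * false_positive_rate  # honest applicants accused
    wrong_share = false_flags / (true_flags + false_flags)
    return true_flags, false_flags, wrong_share

# Assume 4,000 applications, one-third of them AI-assisted.
caught, accused, wrong = detector_outcomes(4_000, 1 / 3)
print(f"Caught: ~{caught:.0f}, falsely accused: ~{accused:.0f}, "
      f"share of accusations that are wrong: {wrong:.0%}")
# Roughly 347 essays caught versus 240 honest applicants flagged:
# about two in five accusations would hit the wrong person.
```

Even taking the detector's own reported numbers at face value, roughly two in five of its accusations would land on an applicant who wrote honestly.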

LSAC’s own technical blog came to the same practical conclusion: defining “AI use,” policing it, and reliably detecting it are all deeply problematic; bans are mostly performative. (https://www.lsac.org/blog/chatgpt-law-school-application-personal-statements-and-lsat-writing-sample.)

IV. The Other Two Numbers Aren’t Holy Either: LSATs and UGPAs

If the personal statement is compromised, can we simply lean harder on LSAT and GPA? Not easily.

Although people understandably bemoan the LSAT (or, now, the GRE), so far as I know the LSAT remains the single best quantitative predictor of first-year law school performance, and its incremental predictive value over UGPA is robust; LSAC data show that combining LSAT and UGPA improves prediction far beyond UGPA alone, and that LSAT validity has ticked up slightly in recent years while UGPA predictive power has ticked down. (LSAC research summary: https://www.lsac.org/data-research/research/lsat-still-most-accurate-predictor-law-school-success.)

But “best available” is not “sufficient.” Performance on a timed, multiple-choice, logic-heavy exam, while useful, does not exhaust the competencies that matter in lawyering. Nor does it capture growth trajectories, written advocacy, or professionalism.

UGPA is worse. Cross-institution grade inflation has been chronicled for decades; A’s now represent roughly 45–47% of all grades at four-year schools, with A’s and B’s reaching ~80% at elite institutions. Law-school-specific analyses have likewise warned that grade inflation in feeder institutions distorts comparisons and weakens UGPA as a credentialing measure. Moreover, grading norms still vary across disciplines and are manipulable by students in the many institutions that maintain only the loosest control over course rigor and faculty grading policies.

If we jettison the personal statement and distrust UGPA, we are left effectively with LSAT + non-racial resume metadata. That is an awfully shaky foundation on which to pick students likely to become the sort of graduates the institution desires.

V. What Admissions Readers Already Know (and Are Quietly Doing)

At least some admissions professionals are willing to admit the obvious. Duke University’s dean of undergraduate admissions announced in 2024 that the school would stop assigning numerical ratings to essays (and standardized test scores) in part because the essay could no longer be assumed to reflect the applicant’s own writing ability amid AI and consultant interference. Essays would remain read for content, but not scored for prose. (Duke Chronicle, Feb. 2024: https://www.dukechronicle.com/article/2024/02/duke-university-undergraduate-admissions-changes-numerical-rating-standardized-testing-essays-covid-test-optional-ai-generated-college-consultants.)

That is a half-step: tacitly admitting that stylistic excellence is unreliable while still hoping the narrative sheds light on the person. I admire the candor; I doubt the sustainability. When the body of the narrative may be fabricated, “content” ceases to be evidence.

VI. What Would a Defensible Personal-Information Regime Look Like?

If we accept (1) AI has destroyed the evidentiary value of unsupervised personal essays; (2) LSATs, while valuable, do not exhaust what we need; and (3) UGPAs are noisy and inflated, then we must unbundle what we have been asking the personal statement to do and reassign those functions to instruments that can bear the weight.

A. Writing Ability Under Conditions We Can Trust

We already possess an underused instrument: LSAT Argumentative Writing. It is timed (50 minutes), remotely but securely proctored (webcam, mic, screen monitoring), and required for score release. Law schools receive the raw sample. It is not scored numerically, but committees can read it. If we care about the applicant’s unaided writing, weight this. Schools that want more can add a short on-campus or live-remote writing calibration for finalists—10–15 minutes, low burden, high signal. (At the cost of really annoying applicants, they could then save money by having AI evaluate it.)

B. Context, Motivation, and Contribution—Live, Not Ghostwritten

If narrative matters—and it does—move it to structured, recorded interviews. These need not be long. A ten-minute targeted conversation with standardized prompts (“Tell me about a time you had to take responsibility for a mistake”; “Why law rather than public policy?”; “Describe a community you expect to serve.”) probably yields more reliable insight than 650 unverified words. Trained alumni or adjunct readers can review the responses. To be sure, there will be issues of bias and inter-rater reliability, particularly if a large pool of evaluators is employed, but it is not clear that these would be much more serious than the issues that already exist with large admissions staffs.

C. Professional Judgment & Non-Cognitive Traits

Medicine and allied health have migrated toward situational judgment tests (SJTs) such as Casper to assess ethical reasoning, empathy, and decision-making under ambiguity; these are proctored, scenario-based tools. And, yes, students can receive human and AI coaching on how to improve performance on these exams, just as they can for the LSAT and GRE, but at least when they respond to these live questions it is the student, not an AI, providing the answer. (At least for a few years, until brain-computer interface technology improves.)

D. Documented Experience

Require applicants making factual claims material to admission (founding a clinic, years as a paralegal, military convoy work) to link to verifiable documentation or provide contactable referees. This is routine in employment; it need not be foreign to admissions. And even if admissions officers can't check every submission, the prospect of random spot checks might reduce today's frequent applicant exaggeration. Matters that cannot be documented (even if true) must, sadly, receive diminished salience under this approach.

VII. Cost and Scale

Every reform above costs money. Interviews are labor-intensive. Proctored writing requires tech and compliance staffing. Validating SJTs takes psychometric work. But compare the alternative: admitting cohorts based in meaningful part on unverifiable, AI-assisted fiction; drawing adverse inferences against applicants who don’t use AI and therefore present rougher prose; or facing disputes when an AI detection heuristic misflags an honest applicant (recall OpenAI’s 9% false positive rate in its own abandoned tool). (Observer: https://observer.com/2023/07/openai-shut-ai-classifier/.)

Law schools routinely invest in yield events, branding videos, climate audits, and post-admission pipeline programming. Reallocating a fractional slice of those budgets to front-end credibility is not extravagance; it is stewardship.

VIII. Why Not Simply Drop the Personal Statement Entirely?

At least one institution is experimenting with radical minimalism—scores only, or scores plus a short achievements list. There is something bracing in that honesty, and I admit admiration for schools candid enough to test it. But law is a profession that trades in judgment under facts. It is not quite the same as math. If we believe (as I do) that traits such as resilience, persistence, intellectual curiosity, and principled dissent matter to the profession, then a pure numbers regime is under-inclusive. The Supreme Court in SFFA v. Harvard did not require intellectual self-lobotomy; it prohibited racial sorting, not human inquiry tied to qualities of character. Thus, while I respect the temptation to burn the genre to the ground, I would rather salvage what is defensible: real writing, real conversation, real evidence.

IX. Implementation Sketch

A law school serious about integrity could adopt the following without wrecking its budget:

  1. Read but do not weight the unsupervised personal statement for prose quality. Treat it as background color only.
  2. Require LSAT Argumentative Writing (or a comparable school-run proctored prompt) and actually read it. Weight modestly but positively for clarity under pressure. (https://www.lsac.org/lsat/about/lsat-argumentative-writing.)
  3. Conduct a short structured video interview (live or asynchronous) for all applicants above some LSAT/UGPA screen, with rubrics keyed to reasoning and candor.
  4. Offer an optional evidence upload for claims central to the applicant’s narrative (employment, service, publications).
  5. Pilot a brief SJT module in partnership with vendors who already run large-scale, secure, open-response ethics instruments.

Within two cycles you would have more trustworthy human data than any stack of ghostwritten trauma vignettes can provide.

X. What Gets Lost—and What Gets Regained

All of this will deprive the world of some lyric paragraphs. We will lose the capacity for an applicant to spend three months perfecting a metaphor about the light over a subway platform that taught them about justice. We will, blessedly, lose the arms race in melodrama and claims of trauma.

What we regain is signal. We regain the ability to say, when challenged, that the writing sample we weighed was completed under monitored conditions; that the applicant who spoke about representing farmworkers did so in their own halting, persuasive voice; that we considered, as the Chief Justice allowed, how an experience—racial, economic, military, familial—shaped qualities that matter in law. That is both more honest and more legally defensible than pretending that polished PDFs emailed from anywhere in the world represent the unfiltered soul of a twenty-two-year-old rather than the equally probable output of a prompt fed to some giant AI.

XI. Closing the Door

The door is open. The sign is up. The money is visible. We can stand there and hope the honor code holds—or we can close and lock the door, install a camera, and ask our visitors to ring the bell and provide credible, verifiable evidence of their talents. Admissions has always been about judgment under imperfect information. AI has not created that fact; it has merely exposed how thin some of our old proxies were. We should thank it for the audit—and get to work.

The views expressed here are my own and do not necessarily (or even probably) reflect those of the University of Houston or the University of Houston Law Center.