Teaching LLMs in Law School While We Engineer for Confidentiality

At a recent AALS event, while I was extolling the virtues of large language models in legal education, someone asked an impertinent question: “What good is all this learning if lawyers can’t use these LLMs once they get out into practice because of confidentiality rules?” I improvised an answer—client consent can help, providers now offer pledges not to train on your data, and in theory you can even run a model locally to keep everything in-house. The answer was truthy but unsatisfying, and I knew it. (So too did a colleague who teaches professional responsibility and got on my case afterwards!)
The bigger point, though, is that it is becoming progressively irrational not to use LLMs in law practice. In 2024 the inefficiency was noticeable; in 2025 it is glaring; by 2026 it will be malpractice-adjacent. At $800 per hour and more for senior lawyer time, the stakes are not trivial. They go directly to access to justice. But we still have rules of professional conduct, and those rules are written for a world where “third-party disclosure” was a bright line. Either we relax those rules or we build an architecture that satisfies them. This essay explores that choice.
The Old Comfort of Lexis and Westlaw
When lawyers typed queries into Lexis or Westlaw, they were in safe territory. Those platforms were glorified libraries. A search like “Title VII retaliation burden of proof” did not transmit a client’s life story; it transmitted a doctrinal label. At worst, someone could infer that a lawyer’s client might have faced workplace discrimination. But there was no raw record, no deposition transcript, no confidential email string being handed over.
That insulation explains why no one worried much about confidentiality. A Lexis query was not a disclosure, it was a retrieval. The “third party” was functioning as a publisher, not a confidant.
The New Terrain of LLMs
LLMs operate in a different mode. Their utility lies not in spitting back case law but in weaving rule together with fact, much like the analysis step, the “A” in the time-tested IRAC rubric law students use to work through problems. That means they are most powerful when we feed them client documents: contracts, deposition transcripts, medical records. The moment we engage in this desirable form of “context engineering,” we start transmitting what Rule 1.6 of the ABA Model Rules calls “all information relating to the representation.” Unlike Westlaw, which lived in the public sphere, LLMs thrive on the private record.
This is why lawyers feel the friction. The duty of confidentiality is broad, and courts treat disclosures to third parties as waiver of privilege. The “drop in the ocean” defense—that in the flood of trillions of tokens a single client’s data vanishes—does not answer the doctrinal question. For better or worse, current privilege law may not care much about probability. It cares about disclosure itself.
How the Bars Are Responding
ABA Formal Opinion 512 (July 2024) made the national standard explicit: generative AI is “relevant technology” within the meaning of Comment 8 to Model Rule 1.1. Competence requires lawyers to understand it; confidentiality requires “reasonable steps” to prevent disclosure; supervision requires treating it like a non-lawyer assistant; communication requires candid discussion with clients; billing must reflect the efficiency it provides.
States have followed suit. California warns against inputting client data into any system that lacks strong safeguards. Florida highlights the risks of “self-learning” systems. Texas analogizes to cloud computing, requiring careful review of terms of service. North Carolina reminds lawyers that ultimate responsibility rests on them, not the tool.
The consistent message: it is presumptively unethical to paste client facts into consumer-grade chatbots that reserve the right to train on inputs. But with diligence, firms can use enterprise-grade or private deployments.
The Kovel Temptation
When lawyers consider privilege, many reach for United States v. Kovel (2d Cir. 1961). In that case, an accountant hired by a law firm was treated as the functional equivalent of a translator, enabling the lawyer to give legal advice about complex tax issues without waiving privilege. The logic has since extended to some third-party consultants: if their role is necessary to render legal advice intelligible, then communications with them may remain privileged.
The challenge for LLMs is mapping onto that “necessity.” Courts applying Kovel emphasize that the consultant must provide something the lawyer cannot otherwise achieve. On a literal level, LLMs do not meet that bar: they summarize, draft, and synthesize—tasks lawyers are trained to perform. Yet the reality is subtler. LLMs can execute those tasks at a speed and cost that transforms what is practically possible. A partner may never spend 20 hours combing through email chains to spot inconsistencies, but a model can do it in minutes. In that sense, LLMs make advice possible in practice that, without them, would be impossible in practice even if not in principle.
Whether courts will stretch Kovel to encompass this kind of “practical necessity” is uncertain. Some recent privilege cases, especially in cybersecurity, have refused to extend protection when consultants were engaged primarily for efficiency rather than interpretation. But others have recognized that expertise enabling efficient legal work can fall within the privilege. The safest current course is infrastructural—designing systems where no disclosure to a third party occurs. But lawyers should not abandon the Kovel analogy entirely; with careful argument, it may yet prove part of the privilege toolkit for AI-enabled practice. Moreover, a doctrine that treats in-house consultants as attorney agents comfortably within the privilege, while subjecting time-shared outside consultants to murky Kovel analysis, looks like the sort of broken formalism that breeds inefficient relationships.
Confidentiality and Privilege Are Not the Same
Up to now I have been a little loose, speaking of “confidentiality” as if it were one category, but lawyers know there are two overlapping regimes. The first is the ethical duty of confidentiality under Rule 1.6, which sweeps broadly and covers any information relating to the representation. That duty can be relaxed with client consent. If a tenant client, desperate for affordable representation, wants their lawyer to use a cheaper, faster AI tool, the client can ordinarily authorize the lawyer to do so.
The second is attorney–client privilege. This is narrower, rooted not in the law of professional conduct but in the law of evidence, and controlled by courts rather than clients. It protects confidential communications between lawyer and client made for the purpose of legal advice. Crucially, privilege is not so easily waived. A client’s blanket consent to use an external LLM does not guarantee that a judge will treat privilege as preserved. If a privileged email is pasted into a system whose terms of service allow vendor access, a court may view that as disclosure. And courts do not always confine waiver to the specific message disclosed; they often apply subject matter waiver, requiring production of other privileged communications on the same topic to avoid selective disclosure.
Caution is needed. Lawyers must learn to separate ordinary confidential material—facts gathered from third parties, business records, expert reports—from true attorney–client communications. If we are sloppy about that boundary, we risk creating a new front for litigation: opposing counsel who suspects a firm has fed privileged matter into a public LLM will move to compel, arguing waiver. Even if the claim ultimately fails, the fight itself can be costly and disruptive.
Two examples illustrate the distinction. In one scenario, a discrimination plaintiff emails her lawyer: “I think I was fired because I complained about my supervisor’s harassment. Should I bring this up in my deposition?” That is classic privilege: client-to-lawyer, seeking legal advice. If the lawyer uploads it into a consumer chatbot that reserves the right to review or train on inputs, a court could find waiver—and perhaps extend that waiver to all privileged deposition-preparation communications.
Contrast that with a second scenario: a business email chain circulated widely within a company about whether to terminate a supplier contract. The lawyer, copied among many executives, adds: “We should review clause 14 before taking action.” That thread is then uploaded to an LLM to summarize obligations. Here the privilege claim was already weak: the conversation was primarily business, and the wide distribution undermined confidentiality. Uploading it might raise confidentiality concerns, but it probably adds little to the privilege analysis because privilege was never solid in the first place.
The educational lesson is plain. Competence in the AI era requires sharper skills in recognizing what is genuinely attorney–client privileged, as opposed to merely confidential. Secure deployments—the confidential drafting room—reduce the risk of waiver, but the line-drawing will remain part of the lawyer’s craft. Teaching future lawyers to recognize those lines, and to understand how privilege can be lost, is as important as teaching them how to use the tools themselves.
Informed Consent, Done Right
Client consent is nonetheless the answer to many of the issues posed by the use of LLMs. But when consent is sought, it must be more than a line in the engagement letter. It must be informed consent. Clients should be told why AI is being used, what vendor is involved, what benefits they gain (lower cost, faster turnaround), and what risks remain. They should hear, in plain English, that the system does not train on their data and that the firm has secured deletion and access limits. Client consent thus becomes a trust-building moment. Clients can see that their lawyers are both technologically current and ethically cautious.
Building the Confidential Drafting Room
The path that avoids the frontiers of professional responsibility is architectural. If the LLM provider is a utility—like the electric company or landlord—and never sees the data, then no disclosure has occurred. That is what we should mean by a “confidential drafting room.” There are several ways to build it.
High-Performance Per-Lawyer Workstations
One option, which I will confess bore some initial personal appeal, is to equip each lawyer with a dedicated GPU workstation. This keeps all data local, giving near-absolute confidentiality. But cost and upkeep make it unrealistic: a machine capable of running models of the quality legal professionals need for serious work costs $20,000–$25,000, can serve as a substitute for a space heater, and is likely to please only those who enjoy the noise from leaf blowers. It also requires IT staff to maintain drivers, libraries, and model updates across dozens of machines. Sad to say, it is maximum security at maximum impracticality.
Centralized On-Premises Servers
A more scalable approach is a shared enterprise-grade GPU server in a server room built for the firm. An eight-GPU H100 cluster can cost $250,000–$400,000. Security is strong—data never leaves the building—and performance is excellent. But the firm must also pay for upgraded power, cooling, and specialized staff. This is the gold standard for firms with adequate capital, an ability to amortize costs over clients, and tolerance for ongoing maintenance.
Colocation Facilities
Colocation means the firm owns the servers but houses them in a professional data center. The provider supplies power, cooling, and physical security, while the firm controls the machines and software. Confidentiality remains high, and the headaches of in-office infrastructure are reduced. But the large upfront purchase price and IT staffing burden remain. Some might also worry about the challenges of managing contract staff located far, far away with loyalties spread among many firms.
Private Bare-Metal Cloud
Here, firms rent entire GPU servers from providers like CoreWeave or Lambda Labs. They install their own stack, so the vendor acts as a utility, never seeing client data. Costs shift from capital expense to usage fees, letting firms scale up or down as needed. Spin-up of large language models may take minutes, but the flexibility and lower barrier to entry make this especially attractive for mid-sized firms and the better-heeled public-interest organizations.
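For the technically curious, the lawyer-facing side of these self-hosted options can look almost identical to calling a commercial chatbot’s API. Here is a minimal sketch, assuming the firm runs an open-weights model behind an OpenAI-compatible endpoint (for example, via an open-source serving stack such as vLLM) on hardware it controls; the internal hostname, port, and model name are hypothetical placeholders, not recommendations.

```python
# Minimal sketch: querying a firm-hosted model through an OpenAI-compatible
# endpoint running entirely on firm-controlled hardware (on-premises,
# colocated, or rented bare metal). Hostname, port, and model name below are
# hypothetical placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://llm.internal.firm.example:8000/v1",  # firm's internal inference server
    api_key="unused-for-local-deployment",                # no external vendor credential involved
)

response = client.chat.completions.create(
    model="llama-3.1-70b-instruct",  # whichever open-weights model the firm has deployed
    messages=[
        {"role": "system", "content": "You are a drafting assistant for a law firm."},
        {"role": "user", "content": "Summarize the indemnification obligations in this contract excerpt: ..."},
    ],
    temperature=0.2,  # keep drafting output conservative
)

print(response.choices[0].message.content)
```

Because the endpoint lives inside the firm’s own network, the prompt (and any client document pasted into it) never reaches a third party; the cloud or colocation provider supplies electricity, cooling, and bare machines, not eyes on the data.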
All of these models require planning, but they are not science fiction. Banks and hospitals already operate under similar architectures. The cost is manageable compared to the billable hours saved.
Proprietary Models
A common solution is to forgo these complexities and just rely on a vendor (Harvey, for example) that for a price integrates confidentiality, typical LLM tasks of writing and research, and a large compendium of primary source materials. Think Westlaw, but with a real large language model attached. It's not just Harvey in this space. There are credible offerings emerging from vLex (Vincent AI), midpage, and others. And while the somewhat secretive but widely used Harvey is said to be pricey, and Vincent starts at $399 per month, a midpage license for lawyers is $99 per month. Moreover, law students and faculty can get midpage access for $10 per month, <joke, sort of>although a case could be made that midpage would be wise to just give it to the latter two groups for free, students to addict them and law professors to serve as influencers. </joke, sort of>
One might think that these proprietary models that have already done much of the heavy lifting for attorneys resolve the conflict I have been posing between the need for confidentiality and the desirability of using LLMs in practice. And, in some instances and to some extent, they do. But not entirely.
First, the proprietary models are not particularly adaptive. If the LLM world gains new capabilities (broad multimodality or easy access to MCP servers, for example) or the underlying models themselves evolve, there is no assurance that the proprietary vendors will keep pace. And while companies like Harvey are not small and can invest in the sort of R&D needed to improve frontends and backends, their resources are simply dwarfed by those of general-purpose LLM vendors such as Google, xAI, OpenAI, and Anthropic. In a competitive market, there will, of course, be a non-trivial incentive for vendors to adapt swiftly. Midpage, for example, which one might consider the underdog in this battle, notes that it uses OpenAI's ChatGPT as its front end, which should help it keep pace on quality. Nonetheless, there are high switching costs for attorneys in changing vendors and some degree of market concentration that may dilute those pressures.
Second, these models can get expensive. For smaller practitioners, payments like $400 per month may seem significant on top of whatever other research subscriptions the firm purchases. Still, if the proprietary software (Vincent, for example) saves an attorney just an hour or two of work per month, the use of the software is economically efficient and the only question is whether a viable charging model can be created.
The Small Firm and Public-Interest Perspective
A word here about smaller firms and public-interest organizations. BigLaw can throw money at the problem: dedicated servers, specialized staff, custom contracts with vendors. But what about the solo practitioner, the five-lawyer shop, or the public-interest organization running on grants?
Here the “reasonable steps” doctrine has bite. A small firm should not be required to replicate the infrastructure of Kirkland & Ellis. But it can insist on enterprise-grade APIs that commit not to train on data and that delete inputs quickly. It can anonymize prompts—stripping client identifiers before asking for help drafting a motion. It can obtain informed consent where the risks are higher.
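To make “anonymize prompts” concrete, here is a minimal sketch of the idea; the client names, regular expressions, and placeholders are purely illustrative, and a production workflow would be more thorough (and would keep the substitution map on the firm’s own systems so responses can be re-identified locally).

```python
# Minimal sketch of prompt anonymization: replace known client identifiers and
# obvious personal-data patterns with placeholders before any text leaves the
# firm. Names and patterns are illustrative only.
import re

KNOWN_IDENTIFIERS = {
    "Jane Doe": "[CLIENT]",
    "Acme Holdings LLC": "[OPPOSING PARTY]",
    "123 Main Street": "[ADDRESS]",
}

PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),                    # social security numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),            # email addresses
    (re.compile(r"\b\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"), "[PHONE]"),  # U.S. phone numbers
]

def anonymize(text: str) -> str:
    """Strip known identifiers and common personal-data patterns from a prompt."""
    for name, placeholder in KNOWN_IDENTIFIERS.items():
        text = text.replace(name, placeholder)
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

prompt = (
    "Draft a motion to compel. Our client Jane Doe (jane@example.com, "
    "555-867-5309) alleges Acme Holdings LLC withheld discovery."
)
print(anonymize(prompt))
# Draft a motion to compel. Our client [CLIENT] ([EMAIL], [PHONE]) alleges
# [OPPOSING PARTY] withheld discovery.
```

Redaction of this sort is not a substitute for a secure deployment, but paired with an enterprise API that commits not to train on inputs, it is the kind of proportionate, documented precaution that “reasonable steps” contemplates for a five-lawyer shop.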
For public-interest organizations, the stakes are higher still. They represent vulnerable clients who cannot afford inefficiency. If these organizations are locked out of LLMs because of excessive caution, the losers are tenants facing eviction, immigrants seeking asylum, and defendants who need rapid, competent advocacy. Ethics rules should not entrench a two-tier system where only wealthy clients benefit from technology. The standard must be calibrated so that “reasonable steps” scale to context. The "good news" is that many of these clients may cheerfully waive default rules ostensibly designed for their protection if it means that an attorney can actually handle their case.
The Educational Payoff
All of which brings us back to education and a better answer to the impertinence that motivated this entry in the first place. The reason to teach law students how to use LLMs is that by the time they enter practice, the responsible use of these tools will be ubiquitous. The same systems that students learn—OpenAI, Anthropic, Google—are already being adapted to meet confidentiality standards. Students will not need to retreat into clunky proprietary systems that strip away the very capabilities that make LLMs transformative. They will be able to use the same agentic workflows in practice that they learned in school, with modest but meaningful guardrails.
So the answer to the AALS skeptic is straightforward: yes, it makes sense to teach LLMs in law school. It makes sense because students will in fact use them in practice. It makes sense because efficiency is inseparable from access to justice. And it makes sense because the profession has already begun to chart the path—through “reasonable steps,” informed consent, and the architecture of confidential drafting rooms—to integrate these tools into daily lawyering without betraying client trust. The technological problems are superable and becoming ever more so.
The choice is not between ethics and modernity. It is between ethics that fossilize to the detriment of clients and ethics that co-evolve with technology. The best legal education prepares students for the latter.