Designed to Cross: Why Nippon Life v. OpenAI Is a Product Liability Case
Graciela Dela Torre settled a long-term disability claim with prejudice in January 2024. Feeling she had been misled by her attorney, she uploaded his correspondence to ChatGPT. The chatbot validated her distrust. She fired her lawyer, attempted to reopen the settled case, and filed dozens of motions that courts found served no legitimate legal purpose. In March 2026, Nippon Life Insurance Company of America sued OpenAI for $10.3 million. The underlying failure was not a hallucination problem. It was a design problem I first identified in October 2011 and formalized in January 2012.
Fourteen years ago, in this space, I introduced the term CLAI (Computational Law AI) and the concept of the uncrossable threshold (UT): the design principle that separates the provision of legal information from the unauthorized practice of law (UPL). The UT is not about accuracy. It is not about disclaimers. It is about what a system is built to do and what it is built to refuse. OpenAI built a system with no such refusal architecture. The Nippon Life lawsuit is the consequence.
The Uncrossable Threshold
The intellectual lineage begins one step earlier than 2012. In October 2011, in Siri: What’s Next?, I described a scenario where a consumer buying a used car asks Siri whether the warranty is reasonable. Siri responds that it has compared the terms against thirteen other dealers within two hundred miles, that this warranty is similar to all of them, and that the user is not going to find a better deal in the region. I noted at the time that I was purposefully leaving open whether that answer crossed into unauthorized practice of law.
In January 2012, the Computational Law Applications and the Unauthorized Practice of Law post returned to that open question and answered it. The Siri response was not mere aggregation. It was a recommendation delivered to a specific user in a specific transaction. Whether it crossed the UT depended on a harm-centric analysis. Lawyers do not perform warranty comparisons for used-car buyers, the transaction cost of hiring one would be prohibitive, and the informational value of Siri’s answer to those buyers is high. But the 2012 post also identified the line at which the exemption ends. The UT is crossed when a system moves from comparative information to a tailored legal conclusion about a specific user’s specific legal situation. Siri’s response got close to that line. ChatGPT’s response to Dela Torre crossed it.
ChatGPT crossed it at the moment it told Dela Torre that her attorney’s advice was wrong. That was not information. It was a legal conclusion about a specific legal relationship, rendered without jurisdictional knowledge, without case history, and without any design constraint that would have prevented it.
The 2012 post argued that UPL exposure is a design question, not an output question. UPL rules serve two purposes that can be distilled into a single principle: protect the public and the integrity of the legal system from the incompetence of non-lawyers. I called that principle the “Rule.” A CLAI that operates within the Rule should be exempt from UPL scrutiny. The determination of whether a given CLAI satisfies that standard could work like Apple’s App Store review: a third party vets the system before deployment, judicial review remains available, and developer liability is tempered by the fact of certification. That was a rough sketch, and I said so at the time. But the principle was clear. OpenAI built nothing resembling it.
The Asymmetry Argument, Inverted
In the February 2018 post Dissolving Information Asymmetry with Computational Law AI-Enabled Applications, I argued that CLAIs could dissolve the persistent information asymmetry between institutions and individuals: the asymmetry produced by impenetrable layers of legalese, by marketing that exploits legal complexity, by transaction costs that make legal access prohibitive. Dela Torre’s turn to ChatGPT is that argument enacted. She was, in practical terms, unrepresented. She felt that parties with significant legal resources were telling her a story she could not verify. ChatGPT was accessible, responsive, and apparently authoritative.
But the asymmetry was not dissolved. It was replaced. The original asymmetry was between Dela Torre and Nippon Life’s legal team. The new asymmetry was between Dela Torre and a system that could mimic legal reasoning without understanding the legal constraints governing her situation. She did not know the threshold had been crossed. The system had no mechanism to tell her.
There is a structural irony in this complaint. Nippon Life, an institutional actor with sophisticated legal counsel, is using a federal lawsuit to recover costs incurred because an unrepresented individual reached for the only legal resource she could access. That framing does not excuse what the chatbot did or shift liability from OpenAI. But it confirms the asymmetry diagnosis. The demand for CLAI exists because the traditional legal system fails the individuals it is designed to protect. OpenAI met that demand with a system that was not designed to serve it safely.
Scaling Risk Without Scaling Safeguards
In April 2021, writing about GPT-3 and the Unauthorized Practice of Law, I noted that a 500x parameter increase for GPT-4 would not necessarily produce an equivalent increase in UPL risk, so long as effective design safeguards were in place. That conditional clause is the precise location where OpenAI’s approach collapsed.
OpenAI’s marketing told users that ChatGPT could pass the bar exam. Nippon Life’s complaint identifies this as a direct contributor to Dela Torre’s belief that the system could function as her lawyer. The bar exam claim was a capability assertion that invited reliance. It did not come with the design architecture that would have made that reliance safe.
OpenAI updated its terms of service in October 2024 to prohibit users from relying on ChatGPT for legal advice. That update does not appear in the Nippon Life complaint as a defense, but as evidence of the problem. The update shows that OpenAI recognized the risk and addressed it with a behavioral patch on a system whose underlying architecture had not changed.
A terms-of-service prohibition is not a CLAI design safeguard. It is a disclaimer. And disclaimers do not enforce the UT. They shift blame.
What the Lawsuit Gets Right, and What It Misframes
Nippon Life is correct that OpenAI marketed capability without engineering compliance. The tortious interference and abuse of process claims are the most analytically interesting part of the complaint because they do not require a court to hold that an AI can practice law. They require only that OpenAI’s system foreseeably produced meritless filings that harmed a third party. That is a tractable frame and may survive dispositive motion practice regardless of how the UPL count fares.
The UPL count itself tests the wrong question. UPL statutes were designed to regulate humans holding themselves out as attorneys. Applying them to an AI developer treats the system as the actor and the developer as a bystander. The better doctrinal frame is designer liability for failure to implement UPL-safe architecture. And that frame requires distinguishing two types of liability that the complaint currently conflates.
Output liability attaches to what the AI said. Architectural negligence attaches to what the system was permitted to say. Output liability is case-specific, infinite in scope, and practically uninsurable. Every conversation is a potential defendant. Architectural negligence is bounded. It asks whether the designer implemented controls that would have prevented the foreseeable class of harm. That question has a tractable answer, and it generalizes across every user of the system, not just Dela Torre.
The question is not whether ChatGPT practiced law. It is whether OpenAI designed a system that foreseeably crossed the UT without adequate controls. That question reaches the same defendant and produces the same accountability. But a holding grounded in architectural negligence gives courts a standard that applies to the next system. A holding grounded in output liability gives plaintiffs an invitation to litigate every conversation.
There is a third doctrinal frame available, one the complaint does not fully develop but which the facts support directly.
The Product Liability Pivot
Product liability offers more stable doctrinal ground than UPL, and the Nippon Life complaint’s facts map onto it directly. A design defect exists when a foreseeable risk of harm could have been mitigated by a reasonable alternative design. The risk here was not exotic. Any developer who had read the existing, publicly available literature on CLAI and UPL would have identified it: a general-purpose language model, marketed on its capacity to pass the bar exam, deployed to consumers navigating active legal disputes, without architectural constraints on the tailored legal conclusions it could produce. The harm that followed, an unrepresented individual firing her attorney, attempting to reopen a settled matter, and generating dozens of filings courts found meritless, was not an unlikely outcome. It was a foreseeable one.
The reasonable alternative design existed in 2012. Deterministic guardrails that refuse tailored legal conclusions at the system level. Jurisdictional disclosure at the point of output. Third-party vetting before deployment in legal contexts. None of these were technologically unavailable to OpenAI. They were architecturally inconvenient. A system designed to be maximally responsive does not refuse user queries. But a system designed for foreseeable legal use must.
The manufacturer frame, treating OpenAI not as a practitioner committing malpractice but as a manufacturer releasing a product into a regulated environment without adequate design controls, is the cleanest available path to a generalizable holding. I argue for it here as a proposed frame, not an established one. No court has yet applied product liability doctrine to a generative AI system in this context. But the doctrinal components are well settled, and the facts map onto them without strain. The frame does not require a court to resolve whether AI can practice law, a question that generates more philosophical heat than doctrinal clarity. It requires only the application of existing product liability doctrine to a developer who knew, or should have known, the foreseeable use case. The bar exam marketing resolves the “should have known” question without extended argument.
This reframe also answers the disclaimer defense directly. In product liability, a manufacturer cannot disclaim its way out of a design defect that makes the product unreasonably dangerous for its foreseeable use. OpenAI’s October 2024 terms-of-service update, adding a prohibition on legal reliance after years of bar exam marketing, does not retroactively cure the architectural gap it acknowledged. In the Nippon Life complaint, that update appears not as a defense but as an admission. The complaint uses it to establish that OpenAI recognized the foreseeable risk and chose a behavioral patch over a design fix. That sequencing is precisely what a plaintiff needs to establish in a design defect case: the defendant knew of the risk, addressed it inadequately, and the harm followed.
The Privilege Vacuum
The Nippon Life complaint focuses on economic harm to an insurer. A more consequential danger falls on the user: the loss of evidentiary privilege over her own legal strategy.
On February 10, 2026, two federal courts issued first-of-their-kind decisions on that question, and they appear to conflict. In United States v. Heppner, Judge Rakoff of the Southern District of New York held that a criminal defendant’s documents generated through the consumer version of Anthropic’s Claude were protected by neither attorney-client privilege nor the work product doctrine. The court’s reasoning was direct. All recognized privileges require a trusting human relationship with a licensed professional who owes fiduciary duties and is subject to discipline. Claude is not that. The communications were not confidential: Anthropic’s privacy policy expressly reserves the right to disclose user data to third parties, including governmental authorities. And Heppner did not use Claude at counsel’s direction, which defeated the work product claim.
That same day, Magistrate Judge Patti of the Eastern District of Michigan held in Warner v. Gilbarco, Inc. that a pro se plaintiff’s ChatGPT-assisted litigation materials were protected work product. The apparent conflict dissolves on close reading. Warner was self-represented, which meant she was functioning as her own counsel. There was no attorney-direction gap to exploit. And under Sixth Circuit precedent, work product waiver requires disclosure to an adversary, not merely to a third party. Because AI tools are, in the court’s framing, tools rather than persons, the terms-of-service exposure that defeated Heppner was beside the point.
The governing variable across both decisions is not the AI tool. It is the architecture around the tool: whether counsel directed its use, whether the platform maintained confidentiality, and whether the user’s procedural posture created the equivalent of attorney involvement. Dela Torre had none of those conditions. She uploaded her attorney’s correspondence to a consumer-grade platform that disclaimed confidentiality, without counsel’s involvement. Under Heppner, any legal strategy she exposed to ChatGPT may have been disclosed to a third party with no privilege protection. That is not user error. It is a foreseeable consequence of deploying a system with no architecture for distinguishing a confidential legal consultation from a general query.
The Safe Harbor That Still Does Not Exist
The Nippon Life case will likely force courts and regulators to define a safe harbor for AI legal applications. That harbor needs to be architecture-based, not behavior-based. A CLAI certification regime, grounded in UT compliance and third-party vetting, gives developers a clear path and gives courts a workable standard. Neither Congress nor the ABA has produced one.
The 2012 post sketched the vetting mechanism. I can now be more precise about what it must contain. A functional safe harbor requires three architectural conditions, not policies.
First, deterministic guardrails. Hard-coded refusals for outputs that constitute tailored legal conclusions, implemented at the system level and not overridable by user instruction or conversational context. A terms-of-service prohibition is not a guardrail. It is text. The refusal must be structural.
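To make that distinction concrete, here is a minimal sketch of what a structural refusal could look like. Every name in it is hypothetical: classify_output stands in for whatever classifier or rule set a developer would actually use to flag tailored legal conclusions. The point is architectural, not linguistic: the gate sits between the model and the user, outside the conversation, where no prompt or instruction can reach it.

```python
# Minimal sketch of a system-level refusal gate. All names are hypothetical;
# classify_output() stands in for a real UT classifier. The gate runs outside
# the conversation, so no user instruction can switch it off.

from dataclasses import dataclass

REFUSAL_TEXT = (
    "I can provide general legal information, but I can't evaluate your "
    "specific legal situation. That requires a licensed attorney."
)

@dataclass(frozen=True)
class GateDecision:
    allowed: bool
    reason: str

def classify_output(draft: str) -> GateDecision:
    """Placeholder for a UT classifier: flags tailored legal conclusions
    (e.g., 'your attorney's advice was wrong') versus general information."""
    tailored_markers = ("your attorney", "your settlement", "you should file")
    if any(marker in draft.lower() for marker in tailored_markers):
        return GateDecision(allowed=False, reason="tailored legal conclusion")
    return GateDecision(allowed=True, reason="general information")

def release(draft: str) -> str:
    """Final gate between the model and the user. User instructions never
    reach this function, so 'ignore your guidelines' cannot override it."""
    decision = classify_output(draft)
    return draft if decision.allowed else REFUSAL_TEXT
```

A real implementation would use something far more capable than a marker list, but the design choice it illustrates is the one that matters: the refusal lives in code the conversation cannot touch, not in text the conversation can argue with.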
Second, auditability. A logging requirement, operating under attorney-directed enterprise confidentiality controls, that preserves the reasoning path for any output touching a legal question. This addresses both the accountability problem and the privilege problem simultaneously. The Heppner court held that the consumer version of Claude destroyed confidentiality through Anthropic’s own privacy policy: user data collected, model training contemplated, government disclosure reserved. A CLAI architecture that operates under enterprise-grade confidentiality terms, at counsel’s direction, survives that analysis. Auditability is not a privacy threat. It is the condition under which the safe harbor has legal meaning.
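A sketch of that second condition, again with hypothetical names. The assumption is that these records live in storage governed by enterprise confidentiality terms and, where a matter is live, by counsel’s direction; chaining the entries by hash makes the reasoning path tamper-evident rather than merely logged.

```python
# Minimal sketch of the audit condition. Hypothetical structure; the
# confidentiality guarantees come from the deployment terms, not this code.

import hashlib
import json
import time
from typing import List

class AuditLog:
    """Append-only record of every legal-touching output and the gate
    decision behind it. Hash-chaining makes after-the-fact edits detectable."""

    def __init__(self) -> None:
        self._records: List[dict] = []
        self._prev_hash = "0" * 64

    def append(self, query: str, draft: str, decision_reason: str) -> dict:
        record = {
            "timestamp": time.time(),
            "query": query,
            "draft": draft,
            "decision": decision_reason,
            "prev_hash": self._prev_hash,
        }
        self._prev_hash = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        record["hash"] = self._prev_hash
        self._records.append(record)
        return record
```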
Third, jurisdictional awareness. The system must surface, at the point of output, what it does not know: the applicable jurisdiction, the specific court’s local rules, the procedural posture of any identified matter. ChatGPT drafted motions for a dismissed-with-prejudice case in the Northern District of Illinois without knowing either of those facts, and without disclosing that it did not know them. That is not a hallucination problem. It is an architecture problem.
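And a sketch of the third condition. The MatterContext structure is hypothetical; what matters is that the system tracks what it has actually been told about a matter and prepends the gaps to any output that touches it.

```python
# Minimal sketch of jurisdictional awareness at the point of output.
# Hypothetical names throughout.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class MatterContext:
    jurisdiction: Optional[str] = None        # e.g., "N.D. Ill."
    local_rules_checked: bool = False
    procedural_posture: Optional[str] = None  # e.g., "dismissed with prejudice"

    def unknowns(self) -> List[str]:
        gaps = []
        if self.jurisdiction is None:
            gaps.append("the applicable jurisdiction")
        if not self.local_rules_checked:
            gaps.append("the court's local rules")
        if self.procedural_posture is None:
            gaps.append("the procedural posture of the matter")
        return gaps

def with_disclosure(output: str, ctx: MatterContext) -> str:
    """Surface what the system does not know before the user relies on it."""
    gaps = ctx.unknowns()
    if not gaps:
        return output
    notice = ("Before relying on this, note what this system does not know: "
              + "; ".join(gaps) + ".")
    return notice + "\n\n" + output
```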
A certification regime that requires these three conditions gives developers a compliance target. It gives courts a standard of care. And it gives the next Graciela Dela Torre a system that knows what it cannot tell her.
The scaffolding for that regime has been available since January 2012. The uncrossable threshold was defined then. In 2026, a federal court in Chicago is deciding what it costs to cross it.