Q&A: A Discussion with Casetext Co-Founders Jake Heller, JD ’10, and Pablo Arredondo, JD ’05

On innovation, artificial intelligence, and the legal profession on the cusp of a tech revolution

Like so many startups, Casetext might have failed.

The original idea for the company came to Jake Heller, JD ’10, when he started at Stanford Law School and saw how far behind legal education was in tech. Unlike, say, finding the best pizza within a 1-mile radius, legal information took hours or days to retrieve.

“I quickly learned how different and relatively inaccessible legal information is. You had to have a Lexis account to read essential court documents. And those huge costs put real pressure on attorneys across the market, from the biggest firms to legal aid work,” Heller said in a 2015 Stanford Lawyer interview.


Heller soon found that the legal profession, too, was in the tech dark ages. After a clerkship on the U.S. Court of Appeals for the First Circuit and a position with Ropes & Gray in Boston, where for two years he had been an associate in the appellate practice group, he decided to address the challenge head-on. A Silicon Valley native who started programming when he was in grade school, Heller aimed to create a free legal crowdsourcing tool—a Wikipedia for the law—that would help attorneys in law firms and nonprofits alike. The idea was accepted by startup incubator Y Combinator, and, after months of development, in October 2013, backed by $1.8 million in seed funding, Casetext was launched. Heller left Ropes and began looking for co-founders. He was introduced to Pablo Arredondo by Paul Lomio, then the SLS associate dean of the law library, who had mentored both Heller and Arredondo during their time at the law school.

A self-taught coder whose innovations have been recognized by the World Economic Forum, the American Association of Law Libraries, and the American Bar Association, Arredondo was a patent litigator at Kirkland & Ellis before he co-founded Occam Law, a Sequoia-backed startup focused on building an alternative legal research engine. A fellow at the Stanford Center for Legal Informatics, he was in the process of building another search engine but, after meeting Heller, realized they shared a similar vision for expanding access to the law and improving legal research. He was soon joined by Heller’s friend Laura Safdie, a Yale Law grad, and the three got to work.

While the early version of Casetext had some success, enticing attorneys to take the time to participate in crowdsourced law proved challenging. The team realized they didn’t have enough traction for a sustainable business.

So, they pivoted to one of Arredondo’s earlier ideas “to create a new category of legal tech product.” Already experimenting with artificial intelligence, they could see how the speed and accuracy of the quickly developing technology could help them create tools that would be even more useful than their earlier ones. Heller, Casetext CEO, and Arredondo, the company’s chief innovation officer, drove development of their AI product CoCounsel, which beta-launched in March 2023. Today, it is helping large law firms and nonprofits like the Innocence Project to parse information at superhuman speeds.

Almost exactly 10 years after its founding, Casetext was acquired by Thomson Reuters in August 2023.

Here, Heller and Arredondo discuss starting a legal tech company, AI, and more with David Freeman Engstrom, JD ’02, LSVF Professor in Law and co-director of the Deborah L. Rhode Center on the Legal Profession. One of the nation’s leading experts in law and innovation, Engstrom co-founded the Filing Fairness Project, an ambitious collaboration with seven states and technology providers to simplify digital court filing systems and widen access, and he has advised multiple states, including as a public appointee to the California State Bar’s Closing the Justice Gap Working Group, on how to rethink regulation of legal services to foster innovation. Earlier this year, he published a book, Legal Tech and the Future of Civil Justice, with Cambridge University Press.

—by Sharon Driscoll


DAVID FREEMAN ENGSTROM: You founded Casetext a few years after graduating from Stanford Law. Can you talk about the problem you were looking to solve?

JAKE HELLER: For as long as I can remember, I’ve been building stuff. My dad started an internet business in our garage in the early ’90s. I think a lot of kids play catch or fix cars with their dads, but I was making websites and coding with mine. My other passion ended up being law and policy. So, when I went to law school, I really couldn’t help but think about how we could apply technology to law. When I started in legal practice, I felt pretty strongly that I was oftentimes working against the technology—that it was much more difficult than it had to be. It was so easy to look up something relatively trivial, like a restaurant that’s open near you late at night. But if you wanted to find a case or look up a key document, it was so time-consuming. That disparity between researching everyday simple stuff and finding the evidence that might keep my client out of jail or save a company billions of dollars in litigation fees was stark to me. I thought if only we could take some of the best stuff happening in consumer products, like those from Apple and Google, and apply it to legal tech, we’d be in a much better place.

ENGSTROM: How has Casetext’s focus evolved?

HELLER: In the early days, we were more focused on what were then cutting-edge technologies around crowdsourcing and information gathering. Companies like Wikipedia, Yelp, GitHub, and Stack Overflow were proving that you could collect a lot of human wisdom and create something that was much better than what came before it.

We thought we’d try to apply that to the legal field to get lawyers and law professors involved in creating a better knowledge database that’s easier to use, less expensive, etc. And for a variety of reasons, it didn’t work.

ENGSTROM: You hit a wall not uncommon to startups. How did you adapt?

HELLER: It turns out that lawyers are not like coders and Wikipedia editors. We bill by the hour and our time is very valuable, so time spent contributing to a free website was hard to come by. So, we went back to the drawing board. And that’s where a lot of the ideas for using more automated and scalable approaches around natural language processing and machine learning came in, which later became artificial intelligence. We leveraged some of Pablo’s earlier ideas to create a new category of legal tech product called Brief Analyzers, our first AI tool. And that’s when things really started to work for us.

Professor David Freeman Engstrom, JD ‘02

ENGSTROM: Let’s talk about AI and how large language models, LLMs, are going to affect the legal services marketplace. It was big news when GPT-4 [OpenAI’s large language model] passed the bar. How important a moment was that?

PABLO ARREDONDO: Jake and I wouldn’t be here if GPT-4 hadn’t been so much better than what came before it. We’re used to linear, incremental progress. But the nature of these language models is that they leap. And so GPT-4 not only passed the multiple-choice questions, it also passed the essay portion and scored somewhere between the 70th and 90th percentiles.

ENGSTROM: That raises the big question of how LLMs, and whatever comes next with GPT or other models, are going to change, or are already changing, legal practice.

HELLER: What we see with these large language models, and specifically with GPT-4, is that we’ve arrived at a time when these models can read and understand, write, and to some degree use logic at what could best be described as a postgraduate level. The biggest opportunity for these kinds of models is to take the activities that require, say, a lot of reading, or a lot of input, and turn that into information and knowledge.

For tasks like document review, reviewing contracts, conducting legal research, and reviewing all the results that come back from Westlaw searches, then compiling that into a comprehensive memo, or taking a number of documents and turning that into a timeline of events in the case—these are all areas where having a machine that can work at superhuman speeds, but at a human level of accuracy, could be a force multiplier for the profession. That’s where we see the most immediate impact for these technologies. This is what Casetext’s new product, CoCounsel, does—it’s an AI legal assistant. An attorney can delegate substantive legal work to it and expect it to get done at a very high level of quality and at the speed of technology, so at superhuman speeds.

ENGSTROM: Can you give me an example of what AI can do?

ARREDONDO: With our beta clients, we are able to do things like parse expert witnesses’ testimony from earlier litigations and find contradictions that might be used to now cross-examine them in a new case. Now that is not ministerial, right? That is at the heart of what a litigator likes to do. You might even say it’s the kind of thing you go to law school to get to do. And so, while not a replacement for a human, we find that it’s been a useful tool in an increasingly wide range of tasks.

“When I started in legal practice, I felt pretty strongly that I was oftentimes working against the technology—that it was much more difficult than it had to be.”

Jake Heller, JD ’10, CEO at Casetext

ENGSTROM: Rosy optimists say AI will democratize law by allowing one-, two-, and three-lawyer shops to compete with highly leveraged BigLaw firms. Do you agree?

HELLER: A lot of our clients are one- to five- or ten-person law firms, and often they’re resource constrained. An example is a nonprofit with which we’ve worked, the California Innocence Project. As you have machines that can be used alongside attorneys working on these projects, to quickly do things like the first cut on a document review, and in-depth legal research, and review of thousands of contracts, it’s going to help, especially in the areas where resources and time are constrained.

ENGSTROM: But BigLaw firms may benefit even more, right? Is it possible AI is just going to allow litigation’s “haves” to come out further ahead than they already do?

Casetext co-founders Laura Safdie, Jake Heller, JD ‘10, and Pablo Arredondo, JD ‘05 (Missy Marie Photography)

HELLER: One thing that’s unique about very large law firms, for example, is they’ve accumulated, sometimes over decades or centuries, millions of documents’ worth of knowledge that they alone have. The contracts that they privately negotiated, the briefs that they have filed, the internal memoranda on research questions, etc. Until very recently, actually accessing that knowledge was very difficult. A lot of this material is just in big folders somewhere, perhaps on a document management system, but it’s very difficult to extract real information from it. One of the amazing things about these large language models is that they can read at superhuman speeds, accurately, and without having to be specifically trained for any particular task. So, you can let it pore over your tens of thousands of contracts and pull out data points that in the future allow you to negotiate with the knowledge that over the course of 10,000 contracts, you have never accepted a certain term, for example.

ENGSTROM: What legal guardrails need to be in place as AI-based legal tech occupies a larger place in the lawyer’s toolkit?

ARREDONDO: I believe that many of the guardrails that are already in place to govern this profession can cover a lot of it. The duty of zealous representation, of competence, of checking the work of something if you’re not sure about it, etc. You can get a lot of the way there just by taking what we already do to regulate ourselves and applying it aggressively and with some common sense. This is not the first technology that lawyers have had to use. It’s not the first technology that could be misused if over-relied upon.

HELLER: I think implicit in Pablo’s answer is that we’re talking about technology for lawyers as opposed to technology that replaces lawyers. But with the super-fast AI assists we’ve been talking about, it’s still up to you as a lawyer using these technologies to make sure you are confident the results that it comes back with are sound.

ENGSTROM: That’s AI that augments the work of lawyers. What about direct-to-consumer AI applications?

HELLER: The situation is different for companies that create UMIs [platforms that provide legal service to the public] that communicate directly with consumers; those are still probably not going to be safe for public use. There are examples of companies that have claimed to offer an AI lawyer that can fully represent people, and of those consumers not getting great representation through those systems. A debate could be had about whether these are better than no legal service at all, but I think for the same reason you don’t want an unlicensed doctor, you’re certainly not going to want to go to an AI “lawyer” or person who isn’t licensed to practice law.

ARREDONDO: We talk a lot about the risks of using AI, which we should, but once we start seeing that AI can actually help you find things that you likely would have overlooked, it very quickly flips the question to what’s the risk of not using AI? I think we’re quickly going to have a situation where it’s assumed that you would use the best tools for your clients. Why wouldn’t you use this AI that could have found, for example, certain emails that humans might have overlooked, or surfaced strategically relevant documents much sooner in the litigation?

“We talk a lot about the risks of using AI, which we should, but once we start seeing that AI can actually help you find things that you likely would have overlooked, it very quickly flips the question to what’s the risk of not using AI?”

Pablo Arredondo, JD ’05, Chief Innovation Officer at Casetext

ENGSTROM: Let’s shift to AI and access to justice. I’ve been working on how technology can help close the justice gap in the roughly 12 million cases each year where Americans are left to navigate a complex civil justice system alone, without lawyers. What’s your view as to the promise and the peril of AI in that part of the system?

HELLER: With AI, we’re going to see a shift away from lawyers having to turn away work and clients because those clients would be unprofitable. AI can make that work profitable. So, one mechanism by which you’re going to alleviate this massive access to justice problem is by developing tools for lawyers so they are armed with really good technology. AI will create opportunities for lawyers to take on more clients at a lower rate and narrow that gap in access to justice.

ENGSTROM: Am I hearing you right that you think AI might result in growth in the legal profession?

HELLER: How many times do major corporations settle suits they believe to be frivolous because the cost of litigating is more than the cost to settle? That’s something that happens in GC offices and boardrooms every single day in America. Yes, there’s a case to be made that there’ll be a lot more legal work when, instead of it costing $500,000 or a million dollars to defend against a claim, it will cost $20,000.

The same is true when you move to other parts of the system. Many small businesses sign contracts that never get touched by a lawyer because it’s too expensive for a lawyer to review. But now, for the first time, a lawyer armed with great technology can offer a service where the technology can do some of the heavy lifting; the lawyer can provide judgment and analysis and leave those parties in a potentially better place.

ENGSTROM: Can you touch on the promise and peril of direct-to-consumer AI tools in access-to-justice cases like eviction?

ARREDONDO: The concern with direct-to-consumer is you just want to make sure that it’s good, quality advice. The draw of it being free and easy and “everyone can do it” is tempting. The fact is that some of the tools we’ve seen out there that purport to give legal advice were giving wrong advice and really bad advice. They were basically automating malpractice, which I don’t think anyone would agree is a step forward.

Years ago, my apartment’s management company sent me a letter that said they needed to raise my rent, and they included some legal phrasing about how they could do it pursuant to a certain California statute. No one likes a rent increase, but the law is the law. But they chose the wrong Pablo to send that letter to because I immediately looked up the statute and, of course, it did not apply whatsoever. I called them out on it, and they apologized and said the letter shouldn’t have gone out. But I asked them: How many other people got this letter? They had a problem. So, you can envision a world where, with AI, maybe the government agencies that are designed to protect people could be policing letters like that at scale. Now, is that direct to consumer? I don’t know what that is. I think that’s a new category of protection, but I’d really like to see that happen, where you have some legal oversight from the government agency but the tech helps uncover these kinds of practices.

What’s your view, Professor Engstrom, on the direct-to-consumer possibilities?

ENGSTROM: As I noted, the best evidence says there are about 12 million people per year who are sued but don’t ever get legal representation. Stunningly, the modal case in state courts—and so the modal case in all of American law, since state courts handle 97 percent of cases—pits an institutional plaintiff (bank, landlord, or government) against a self-represented individual. The challenge is enormous.

I’m of two minds. On one hand, it seems like there’s a real place for the Rocket Lawyers and the Legal Zooms of the world to provide direct-to-consumer document assembly tools that help individuals pull together court filings. The great promise of generative AI is that it can do more than that, parsing an individual’s plain-language description of a problem and mapping it to legal options and outcomes. On the other hand, harnessing AI’s power will require revising laws in nearly every state providing that only lawyers can practice law. That’s a big political challenge, because lawyers are protective of their monopoly. But I also think that opponents of relaxing lawyer rules rightly worry about the possibility of Google Law and, maybe even more so, Google Court, in which tech companies serve both sides in disputes and supplant public courts entirely. We’d lose crucial conflict controls, and we’d also lose the public elaboration of law, which is a big part of the democratic role our courts play. This is for another discussion though.

I’d like to hear more about your work at the Innocence Project. How could that be extended?

ARREDONDO: We partnered with the Innocence Project, which is dealing with a huge flood of folks asking them to review and take their cases. We were able to figure out where CoCounsel, our tool built on the LLMs, could play a role to help them triage that incoming stream.

HELLER: Before we came on board, the Innocence Project estimated their backlog to be four years long. They believe that has now been cut in half. When people apply for the Innocence Project to take on their cases, they send a thick case file with thousands of pages of police reports and trial transcripts and witness reports, etc. Now AI can pore through these documents looking for things like: Did the witness actually identify somebody else at first and then change their testimony? Did the police suspect somebody else at first? These are all small indicia that AI can very capably read in the documents and flag for the attorneys, work that would otherwise take dozens and dozens of hours per case file just to evaluate manually.

ENGSTROM: At Stanford Law, we educate the lawyer leaders of the future. What should we be doing to help prepare them?

ARREDONDO: I think for the doctrinal courses like contract law, constitutional law, etc., I don’t see new tech mattering all that much. When you’re teaching advanced legal research and writing, of course you will now need to talk about how you fold in these new tools. But learning how to write from the blank page, as agonizing as it is to start with a blank page, that process is actually about thinking better and figuring out how to structure an argument. I would urge Stanford to continue to have its students wrestle with a blank page.

HELLER: One of the most important skills that is not taught in law school today, but really ought to be, is delegation and management. Over the next five or 10 years, every lawyer is going to have an AI assistant, or a set of AI assistants, helping them. They will have to decide what they’re going to give to the AI assistant to do. They’ll have to figure out how to review that work for accuracy and completeness, and how to give feedback and watch as it iterates on that feedback to get to the result that they want. They’ll have to decide, overall, what they are trying to achieve with this new team of AI assistants and what is going to be in the client’s best interest.

ENGSTROM: This has been fascinating. Thank you for your time.

ARREDONDO: Thank you very much.

HELLER: Good talking to you, professor. Thanks. SL