Putting Casetext’s CARA to the Test

How Casetext determines whether CARA’s case suggestions are on-point.

By Pablo Arredondo and Chelsea Strauss

By now, you’ve probably heard about CARA, the Case Analysis Research Assistant from Casetext. For the uninitiated: CARA is a simple but powerful way to improve your research. Just drag and drop a brief or memo into CARA, and in seconds CARA suggests cases that are highly relevant to the issues in the brief but that aren’t actually cited in it. (You can give it a spin and learn more here.) It’s like having a super-smart research assistant who can read a brief or memo, along with the entirety of U.S. case law, in moments and give you personalized recommendations on what to research next.


CARA has already become an indispensable tool for many of our users, who include AmLaw 100 litigators, high-powered litigation boutiques, and solo practitioners. These lawyers start their research by dropping the opposition’s brief into CARA to find relevant cases to read as a jumping-off point, and end the research process by dropping their own brief into CARA before filing to make sure they haven’t overlooked any relevant cases.

We’ve heard from litigators who use CARA that they are shocked at how accurately CARA pulls on-point, helpful cases. The most common questions we get are “how does CARA know what cases to pull?” and “how do you make sure CARA is working?” In this post, we’re going to pull back the curtain a bit and explain precisely how we do that.

One key method for both testing CARA’s efficacy and improving CARA over time is a process we call “the comparator”: we send CARA a brief and see whether CARA suggests cases that the opposing brief used. Or we send CARA a court opinion and see whether CARA suggests the same cases that later appellate opinions in the same matter relied upon. This method enables CARA to learn from real litigators and judges: each run-through brings CARA’s machine learning process closer to determining what makes a case relevant to a litigation, and helps it highlight the cases that may have been missed in a matter.

We look most carefully at how CARA performs when suggesting cases based on uploaded lower court opinions, as compared with the appellate opinions that review them, because higher courts and their clerks traditionally have orders of magnitude more time to research and draft their opinions. (While clerking on the First Circuit Court of Appeals, our founder and CEO, Jake Heller, had more than a month to research and draft an opinion; Laura Safdie, our Chief Operating Officer and General Counsel, had only a few days while clerking in the Southern District of New York.) We want CARA to be researching at the level of an appellate judge or clerk.

So how does CARA do in this test? Really well. Take, for example, a version of this test where we look at whether CARA, given a lower court opinion, suggests the cases that the reviewing court cites while overturning that lower court. It’s an important test, because overturning implies the lower court wasn’t right on the law and may have needed more thorough research. In this test, we started with a set of 6,153 pairs of cases, each pair consisting of one lower court case and the appellate case that overturned it. We ran the lower court cases through CARA to see whether CARA would recommend cases later cited in the appellate opinion—in a sense, doing the work of an appellate clerk to find relevant case law the lower court missed.

For 40 percent of these pairings, CARA did exactly that—its research uncovered the same cases that the lower court missed but the reviewing court and its clerks found during their more in-depth research. And even more impressively, we found that for several of these cases, the case CARA found was one that was crucial to the appellate court’s opinion.
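The comparator test described above boils down to a simple overlap check: for each pair, did the tool surface at least one case the appellate court cited but the lower court did not? Here’s a minimal sketch in Python; the data structures and the `suggest` function are hypothetical stand-ins for illustration, not Casetext’s actual implementation:

```python
def comparator_hit_rate(pairs, suggest):
    """Score a research tool against appellate-court citations.

    pairs: list of (lower_cites, appellate_cites) tuples, where each element
           is a set of case identifiers cited in that opinion.
    suggest: a function that takes a lower court opinion's citations and
             returns the set of cases the tool recommends (a stand-in for
             an engine like CARA).
    Returns the fraction of pairs in which at least one suggestion matches
    a case the appellate court cited but the lower court missed.
    """
    hits = 0
    for lower_cites, appellate_cites in pairs:
        missed = appellate_cites - lower_cites   # cases the lower court overlooked
        if suggest(lower_cites) & missed:        # any overlap counts as a hit
            hits += 1
    return hits / len(pairs) if pairs else 0.0
```

Under this scoring, the 40 percent figure above would correspond to a hit rate of 0.40 across the 6,153 pairs.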

Here’s what this looks like in practice. In Fed. Trade Comm’n v. Watson Pharms., Inc., 677 F.3d 1298 (11th Cir. 2012), the Eleventh Circuit held that it was not an antitrust violation for drug companies to agree not to pursue generic competitors to an on-patent drug in exchange for an agreement not to pursue patent invalidity claims. On appeal, the Supreme Court disagreed. In reversing the Eleventh Circuit, the Supreme Court relied on the “rule of reason” analysis from California Dental Ass’n v. FTC, a case the Eleventh Circuit did not cite and an analysis it did not apply. But California Dental comes up in CARA’s results; CARA’s research correctly identified the relevant test to be applied.

CARA also works for older cases, state cases, and cases on completely different topics. We uploaded to CARA United States v. Pink, 284 N.Y. 555 (1940), a state case involving complex issues of international sovereignty, and CARA suggested United States v. Belmont, 301 U.S. 324 (1937). Not only was Belmont cited in the Supreme Court opinion reversing the New York court (United States v. Pink, 315 U.S. 203 (1942)); the Supreme Court’s opinion refers to Belmont as “determinative of the present controversy.”

When we uploaded Nash v. Jeffes, 739 F.2d 878 (3d Cir. 1984) to CARA, it recommended looking at United States v. Mauro, 436 U.S. 340 (1978). Mauro is discussed at length in the Supreme Court opinion that reversed Nash, Carchman v. Nash, 473 U.S. 716 (1985); in fact, the Court cites Mauro 17 times.

These results provide a glimpse of the impact CARA can have in practice. With CARA, an attorney can find on-point cases like these in seconds, rather than spending weeks researching and still worrying about missing the key case. Because of CARA, we hope that in the future we will far less often see an appellate opinion that relies heavily on a case the lower court completely missed.

Of course, this doesn’t mean we’re done refining CARA. Every day, we continue to improve the algorithms that produced these results. Stay tuned as CARA grows, learning from real court cases and litigators to get better and better at finding on-point cases.

 

Pablo Arredondo is Vice President of Legal Research at Casetext and a Fellow at the Stanford Center for Legal Informatics. Chelsea Strauss is a 3L at Suffolk Law School where she was President of the Law Technology and Innovation Association.

Cover image: Clipart.com