The Significance of Statistical Significance

Early last month, Public Citizen, a non-profit devoted to “safeguarding individual rights and promoting public health and safety,” filed a lawsuit against the FDA regarding the Agency’s approval of a high-dose version of Aricept, an Alzheimer’s drug. (Compl., Public Citizen, Inc. v. FDA, No. 12-cv-1461 (D.D.C. Sept. 5, 2012).) The FDA had previously approved 5 and 10 mg doses of Aricept; Public Citizen alleges that a 23 mg dose form, recently approved by the FDA, “has no greater efficacy than the lower doses but has more severe—and potentially life-threatening—side effects.” (Compl. ¶ 1.) At the center of Public Citizen’s complaint lie allegations that the FDA approved the high-dose version of Aricept even though several clinical studies did not demonstrate “statistically significant” benefits. (Id. ¶ 10.)

This intersection between “legal significance” and “statistical significance” has fascinated me since reading The Cult of Statistical Significance by Stephen T. Ziliak and Deirdre N. McCloskey two years ago. The core claims of the book are that society over-relies on statistical significance to make policy determinations, that statistical significance is only good at measuring sampling error, and that other statistical measurements–such as statistical “power” (1 − β, where β is the Type II error rate)–are better “determinants.” (The book also tars R.A. Fisher for wanton self-promotion.) Much has been written about the book’s shortcomings (see, e.g., reviews in AMS; Erasmus; Statistical Papers), but the rhetorical argument–Why is “significance” the only game in town?–seems mighty powerful.
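Ziliak and McCloskey’s complaint about sampling error can be made concrete with a short, hypothetical sketch (the numbers below are mine, not from the book or any study): with a large enough sample, a practically trivial effect becomes “statistically significant,” while a small study can fail to detect a clinically large one.

```python
import math

def z_test_p(effect, sd, n):
    """Two-sided p-value for a one-sample z-test (normal approximation)."""
    z = effect / (sd / math.sqrt(n))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# A practically negligible effect (0.1 on a scale where sd = 10),
# measured in an enormous sample: "significant" at the 0.05 level.
tiny_effect_p = z_test_p(effect=0.1, sd=10, n=100_000)

# A clinically large effect (5.0 on the same scale) in a tiny study:
# not "significant," despite mattering far more in practice.
large_effect_p = z_test_p(effect=5.0, sd=10, n=10)

print(f"tiny effect, huge n:   p = {tiny_effect_p:.4f}")
print(f"large effect, small n: p = {large_effect_p:.4f}")
```

The p-value answers only “how surprising is this result if there were no effect?”; it says nothing about whether the effect is big enough to matter, which is precisely the authors’ point.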

In 2011, the Supreme Court appeared to have adopted at least some of Ziliak’s and McCloskey’s claims. In Matrixx Initiatives, Inc. v. Siracusano, the Court rejected Matrixx Initiatives’ defense that it was not liable for securities fraud claims arising from its failure to report negative side effects of its drug because those side effects were statistically “insignificant.” The Court rejected Matrixx Initiatives’ argument “that statistical significance is the only reliable indication of causation.” (Slip op. at 11.) Rather, the Court described the many instances in which the FDA did not require “statistically significant” findings.

It will be interesting to see how Public Citizen’s lawsuit unfolds given its allegations concerning “statistical significance” and the Supreme Court’s lengthy discussion of the FDA’s use of the metric in Matrixx Initiatives. Like Ziliak and McCloskey, the court will likely address the significance of statistical significance.

1 Response to The Significance of Statistical Significance
  1. Check out this case.

    Adams Respiratory Therapeutics, Inc. v. Perrigo Co., 616 F.3d 1283 (Fed. Cir. Aug. 5, 2010)

    In construing the term “equivalent” in a patent claim, the Federal Circuit declined to adopt the FDA’s definition of the term because the FDA’s definition includes a 90% confidence interval that would inappropriately raise the bar for establishing infringement. The court did find that the patent owner had adopted part of the FDA’s definition—that “equivalent” meant within 80% to 125% of the value to which it is being compared.
    The court found that patent owner Adams had not even mentioned the 90% confidence interval during the reexamination, nor was there any evidence in the specification or prosecution history to indicate that it intended that limitation. Separately, the court expressed concern about how a 90% confidence interval would affect the infringement analysis. “Requiring a 90% confidence interval would inappropriately raise the bar for establishing infringement. Adams must show that it is more likely than not that [the infringer’s] product will have a [certain value] within the 80 to 125% range. Adams is not required to show that Perrigo’s product will meet this requirement 9 times out of 10.”
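The court’s point generalizes: requiring the entire confidence interval to sit inside the 80 to 125% window is strictly harder to satisfy than requiring the point estimate alone to fall within it. A minimal sketch, using made-up measurement data and a normal-approximation interval (nothing here comes from the case record):

```python
import statistics

# Hypothetical ratio measurements (accused product's value as a percent
# of the claimed value); illustrative numbers only.
ratios = [118, 130, 103, 136, 96, 133, 109, 140, 92, 123]

mean = statistics.mean(ratios)
sem = statistics.stdev(ratios) / len(ratios) ** 0.5
z90 = 1.645  # normal critical value for a two-sided 90% interval
lo, hi = mean - z90 * sem, mean + z90 * sem

# "More likely than not" reading: the point estimate lies within 80-125%.
point_in_range = 80 <= mean <= 125

# FDA-style reading: the entire 90% interval must lie within 80-125%.
ci_in_range = lo >= 80 and hi <= 125

print(f"mean = {mean:.1f}, 90% CI = ({lo:.1f}, {hi:.1f})")
print("point estimate within 80-125%:", point_in_range)
print("entire 90% CI within 80-125%:", ci_in_range)
```

Here the point estimate sits comfortably inside the range while the upper end of the 90% interval does not, so the same evidence that satisfies a preponderance standard would fail the FDA-style test — exactly the bar-raising the Federal Circuit declined to impose.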
