By Roland Nadler, 2L, Stanford Law School, Student Fellow, CLB
As many of our readers will know, last week the Food and Drug Administration let the regulatory hammer fall on 23andMe, the last of the major direct-to-consumer personal genomics firms (at least, the last one dealing in single-nucleotide polymorphism analysis). As Hank Greely detailed in this space previously, the company seems to have exhausted the agency’s patience by engaging in an advertising blitz while maintaining a baffling radio silence vis-à-vis its regulator for six months. The move has touched off some truly stimulating debate in the law and biosciences communities; our local NPR, for instance, hosted a lively Forum segment featuring some familiar voices.
There is certainly plenty to debate. On the more formal legal side of things, commentators have been trading arguments over whether FDA can properly exercise jurisdiction over these sorts of direct-to-consumer tests in the first place, an issue that boils down in large part to whether it is correct to say that the product is “intended” for diagnostic use. But looming perhaps larger is the policy debate: should we be regulating this sort of personal genomic service, and if so, how aggressively? As the above-linked radio segment makes clear, the ongoing sparring over this question pits “health exceptionalists” — who believe that a certain level of paternalism is quite justified when it comes to health and medicine, where expertise can be a matter of life and death — against more libertarian-leaning individualists, who counter that consumers are perfectly capable of using their genomic information sensibly, that the market will provide adequate resources and guidance to help them do so, and that, anyhow, no government has the right to stand between individuals and (companies offering them) information about their own bodies.
The clash of paternalist and libertarian views here raises exquisitely interesting political and philosophical issues, from the role of government in correcting market failure to the moral weight of the “right to know oneself” relative to the protection of public health. Law school having imparted to me a healthy, Stephen-Breyer-esque respect for the knotty, sometimes irreducible difficulty of political questions, I would not argue that this is the sort of disagreement we can legitimately short-circuit by trotting out the right facts and figures. All the same, you can take the law student out of the empirically-oriented research group, but you can’t take the empirical research instincts out of the law student; channeling my old boss, Peter Reiner, my strong inclination here is to say, “show me the data!” Indeed, this is hardly the first time I have issued such a call.
What could data tell us about this question of regulatory policy? Well, one of the central concerns of the health exceptionalists is that people are simply not sophisticated enough to use their personal genomic information wisely — the nightmare scenario being, say, scores of users misinterpreting their 23andMe results and demanding preemptive mastectomies when in fact the results had only moved their breast cancer risk from population average (12 percent lifetime risk for American women) to somewhat higher than average but not enormous (say, 18 percent – a 50% “increase” in risk, note). Disastrously overconfident misinterpretations of “negative” results likewise pose a threat. The individualist camp hangs some (but not all!) of its case on disputing these predictions, lamenting that we give common citizens far too little credit when it comes to competently vindicating their own personal health interests. So, there is a factual controversy here amenable to empirical resolution, or at least illumination. (Nor is this the only factual controversy – a whole separate post could be written about disputes over the actual validity of the genotype-phenotype associations upon which 23andMe relies.) And no, we cannot settle it with anecdotes. The fact that you or your cousin or your coworker went through the whole personal genomic testing process in a spectacularly astute (or, for that matter, disastrously ignorant) manner is nice (or not so nice) for them, but practically immaterial to the big picture. Trust me, that’s Science.
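The relative-versus-absolute distinction doing the work in that nightmare scenario can be made concrete with a few lines of arithmetic. This is a minimal sketch using the hypothetical 12-percent and 18-percent figures from above, not clinical data:

```python
# Hypothetical figures from the scenario above, not clinical data.
baseline_risk = 0.12   # average lifetime breast cancer risk for American women
elevated_risk = 0.18   # hypothetical post-test risk estimate

absolute_increase = elevated_risk - baseline_risk       # 0.06, i.e. 6 percentage points
relative_increase = absolute_increase / baseline_risk   # 0.5, i.e. the "50% increase"

print(f"Absolute increase: {absolute_increase * 100:.0f} percentage points")
print(f"Relative increase: {relative_increase:.0%}")
```

The alarming-sounding “50% increase” headline figure and the rather more modest six-point change in absolute risk describe the very same result, and conflating the two is precisely the misreading the health exceptionalists worry about.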
So what do we know in this area? A handful of studies bear on the questions we are interested in here, some more directly than others. To begin with, a study out of Duke University (Johnson & Shaw 2012) looked at the public’s ability to understand health risk information generally — not quite the sort of information one might get from 23andMe, but relevantly similar. Specifically, the study focused on understanding of risk information presented in graphical formats — just as 23andMe delivers it. The results were . . . not encouraging. Participants overwhelmingly reported that the graphs were easy to understand, then went on to fundamentally misinterpret their meaning. Although this study looks like a failing grade for the public’s ability to handle this type of information, we cannot conclude too much yet. The paper’s authors characterize it as a pilot study, and indeed, with a “convenience sample” (translation: we recruited whomever we could find, without any sampling methodology to guard against a biased sample) of only 30, more work remains to be done. On the other hand, the sample skewed toward more educated participants, so a more robust sample might actually paint an even grimmer picture of people’s competence.
A prominent name in this vein of research is Cinnamon Bloss, of the Scripps Translational Science Institute, and two studies on which she is lead author bear on the question at hand. In a paper for the journal Genetics in Medicine (Bloss et al. 2010), Bloss and colleagues reported on a large study they conducted with users of Navigenics, formerly one of 23andMe’s competitors. The study aimed to understand how many users harbored anxieties and concerns about receiving their genetic information — and the answer was roughly half. Nonetheless, more than 80% of the respondents indicated that, concerns or no concerns, they would still want to know their genetic information. The significance of this result might go either way. Clearly demand for this service is strong (though, this was a sample of people who were already interested enough to take the test). But the 50–50 split on concerns might be interpreted to mean that plenty of people (or too few of them, depending on your view of things) will seek out expert opinion in order to assuage their anxieties. Or it might be interpreted to mean that many folks find this stuff scary and overwhelming, which might militate in favor of regulation, unless you think that 49.7% expressing concerns is insufficient.
In a subsequent paper for the New England Journal of Medicine (Bloss et al. 2011), the focus shifted to people who actually had received genomic tests. The investigators wanted to learn how users were affected by their test experience and results in terms of anxiety, diet, and exercise. The results were almost entirely null. Yes, participants whose tests came back with concerning results tended to report somewhat more stress, but as a general matter no significant changes manifested in participants’ anxiety levels, fat intake, or exercise levels between pre-test and post-test evaluation. Moreover, upwards of 90% of participants reported no test-related distress at all. This seems like a small (but not insignificant) point in favor of the individualist view. If people are not losing much sleep over their actual results (which, of course, is consistent with the possibility that they have concerns in anticipation of them), then one of the paternalist worries is dispelled. But, of course, the initial objection — that people, distress or no distress, will rely on their misinterpretations of the results to their detriment — remains unaddressed.
Moving on to a study that addresses the question at hand even more directly, a team of investigators publishing in the journal Public Health Genomics (Leighton et al. 2012) compared the performance of ordinary folk to that of genetic counselors when it comes to interpreting genomic test results. No one will be surprised to learn that the general public fared worse at this task; but the study also found that laypeople were significantly out of step with the genetic counselors in their beliefs about how helpful the results are. Lay respondents likewise considered the results easy to understand but then misinterpreted them, echoing the Johnson & Shaw paper. So this looks like a strong case for paternalism — or, rather, it will, if the results can be replicated. The study team amassed a sample of 171 genetic counselors and 145 members of the general public, the latter being recruited via Facebook, in part by a technique called snowball sampling, in which the investigator reaches out to recruit people through their social network but also asks those recruits to bring in several other recruits. Snowball sampling is not necessarily an invalid tactic (nor, for that matter, is the reliance on Facebook, given the high overlap between social media users and the set of people interested in their personal genomic information), but it falls far short of the gold standard for this sort of research. So, this is another study I would interpret with some caution.
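For readers unfamiliar with the technique, snowball sampling can be sketched in a few lines of code. The toy social network, seed participant, and referral counts below are invented purely for illustration; the Leighton et al. paper does not describe its procedure in these terms:

```python
import random

def snowball_sample(network, seeds, referrals_per_person, waves):
    """Collect a sample by starting from seed participants and asking each
    new recruit to refer up to `referrals_per_person` of their contacts."""
    sampled = set(seeds)
    frontier = list(seeds)
    for _ in range(waves):
        next_frontier = []
        for person in frontier:
            # Only contacts not already in the sample can be referred.
            contacts = [c for c in network.get(person, []) if c not in sampled]
            referred = random.sample(contacts, min(referrals_per_person, len(contacts)))
            sampled.update(referred)
            next_frontier.extend(referred)
        frontier = next_frontier
    return sampled

# A hypothetical five-person network: person -> list of contacts.
toy_network = {
    "alice": ["bob", "carol"],
    "bob": ["alice", "dave"],
    "carol": ["alice", "erin"],
    "dave": ["bob"],
    "erin": ["carol"],
}
sample = snowball_sample(toy_network, seeds=["alice"], referrals_per_person=2, waves=2)
```

The bias the post worries about is visible even in this toy example: everyone who ends up in the sample is reachable from the seed’s social circle, which is exactly why the method falls short of the probability-sampling gold standard.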
The Leighton et al. paper also includes some very useful language in its introductory section. I quote it in full here [internal citations omitted]:
“The ability of individuals to understand risk values similar to those reported with DTC results requires an ability to comprehend, use and attach meaning to numbers, which is referred to as numeracy. Unfortunately, numeracy skills of the general public, especially those with less education, are relatively low. One large national study found that only 13% of individuals in the general population were proficient in numeracy skills and that 66% had only intermediate or basic numeracy skills. Other numeracy studies have shown that individuals have difficulties comparing different risk value presentations and converting a numerical value to a percentage. In addition, low numeracy skills have been shown to be associated with developing inappropriate risk perceptions. . . . Furthermore, the literacy demands of information supplied on DTC genetic testing websites is high. These studies provide evidence that the general public may not be equipped to interpret results from DTC tests without assistance from an appropriate medical professional.”
The poor numeracy skills of the general populace would seem to bolster the paternalist view.
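One of the specific stumbling blocks the quoted passage names is converting between risk-value presentations, e.g. between a “1 in 8” frequency format and a percentage. The conversion itself is trivial, which is rather the point; a minimal sketch (the function names and example numbers are mine, for illustration):

```python
def one_in_n_to_percent(n):
    """Convert a '1 in n' frequency format to a percentage."""
    return 100.0 / n

def percent_to_one_in_n(percent):
    """Convert a percentage to the nearest '1 in n' frequency format."""
    return round(100.0 / percent)

print(one_in_n_to_percent(8))      # a "1 in 8" risk expressed as a percentage
print(percent_to_one_in_n(12.5))   # and back to the "1 in n" format
```

That a large share of respondents in the numeracy literature struggle with exactly this two-line calculation is what gives the paternalist position its empirical bite.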
Perhaps the best and most directly on-point study I have seen on this topic comes from Baylor College of Medicine by way of the American Journal of Bioethics (McGuire et al. 2009). Amy McGuire and colleagues surveyed a robust sample of social networkers, n = 1,087, about their understanding of and attitudes toward personal genomics companies and test results. Some key language from the paper: “Of those who would consider using PGT, 74% report they would use it to gain knowledge about disease in their family. 34% of all respondents consider the information obtained from PGT to be a medical diagnosis.” Another point in the health exceptionalist column. But then: “78% of those who would consider PGT would ask their physician for help interpreting test results, and 61% of all respondents believe physicians have a professional obligation to help individuals interpret PGT results.” So perhaps people will do the prudent thing after all? Finally, to complicate the picture just a little more: “Less than 50% (42%) of all respondents were confident that they understood the risks and benefits of PGT and knew enough about genetics to understand the results (46%).” So, by this study’s lights, the proportion of people essentially calling for help with this sort of information is a little larger than a simple majority. But which way does that cut? Does the prevalence of avowedly unequipped-to-handle-the-truth end-users mean that we need to regulate via restricting access and mandating consumer protections, or does it mean people are likely to seek out expert opinion anyway? I could see arguments either way, but one thing is certain . . . more data, especially from high quality samples, would surely go a long way toward helping answer those questions.
So where are we left after rooting around in the empirical weeds? It is hard to say precisely. There seems to be at least some preliminary data supporting the notion that end-users are bad, on the whole, at understanding health risk in general and personal genomic information in particular. But it also seems there is some reason to believe that consumers are inclined to seek out expert opinion. Even if we had more conclusive data, of course, there would still be plenty of room for political wrangling. A key question, for example, would remain as to just how severe a market failure justifies regulation here: would it be right for FDA to step in if, say, a mere 1% of users reported that they would eschew genetic counseling and interpret their results on their own? Or is the magic number 5%, or 10%, or higher? So while empirical findings — at least those born of asking the right questions and investigating them with unimpeachable scientific and statistical methods — can almost always shed more light on a topic like this, at the end of the day there will always be a role for persuasive political argumentation and pragmatic judgment calls. Nonetheless, this exciting and high-profile policy debate — which appears to be heading for a series of watershed moments in the coming months and years — deserves a rich pool of empirical data for both sides to draw on in the course of formulating the best possible arguments.