Should We Care About the Moral Propriety of Cognitive Enhancement?

Scientific American recently published a short essay by Roy H. Hamilton and Jihad Zreik, asking whether we should use devices (like tDCS, part of a larger category of brain stimulation technologies that I and others have blogged about before) to make ourselves smarter. I enjoyed it, in no small part because the analysis breaks down the potential moral issues with cognitive enhancement into four basic categories — safety, distributive justice, autonomy / coercion, and authenticity — that happen to match the “four cardinal concerns” identified in the major empirical study of public attitudes toward cognitive enhancement that I helped author. But in reading the essay, and in revisiting the first-order moral question about whether we ought to enhance, I also felt a twinge of skepticism about the undertaking.

By way of defusing the slightly provocative title of this post: I do think we should care about the moral propriety of cognitive enhancement. But a few years ago, I would have been considerably less interested in asking when, how, and why we should care. My thinking has since changed. In some respects, the question strikes me as idle. Of course, this sense of “idle” still leaves room for the question to be intrinsically interesting, and therefore worth thinking about because doing good philosophy is a worthy end in itself. I do, however, mean “idle” in the sense that even a very convincing answer to the question will not straightforwardly dictate — might not even impact very substantially — the actual practice of cognitive enhancement in society.

The reason for my skepticism here is simple: politics is, by necessity, broader than any particular moral view. Of course, in some sense, policy does the work of implementing moral consensus; but in a pluralistic society, that consensus is almost always a compromise between competing and deeply held views of the good life. And when it comes to cognitive enhancement — a technology that many, many people will almost certainly desire access to — it would be terribly naive to suppose that we will be able to arrive at a coherent policy position without some level of compromise, even if there are fantastically convincing arguments to the effect that enhancement is wrong.

In light of how the political process actually functions, the question of the moral propriety of cognitive enhancement is non-idle — it actually does some heavy lifting in shaping how the world will look — only to the extent that policies supporting or curtailing access to enhancement are within the bounds of political feasibility. If I read him correctly, my colleague Veljko Dubljevic has made a similar argument. Within those bounds, some moral concerns enjoy more heft than others. Almost nobody’s vision of the good life includes routine exposure to unreasonably unsafe products, so arguments about safety will find plenty of traction (though, of course, the libertarian view of the good life also encompasses freedom from regulation that curbs the marketing of unsafe products, so even here there will be disagreement). On the other end of the spectrum, reasonable people will disagree fundamentally about what it means to lead an authentic existence, not to mention whether doing so is a concern of such moral importance that it is legitimate to foist it on everyone else in society; it seems safe to conclude that arguments from authenticity, then, will have a much smaller impact on what actually happens with enhancement technology. The emerging picture here is that we should care much more about the moral propriety of cognitive enhancement when it touches on issues that are legitimately regulable in the political process, and (again, without disparaging the intrinsic worth of the inquiry) less so when it touches on the kinds of value-judgments that we typically reserve to the sphere of individual moral self-determination.

Of course, I would say that — because on this view, a well-fashioned argument about the moral propriety or impropriety of cognitive enhancement will carry, on average, as much weight as a well-researched empirical claim about the democratic will on the topic. As someone with a stake in the importance of research on public attitudes, I am naturally going to be sympathetic to a position that places such research on equal footing with more philosophical approaches. But even looking past my biases, I think there is something to be said for the idea that social policy requires roughly equal measures of democratic legitimacy and moral legitimacy, and that even the most convincingly argued moral proposition may be stopped in its tracks by political impossibility. Sometimes I hate that this is true — as, for example, when majoritarian bigotry stymies the progress of fundamental civil and human rights for oppressed groups — but that frustration provides all the more reason to craft moral arguments with an eye to their political efficacy.

In other words, Hamilton and Zreik may well be correct that we should not engage in cognitive enhancement — but they, and others who share their view, could find themselves with no recourse, beyond individual-level persuasion, to make that moral conclusion effective in the real world. We should still care about the moral propriety of cognitive enhancement, but when it comes to the technology’s fate in the real world, more is needed.

Roland Nadler, Stanford Law School JD class of 2015, is a student fellow at the Center for Law and the Biosciences.