BioSci-Fi: Do Androids Dream of Electric Sheep?, Philip K. Dick, 1968

“If you dial,” Iran said, eyes open and watching, “for greater venom, then I’ll dial the same. I’ll dial the maximum and you’ll see a fight that makes every argument we’ve had up to now seem like nothing. Dial and see; just try me.” She rose swiftly, loped to the console of her own mood organ, stood glaring at him, waiting.

He sighed, defeated by her threat. “I’ll dial what’s on my schedule for today.” Examining the schedule for January 3, 1992, he saw that a businesslike professional attitude was called for. “If I dial by schedule,” he said warily, “will you agree to also?” He waited, canny enough not to commit himself until his wife had agreed to follow suit.

“My schedule for today lists a six-hour self-accusatory depression,” Iran said.

Philip K. Dick’s 1968 novel isn’t as well known as Blade Runner, the 1982 film (directed by Ridley Scott and starring Harrison Ford) it inspired. But the book is a lot more interesting. While the two share some basic plot points, centering on a bounty hunter’s pursuit of renegade androids, the similarities end there. The film and the book present radically different, in some ways opposite, dystopian futures. Blade Runner features a densely overpopulated Los Angeles, with impossibly tall skyscrapers and streets teeming with people and activity. Dick’s novel is set in a world in which little life remains following a nuclear war. And while both works explore what it means to be human, Blade Runner merely suggests that an artificial intelligence could experience human emotions. Do Androids Dream… digs deeper, using the absence of life as a vehicle to propose empathy as a (perhaps “the”) defining human trait.

In Dick’s novel most humans have emigrated to other planets to avoid radiation poisoning (an option not available to the protagonist, Rick Deckard, whose job is to hunt androids who have illegally immigrated to Earth). Plant and animal life are also extremely rare. The people who remain on Earth are crushed by its emptiness and pine for the presence of life. They own animals as a marker of status (Deckard can’t afford a real sheep, but owns a robotic imitation to keep up appearances), but also because they yearn to nurture another living thing.

The androids, or “Replicants,” lack this capacity to love lives other than their own. While they are virtually indistinguishable from humans, they can be identified through a test that measures their involuntary physiological responses (like blushing) to morally shocking hypothetical scenarios, generally involving the treatment of animals. Although newer Replicant models have been programmed to put on a convincing show of empathy, they don’t experience the emotion. (The androids, it seems, are psychopaths.)

More intriguing than the Replicants’ similarity to humans is the humans’ similarity to androids. In particular, the people in the novel can program their own emotional states with a device called the Penfield mood organ. Users of the mood organ can dial up a broad array of highly specific emotions, like “a creative and fresh attitude toward one’s job,” “the desire to watch TV, no matter what’s on it,” or (most valuably) “pleased acknowledgement of husband’s superior knowledge in all things.” The humans’ emotions, then, are every bit as artificial as the androids’, though in a different way. While the Replicants can only be programmed to simulate expressions of empathy, the mood organ allows its users to actually experience whatever emotion they select. But those emotions aren’t genuine, in the sense that they don’t reflect users’ “true” responses to their circumstances. The very purpose of the device is to allow users to feel differently about their circumstances than they otherwise would.

This disconnect between the state of the world and her engineered emotions is precisely what prompts Deckard’s wife, Iran, to begin scheduling sessions of depression. When she happens to mute her TV, she senses the emptiness of her apartment building and the world outside:

“At that moment,” Iran said, “when I had the TV sound off, I was in a 382; I had just dialed it. So although I heard the emptiness intellectually, I didn’t feel it. My first reaction consisted of being grateful that we could afford a Penfield mood organ. But then I read how unhealthy it was, sensing the absence of life, not just in this building but everywhere, and not reacting — do you see? I guess you don’t. But that used to be considered a sign of mental illness; they called it ‘absence of appropriate affect’ . . . . So I put [despair] on my schedule for twice a month; I think that’s a reasonable amount of time to feel hopeless about everything . . . don’t you think?”

Although the future is here (the book is set in 1992) and the mood organ isn’t, we are gradually approximating the device through psychopharmacology. Ritalin provides “a businesslike professional attitude.” Xanax relieves anxiety and promotes relaxation.  Antidepressants help people feel more confident and hopeful – perhaps enabling some to experience Penfield setting 481, “awareness of the manifold possibilities open to me in the future.”

Several observers have expressed concerns that echo Iran’s discomfort. In its 2003 report, Beyond Therapy: Biotechnology and the Pursuit of Happiness, the President’s Council on Bioethics worried that “mood-brightening drugs . . . will estrange us emotionally from life as it really is, preventing us from responding to events and experiences, whether good or bad, in a fitting way.” The Council feared that new pharmacological interventions “will keep us ‘bright’ or impassive in the face of things that ought to trouble, sadden, outrage, or inspire us” – just as the mood organ prevented Iran from truly feeling the sadness of the empty world around her. Philosopher Carl Elliott makes a similar point in asking whether a depressed Sisyphus would be a good candidate for Prozac. Elliott argues there’s something troubling about decoupling our emotional responses from our circumstances and the state of the world, asking “Who is better off: the contented slave, or the angry one? The man who sins happily, or the one who feels guilt and shame?”

Although these philosophical questions about when and how to use drugs to alleviate emotional suffering haven’t yet come to the fore as legal issues, that’s starting to change. For example, in Gonzales v. Oregon (2006) the Supreme Court grappled with the meaning of the Controlled Substances Act’s requirement that drugs only be prescribed for “legitimate medical purpose[s].” Although the case concerned physician-assisted suicide, at oral argument several justices struggled with whether prescribing morphine merely to “make people happy” could be considered a legitimate medical purpose.

These questions will take on increasing urgency as our ability to modify bodies and brains grows. The New York Times recently ran a front-page story about doctors prescribing attention deficit drugs to help kids perform better in school, regardless of whether they’re diagnosed with ADHD. And in Listening to Prozac, psychiatrist Peter Kramer described his experiences prescribing antidepressants to patients who weren’t depressed, but who found life easier and more satisfying when on the drugs. Are these legitimate medical practices, or have these doctors moved from healing to a form of drug-dealing? As these practices proliferate, how do we distinguish between legitimate uses of psychotropic drugs and illicit “recreational” practices?

Our existing system of drug and device regulation, which focuses on ensuring that drugs are safe and effective, is ill-equipped to consider questions about the “proper” uses of medicine to alter one’s cognition and emotions. Some observers perceive this as a deficiency of the current system, and have advocated expanding regulatory authority to consider not just a drug’s safety and efficacy, but its social implications as well. For example, Francis Fukuyama has proposed creating a new agency with the power to regulate a device like the Penfield mood organ (or its pharmacological equivalents) based on judgments about whether the technology “promote[s] human flourishing” or “pose[s] a threat to human dignity and well-being.” These proposals raise troubling issues of their own, not least of which is how much power we should grant government to interfere with individuals’ decisions regarding their own bodies and minds.

It’s probably no coincidence that when Dick wrote about the Penfield mood organ, the Rolling Stones were filling the airwaves with lines like:

Mother needs something today to calm her down

And though she’s not really ill, there’s a little yellow pill

She goes running for the shelter of a mother’s little helper

And it helps her on her way, gets her through her busy day

If the popularity of the “little yellow pill,” Valium, was enough to make Mick Jagger sneer, today Americans consume enough psychotropic drugs to make Keith Richards blush. We’re building the mood organ one drug at a time, and we’re every bit as ambivalent about it as Dick predicted. If you find that distressing, adjust your dial.

Matt Lamkin
Twitter: @lawbioethics

5 Responses to BioSci-Fi: Do Androids Dream of Electric Sheep?, Philip K. Dick, 1968
  1. “enough psychotropic drugs to make Keith Richards blush” – that sounds impossible (both the “enough” and the “Keith Richards blush,” probably because he’s a replicant).

    But, of course, is caffeine part of the Penfield Mood Organ, or alcohol, or re-reading Pride and Prejudice (or The Hobbit) again? I know it must get old to have people respond with “spectrum” arguments – you can’t complain about X because we already do A through J – but I do think there needs to be a good argument for why we should only go to P, or to somewhere between N through R.

    Thanks for writing this post. I haven’t read Do Androids Dream of Electric Sheep for about 15 years, though I do think it has the best title of any science fiction book; I need to go back and read it again. Should I watch the movie (for the first time)?

  2. I do see some parallels with the debate on PEDs in sports. What is the difference between Kobe getting super-experimental therapy on his knee and the use of EPO? How about a person who uses prosthetic legs to run in the Olympics?
    Natural versus unnatural has never been a really good dividing line; it should be interesting to see how this develops.

  3. Obviously we can change how we think and feel in a lot of different ways. But why would that lead us to think that we can’t (or shouldn’t) draw distinctions among them? Ingesting caffeine or alcohol, or reading a book, before getting behind the wheel can all influence how we drive. But only one of them is potentially criminal, because, of the three, only alcohol impairs driving skills reliably and substantially.

    We have many concerns and several laws about alcohol use because of its fairly reliable potential to impair judgment. I think some of the concerns about interventions like antidepressants are similarly rooted in their potential to impair judgment, though in a very different way – namely, by decoupling one’s emotional responses from her circumstances. If, for example, an antidepressant caused a parent to feel no grief at the loss of her child, I think that would be troubling – and qualitatively different from the kind of effect you’d get from a good book, caffeine, or even alcohol.

    Peter Kramer describes the experience of one of his patients, “Tess,” on antidepressants. Part of what made Tess miserable was her outsized sense of responsibility to others: “She still cared for her mother, and she kept one foot in the projects, sitting on the school committee, working with the health clinics, investing personal effort in the lives of individuals who mostly would disappoint her.” On Prozac, according to Kramer, Tess’ heightened sense of responsibility waned. She was happier, but something may have been lost in the process. I don’t think you have to condemn Tess (or Kramer) to feel some trepidation about this – particularly when you multiply “diminished concern for others” across the 11% of Americans age 12 or older who take antidepressants (http://www.cdc.gov/nchs/data/databriefs/db76.htm).

    Or consider that “[w]omen are 2½ times more likely to be taking an antidepressant than men,” and that nearly 1 in 4 American women in their 40s and 50s take antidepressants (http://www.health.harvard.edu/blog/astounding-increase-in-antidepressant-use-by-americans-201110203624). Maybe that’s because there’s something really wrong with American women’s brains. But it could also be because of societal circumstances that leave women feeling alienated, anxious, or depressed. If it’s the latter, are we better off with medicated women who feel more content with their lot, or alienated women who want their circumstances to change?

    Our emotions connect us to the outside world. Unpleasant emotions – like disgust, indignation, or anger in response to injustice – have value. Disconnecting emotions from circumstances can have costs. Some interventions seem to pose more danger of this than others.

    That said, while I sympathize with these concerns I don’t think they serve as good bases for legal restrictions. On the contrary, I think we should be getting government out of the business of regulating consciousness – a contention I plan to explore in my next post.

  4. Alcohol and driving clearly can affect third parties. Do parents minimizing their grief have the same effects on third parties? We can tell stories where it does, but they are just stories – the effects aren’t clear.

    The Tess argument, it seems to me, is an argument for good knowledge up front – “you know, if you take this you may find that your anxiety about your responsibilities fades away, but so may your sense of responsibility; are you sure you want to do this?” – as well as an argument for (relative) reversibility, which might mean drugs or implants rather than surgery.

    Every time someone takes an action to improve their own, individual life they arguably reduce the pressure to change the societal pressures/world realities that had held them down. Can we forbid them to seek to improve their lives, at least assuming good informed consent and barring clear and negative third party consequences?

  5. It may be that “[e]very time someone takes an action to improve their own, individual life they . . . reduce the pressure to change the societal pressures/world realities that had held them down.” But I think part of the question is whether these practices necessarily improve individuals’ own lives. Granted, people engage in them to alleviate some kind of psychic distress. But the same can be said of countless activities that harm people. To the extent people have good reason to be distressed but alter their brains rather than their circumstances, I think the “story” that this could have detrimental effects on individuals and society is pretty plausible.

    That said, in describing the concerns raised by drugs like antidepressants I didn’t intend to endorse laws restricting these practices. In fact I think the government should pull back (though not withdraw entirely) from interfering with how people alter consciousness. So I don’t think we should “forbid” Tess from alleviating her suffering, even if she becomes less committed to helping others as a result. But the fact that we shouldn’t impose our own choices on Tess doesn’t mean we shouldn’t be concerned about the consequences of these kinds of practices – for individuals and for society at large.

    My point about alcohol vs. caffeine and book-reading wasn’t that we should criminalize antidepressant use. I was just responding to your suggestion that since we already modify consciousness in many different ways there’s no basis for distinguishing antidepressant use from other interventions. In fact we routinely distinguish among various ways of altering consciousness based on assessments of their risks – and not merely risks of harms to third parties. Controlled substances are restricted based in large part on the dangers they pose to users, not to others. Again, I’m not advocating imposing similar legal restrictions (and certainly not criminal penalties) on the kinds of practices discussed in my post. I’m merely arguing that it’s reasonable to be more concerned about these practices than the effects of caffeine or book-reading.
