BioSci-Fi: Do Androids Dream of Electric Sheep?, Philip K. Dick, 1968

“If you dial,” Iran said, eyes open and watching, “for greater venom, then I’ll dial the same. I’ll dial the maximum and you’ll see a fight that makes every argument we’ve had up to now seem like nothing. Dial and see; just try me.” She rose swiftly, loped to the console of her own mood organ, stood glaring at him, waiting.

He sighed, defeated by her threat. “I’ll dial what’s on my schedule for today.” Examining the schedule for January 3, 1992, he saw that a businesslike professional attitude was called for. “If I dial by schedule,” he said warily, “will you agree to also?” He waited, canny enough not to commit himself until his wife had agreed to follow suit.

“My schedule for today lists a six-hour self-accusatory depression,” Iran said.

Philip K. Dick’s 1968 novel isn’t as well known as Blade Runner, the 1982 film (directed by Ridley Scott and starring Harrison Ford) inspired by the book. But the book is a lot more interesting. While the two share some basic plot points, centering on a bounty hunter’s pursuit of renegade androids, the similarities end there. The film and the book present radically different, in some ways opposite, dystopian futures. Blade Runner features a densely overpopulated Los Angeles, with impossibly tall skyscrapers and streets teeming with people and activity. Dick’s novel is set in a world in which little life remains following a nuclear war. And while both works explore what it means to be human, Blade Runner merely suggests that an artificial intelligence could experience human emotions. Do Androids Dream… digs deeper, using the absence of life as a vehicle to propose empathy as a (perhaps “the”) defining human trait.

In Dick’s novel most humans have emigrated to other planets to avoid radiation poisoning (an option not available to the protagonist, Rick Deckard, whose job is to hunt androids who have illegally immigrated to Earth). Plant and animal life are also extremely rare. The people who remain on Earth are crushed by its emptiness and pine for the presence of life. They own animals as a marker of status (Deckard can’t afford a real sheep, but owns a robotic imitation to keep up appearances), but also because they yearn to nurture another living thing.

The androids, or “Replicants,” lack this capacity to love lives other than their own. While they are virtually indistinguishable from humans, they can be identified through a test that measures their involuntary physiological responses (like blushing) to morally shocking hypothetical scenarios, generally involving the treatment of animals. Although newer Replicant models have been programmed to put on a convincing show of empathy, they don’t experience the emotion. (The androids, it seems, are psychopaths.)

More intriguing than the Replicants’ similarity to humans is the humans’ similarity to androids. In particular, the people in the novel can program their own emotional states with a device called the Penfield mood organ. Users of the mood organ can dial up a broad array of highly specific emotions, like “a creative and fresh attitude toward one’s job,” “the desire to watch TV, no matter what’s on it,” or (most valuably) “pleased acknowledgement of husband’s superior knowledge in all things.” In a sense, then, the humans’ emotions are every bit as fake as the androids’, though fake in a different way. While the Replicants can only be programmed to simulate expressions of empathy, users of the mood organ actually experience whatever emotion they select. But those emotions aren’t genuine, in the sense that they don’t reflect users’ “true” responses to their circumstances. The very purpose of the device is to allow users to feel differently about circumstances than they otherwise would.

This disconnect between the state of the world and her engineered emotions is precisely what prompts Deckard’s wife, Iran, to begin scheduling sessions of depression. When she happens to mute her TV, she senses the emptiness of her apartment building and the world outside:

“At that moment,” Iran said, “when I had the TV sound off, I was in a 382; I had just dialed it. So although I heard the emptiness intellectually, I didn’t feel it. My first reaction consisted of being grateful that we could afford a Penfield mood organ. But then I read how unhealthy it was, sensing the absence of life, not just in this building but everywhere, and not reacting — do you see? I guess you don’t. But that used to be considered a sign of mental illness; they called it ‘absence of appropriate affect’ . . . . So I put [despair] on my schedule for twice a month; I think that’s a reasonable amount of time to feel hopeless about everything . . . don’t you think?”

Although the future is here (the book is set in 1992) and the mood organ isn’t, we are gradually approximating the device through psychopharmacology. Ritalin provides “a businesslike professional attitude.” Xanax relieves anxiety and promotes relaxation. Antidepressants help people feel more confident and hopeful – perhaps enabling some to experience Penfield setting 481, “awareness of the manifold possibilities open to me in the future.”

Several observers have expressed concerns about these practices that echo Iran’s discomfort. In its 2003 report, Beyond Therapy: Biotechnology and the Pursuit of Happiness, the President’s Council on Bioethics worried that “mood-brightening drugs . . . will estrange us emotionally from life as it really is, preventing us from responding to events and experiences, whether good or bad, in a fitting way.” The Council worried that new pharmacological interventions “will keep us ‘bright’ or impassive in the face of things that ought to trouble, sadden, outrage, or inspire us” – just as the mood organ prevented Iran from truly feeling the sadness of the empty world around her. Philosopher Carl Elliott makes a similar point in asking whether a depressed Sisyphus would be a good candidate for Prozac. Elliott argues there’s something troubling about decoupling our emotional responses from our circumstances and the state of the world, asking “Who is better off: the contented slave, or the angry one? The man who sins happily, or the one who feels guilt and shame?”

Although these philosophical questions about when and how to use drugs to alleviate emotional suffering haven’t yet come to the fore as legal issues, that’s starting to change. For example, in Gonzales v. Oregon (2006) the Supreme Court grappled with the meaning of the Controlled Substances Act’s requirement that drugs only be prescribed for “legitimate medical purpose[s].” Although the case concerned physician-assisted suicide, at oral argument several justices struggled with whether prescribing morphine merely to “make people happy” could be considered a legitimate medical purpose.

These questions will take on increasing urgency as our ability to modify bodies and brains grows. The New York Times recently ran a front-page story about doctors prescribing attention deficit drugs to help kids perform better in school, regardless of whether they’re diagnosed with ADHD. And in Listening to Prozac, psychiatrist Peter Kramer described his experiences prescribing antidepressants to patients who weren’t depressed, but who found life easier and more satisfying when on the drugs. Are these legitimate medical practices, or have these doctors moved from healing to a form of drug-dealing? As these practices proliferate, how do we distinguish between legitimate uses of psychotropic drugs and illicit “recreational” practices?

Our existing system of drug and device regulation, which focuses exclusively on ensuring patient safety, is ill-equipped to consider questions about the “proper” uses of medicine to alter one’s cognition and emotions. Some observers perceive this as a deficiency of the current system, and have advocated expanding regulatory authority to consider not just drugs’ safety, but their social implications as well. For example, Francis Fukuyama has proposed creating a new agency with the power to regulate a device like the Penfield mood organ (or its pharmacological equivalents) based on judgments about whether the technology “promote[s] human flourishing” or “pose[s] a threat to human dignity and well-being.” These proposals raise troubling issues of their own, not least of which is how much power we should grant government to interfere with individuals’ decisions regarding their own bodies and minds.

It’s probably no coincidence that when Dick wrote about the Penfield mood organ, the Rolling Stones were filling the airwaves with lines like:

Mother needs something today to calm her down

And though she’s not really ill, there’s a little yellow pill

She goes running for the shelter of a mother’s little helper

And it helps her on her way, gets her through her busy day

If the popularity of the “little yellow pill,” Valium, was enough to make Mick Jagger sneer, today Americans consume enough psychotropic drugs to make Keith Richards blush. We’re building the mood organ one drug at a time, and we’re every bit as ambivalent about it as Dick predicted. If you find that distressing, adjust your dial.

Matt Lamkin
Twitter: @lawbioethics