We all know, or have heard about, someone who’s refused to get a COVID-19 vaccine. While some individuals have medical or religious reasons for avoiding vaccination, for others, different factors influence the decision. Despite the importance of vaccines for public health — and the serious risks of remaining unvaccinated — getting the shot may feel like a betrayal of certain political beliefs.
But where does this feeling come from? Throughout the pandemic, some politicians and other influencers have promoted advice that isn’t based on scientific data — sometimes with good intentions, other times with intent to mislead. Either way, the outcome is the same: misinformation.
While some might say making or spreading known false statements related to the vaccine should be criminalized, the First Amendment, which guarantees free speech, continues to provide protection for people who promulgate such faulty information. So, how can the spread of misinformation be stopped without quashing free speech?
I spoke with Michelle Mello, professor of law and of health policy at Stanford, and asked her to address the Supreme Court’s view on vaccine misinformation — an issue she examined in a recent Viewpoint piece in JAMA Health Forum. The following Q&A has been edited and condensed.
Several countries have criminalized vaccine misinformation, but the United States has not. Has the Supreme Court’s interpretation of the First Amendment allowed the continued spread of false claims?
The Supreme Court has held that many kinds of false statements are protected speech under the First Amendment. In a 2012 case called United States v. Alvarez, the Supreme Court struck down a law that made it a criminal offense to lie about having received military medals. It refused to hold that a statement’s falsity put it outside the realm of First Amendment protection.
There are, however, some kinds of false speech the government can penalize, including lying in court, making false statements to the government, impersonating a government official, defaming someone and committing commercial fraud. But it’s a pretty limited list. The Supreme Court’s general view is that false statements can often be valuable in allowing people to challenge widely held beliefs without fear of repercussions, and that things could go pretty wrong if the government had broader latitude to regulate them.
What risks would be involved in allowing the government to police false claims?
One problem is that we may not all agree on how demonstrably false something has to be in order for it to be restricted. For vaccine risks, for example, some claims about health harms have been persuasively disproven, while others have simply not been studied. So, if I claim that a vaccine was the reason my hair fell out, is that false or just not demonstrably true? Should the difference matter?
A related problem is that for some claims, especially scientific ones, the knowledge base that makes a statement true or false evolves over time. To complicate things further, some people who disseminate false statements know they are lies, while others believe they’re true. Finally, many people just don’t trust the government to not abuse the power to declare something false speech.
All of these challenges make the Supreme Court wary of restricting speech that might ultimately prove to be truthful, or at least contribute to public debate that aids in discovering the truth. The Supreme Court would prefer to let the decision about what’s true be hashed out by “the marketplace of ideas.”
But the interesting thing is, these problems also apply to areas where courts do allow regulation of false statements. Lawmakers have found ways of addressing them, such as requiring the government to prove certain things about the statement or the speaker’s state of mind. It’s not clear, therefore, why the Supreme Court draws the lines it does.
How does our reverence for freedom of speech in the United States intensify our vulnerability to public health threats?
It limits our policy toolkit. Rather than curbing misinformation about health issues, the government is relegated to trying to fight it with counter-speech. Although the notion that clashing viewpoints will surface the best ideas is appealing to judges, it doesn’t always work out in practice. People’s false beliefs arising from vaccine misinformation, in particular, are extremely difficult to change.
First Amendment protections also make it hard for the government to do things like require warnings about health risks. For example, the Food and Drug Administration fought legal battles for years over its initiative to require cigarette makers to put pictorial warning labels on cigarette packs, with the industry arguing that the requirement constituted compelled speech in violation of free speech rights. The City of San Francisco had similar problems when it tried to require beverage companies to put warnings on their billboard advertisements about the link between consumption of sugary drinks and obesity.
What is the broader impact of taking medical advice from non-medical professionals who may have an agenda not grounded in science or medicine?
Many people — including some medical practitioners — have made it harder for Americans to understand how to protect themselves during the pandemic by crowding the information space with claims that aren’t evidence-based.
It can be hard for people to distinguish between reliable and unreliable sources of information, especially about a new health threat and especially when unreliable information is disseminated by individuals who seem trustworthy by dint of their professional role.
In the case of COVID-19 vaccines, misinformation has led as many as 12 million Americans to forgo vaccination, resulting in an estimated 1,200 excess hospitalizations and 300 deaths per day, according to Johns Hopkins’ Center for Health Security.
What are the ramifications of the continued politicization of the COVID-19 pandemic on our ability to make public health decisions?
Often, when an issue becomes politicized, people view messages from the group they don’t identify with as suspicious, and messages from the group they do identify with as trustworthy — regardless of how well the messages align with the evidence. If we can’t make sound decisions about how we interact with information, we can’t make sound decisions about health.
(Originally published by Stanford Medicine’s Scope Blog on April 21, 2022)