Who’s to Blame for Bad Things on the Internet?

YouTube launched in 2005 with a simple message: broadcast yourself.

And people did. Oh boy, did they ever – not just on YouTube but on Facebook, Twitter, Instagram, and a dozen other sites. They shared the details of their lives big and small, posted pictures of their cats, discussed the news of the world, and got into arguments with friends and strangers alike. Lots and lots of arguments.

The ability of anyone and everyone to share their thoughts and ideas with the world has led to the greatest outpouring of creativity in human history by at least an order of magnitude. But it has also produced a flood of hate speech, bullying, and fake news.  

Mark A. Lemley, William H. Neukom Professor of Law and Director, Program in Law, Science & Technology

The attacks – often targeted most virulently against women and minorities – have driven some people to suicide, driven others off the Internet altogether, and made it harder and more dangerous for people to speak out. The flood of fake news has left people unsure who or what to trust, reduced Americans’ belief in science and education, opened our public discourse and our elections to foreign manipulation, and driven people to plan to shoot up a pizza parlor supposedly involved in a ludicrous conspiracy theory. And all of it has reduced our civility to one another and our trust in American institutions.

Tech companies, we are told, brought this epidemic of incivility upon us. Their platforms shared and promoted fake news, enabling Russian bots to steal the 2016 presidential election. Or, on the contrary, they’ve deliberately promoted liberal voices and silenced conservative ones. They reward people for the most outrageous posts, driving down the level of discourse to the point where Godwin’s law (“as an online discussion grows longer, the probability of one party comparing another to Hitler approaches one”) now seems a truism. Most of all, we worry, they concentrate too much power in too few hands.

Since the 2016 election highlighted just how bad things had gotten, the backlash against Silicon Valley has been swift indeed. Once the darlings of the public and the media – the place everyone wanted to go work – tech companies are under siege. Governments around the world demand that platforms make sure that bad things get taken down or, increasingly, that they make sure those things never get posted in the first place. Republicans and Democrats alike want them regulated – though often in mutually contradictory ways. Serious presidential candidates want to break them all up.  

There is no question that tech companies have things to answer for. And we need some oversight of companies with that much power over our lives. But it’s too easy to blame them for the problems of the modern world.

The rush to blame tech companies comes from our collective unwillingness to confront a truer, and more troubling, culprit for the misinformation, hate, and incivility rampant on the Internet: ourselves.  

We can’t blame tech companies alone for fake news. People have been sharing fake news as long as there has been language. Were there Russian bots out there posting fake news in hopes of getting Trump elected? Absolutely. But there were fewer than you think. And most of us aren’t friends with Russian bots. If their posts had any traction, it is because we the people read their misinformation, believed it, and shared it.  

Sure, it’s easier to find fake news online, just as it’s easier to find all news online. But tech platforms excel at showing people what they want to see. Headline writers write misleading headlines to draw people in. And people, not Russian bots, share fake news. We show tech companies, headline writers, and our friends that we like it and they send us more.  

Indeed, tech companies – unlike their offline counterparts – can actually do a pretty good job of spotting and killing viral fake news. They flag it with fact checks, deemphasize it in their news feeds, and even ban posters who repeatedly mislead the world. But it turns out that fake news continues because people want it to.

A recent study by Yochai Benkler at Harvard found that people on the left and in the political center share fake news stories just like people on the right, but news media and their fellow citizens catch those stories and correct them, after which they mostly die out. On the right, by contrast, fake news gets shared and amplified even after it is exposed as fake. That’s a real problem for a democratic society, but it’s not a problem caused by tech companies. Indeed, some Republican legislators want to regulate tech companies precisely in order to prevent them from blocking fake news. I wouldn’t have thought the problem with the Internet was not enough misinformation and hate speech, but apparently that’s the view in some quarters.

Bullying and racism too are by no means inventions of the Internet. True, the Internet does provide a place for racists and bullies to find others who think like them. But allowing people to find others is just what the Internet does. We celebrate that fact when it’s a gay teenager in rural Tennessee who doesn’t kill himself because he finds out on the Internet that there are others out there just like him.  

Tech companies didn’t make people think or do horrible things. They may have allowed people to find others who share their beliefs, but those people were already out there to be found.

It wasn’t Russian bots who bullied female gamers and journalists to try to keep video games a white male enclave. It wasn’t Russian bots who marched on Charlottesville, or who circulated ludicrous conspiracy theories about abortion-promoting pizza parlors, who sent death threats to those they disagree with on the left or the right, or who shot up mosques and synagogues (or baseball fields with Republican legislators). It was our neighbors, our friends, our family members.

And that is the problem we must confront.

There is one way the Internet differs from offline communication. On the Internet, as the cartoon goes, no one knows you’re a dog – or a Russian bot. Anonymous speech can be important for various reasons, including confronting a repressive government. But anonymous speech about everyday matters can also be corrosive. Anonymity makes it harder to know who to trust. It can also embolden liars, bullies, and racists, who can pretty much guarantee they won’t be held responsible for things they post online.  

That needs to change.  We can’t change the fact that there are bad people out there.  The Internet – and the 2016 election – have shown us that there are a lot more of them than we thought. But ideally they should have to identify themselves so we can hold them accountable, legally and morally, for the things they say and do that violate the law.  

And tech companies can help us do so. Congress can’t require speakers to disclose their identities online. The First Amendment protects anonymous speech, and that is unlikely to change. But tech companies, as private entities, aren’t bound by the First Amendment. They can and should require posters to identify themselves online, either with their actual name or with a pseudonym. Identification allows people both to be sued (or prosecuted) if they violate the law and to be blocked or shunned if they abuse our trust, violating the norms of discourse that have allowed our country to get this far.

True, sometimes people have good reason for not wanting to identify themselves in public. They may fear an abusive ex, an employer with something to hide, or a hostile government. But even in circumstances where we don’t want to force people to identify themselves by name online, Internet intermediaries could require a persistent pseudonym and keep the information about the identity of the poster confidential unless faced with a warrant or court order. Again, the law doesn’t and can’t require them to do so. But it is probably the best practice. It is no accident that sites like 4chan, Reddit, and Twitter that make little or no effort to identify posters are the ones awash in hate speech and fake accounts. And cases like AutoAdmit (which I litigated) show how hard it is to find those anonymous speakers if the Internet company deliberately chooses not to collect identifying information.

I come to this conclusion reluctantly. I am an old-line, first-generation Internet user, and I cherish the ability it offered to reinvent yourself. But too many people have taken the opportunity to discover their inner rapist, their inner bigot, their inner terrorist. Tech companies have the right and the moral responsibility to police their pages to make them usable for people without having to live in constant threat of rape or murder. And doing that requires knowing who is posting those threats.

Identifying people isn’t a panacea. Donald Trump has shown us that you can attach your name to truly vile sentiments and get away with it. And maybe that will embolden others to attack people online even when their names are attached to those attacks. But the worst of the Internet has operated under cover of anonymity. Removing that cover won’t restore civility to a country that seems to have lost it, but it’s a start.