Regulating Online Terrorist Content: A Discussion with Stanford CIS Experts About New EU Proposals

Early on March 15, a suspected white nationalist terrorist stormed two New Zealand mosques, killing some 50 people. The suspect tweeted his plans and live-streamed the massacre on social media, and the footage remained up for hours. In the discussion that follows, two experts from Stanford’s Center for Internet and Society discuss online extremism, the European Commission’s pending draft regulation of online “terrorist content,” and the possibility of regulating hateful and violent content. Daphne Keller is Director of Intermediary Liability and was formerly Google’s Associate General Counsel. Joan Barata is a Consulting Intermediary Liability Fellow with CIS and has long advised international organizations, such as the Organization for Security and Co-operation in Europe (OSCE), on freedom of expression.

Why should readers outside of Europe be interested in the EU’s new draft regulation of online terrorist content?

Keller: Europe is very much in the driver’s seat in regulating major platforms like Facebook and YouTube right now. That’s especially the case when it comes to controlling what speech and information users see. Whatever the EU compels giant platforms to do, they are likely to do everywhere — perhaps by “voluntarily” changing their global Terms of Service, as they have in response to EU pressure in the past. For smaller platforms outside the EU, the regulation will matter a lot as well. Readers may remember the huge impact of the EU’s General Data Protection Regulation, or GDPR; this regulation has a similar extraterritorial reach, applying to websites and apps built by companies outside the EU. And it carries the same enormous fines — up to 4% of annual global turnover. So any company that hosts user content, even tiny blogs or newspapers with comments sections, will need to deal with this law.

What does the regulation say?

Barata: Right now there are three versions of the regulation, which are being reconciled into a single draft in a “trilogue” process between the EU Parliament, Commission and Council. The drafts all define new responsibilities for companies hosting content posted by users, like Facebook, Twitter, Instagram and many others. The aim is to prevent the dissemination of what the text calls “terrorist content.” Two of the drafts have particularly extreme provisions, including letting national law enforcement authorities skip trying to apply the law or respect free expression rights, and simply pressure platforms to take down users’ online expression under their Terms of Service. They also let authorities require any platform — even very small ones — to build technical filters to try to weed out prohibited content. That’s a problem because, from what little we know about platform filtering systems, they seem to make an awful lot of mistakes, which threaten lawful expression by journalists, academics, political activists, and ordinary users. The most recent draft, from the EU Parliament, is better because it drops those two provisions. But it still retains one of the worst requirements from the other drafts: platforms have to take down content in as little as one hour if authorities demand it. For almost any platform, but certainly for small ones, that kind of time pressure creates a strong incentive to simply take down anything the authorities identify.

What might constitute illegal “terrorist content”?

Barata: In most of the drafts, the definition is really broad and likely to create serious difficulties of interpretation. It includes terms like incitement, advocacy, and glorification, and thus potentially covers a very wide range of expression. The most problematic aspect is that it does not require any clear, immediate and demonstrable risk that a terrorist act will be committed as a result of a certain piece of content being disseminated. This lack of legal certainty runs against the clear standards set by the European Convention on Human Rights regarding legitimate limits on freedom of expression. Courts reviewing decisions adopted by competent authorities in this area may have very difficult work to do. Moreover, we may also see very broad differences across EU member states as they incorporate this notion into their respective criminal law systems and enforce it.


Who decides what counts as “terrorist content”?

Barata: The regulation puts power in the hands of “competent authorities” that will need to be defined at the national level. The latest draft does not specify that these authorities must be part of the court system, although it does at least require that they be protected from most political influence. Depending on the member state, many diverging options may be adopted. In any case, if police or other law enforcement tell a platform that they have authority to order content taken down, it is hard to imagine most service providers opposing them or questioning their authority. In many cases, platforms will also decide for themselves what counts as “terrorist,” because the regulation forces them to enshrine their own definition of terrorist content in their Terms of Service. Such a definition would not only need to be in line with the one included in the regulation, but would also have to be broad enough to cover the different laws introduced by member states. This clearly incentivizes the adoption of an overbroad definition (even broader than the one included in the proposal) of what constitutes terrorist content.

You’ve written about the EU’s proposal, raising the concern that by overzealously regulating this speech, governments might push people underground. Can you talk about that?

Keller: This is one of the biggest problems with the EU’s draft regulation. No one knows if laws like this, which skip courts or public processes and give platforms reason to err drastically on the side of taking down legal speech and information, will make us any safer from real-world violence. Lawmakers have not taken the time to consult the evidence or hear from experts on this. As I discussed in a publication last year, security researchers are divided on the question, but a majority doubt that we can meaningfully address terrorism by purging content from Internet platforms. One reason is, as you mention, that people vulnerable to radicalization will be driven into echo chambers and dark corners of the Internet. Other reasons have to do with the inevitable collateral damage from platforms’ rushed content removal decisions. Platforms make mistakes and take down the wrong user speech all the time. For example, YouTube has deleted over 100,000 videos from the Syrian Archive, an organization that documents human rights abuses in Syria. Given current US and EU law enforcement priorities and pressures on platforms, we should expect mistakes like this to disproportionately hurt people who are speaking Arabic, talking about Islam, and even engaging as moderate voices on topics like immigration or ISIS. We should think long and hard — and listen carefully to experts and members of affected communities — before deciding this is a realistic way to deter radicalization.

What other concerns do you have about it?


Keller: This law puts in place requirements that will be difficult enough for giant companies like Facebook or YouTube, but may prove impossible for smaller platforms. Having someone available to take content down on one hour’s notice just isn’t realistic for platforms run by a single person or a small handful of people. They also can’t afford to risk the regulation’s astronomical penalties — which run as high as 4% of annual global turnover. The very biggest platforms famously do things like spending $100 million on video identification tools or hiring 20,000 moderators, but no one else can do that kind of thing. Joan and I talk a lot about the regulation as a threat to free expression. But it’s important to pay attention to the competition concerns, too, and note how laws like this help entrench incumbent platforms.

Does the US regulate hate speech and terrorist content? Could it in the way the EU is proposing?

Keller: The US situation is very different, because so much speech is protected by the First Amendment. We do prohibit some extremist content under laws about material support of terrorism. But some really horrific content, like the video of the atrocity in Christchurch, is probably legal here. There is a huge gap between what US law prohibits and what most people think platforms should prohibit as a moral matter.

Importantly, though, platforms are free to ban this legal-but-harmful or legal-but-offensive content if they want to. A 1996 law called Communications Decency Act Section 230, better known as CDA 230, freed them up to do that — and to build robust tools and teams to do the job. People think of CDA 230 as an immunity for platforms leaving user content online, but that same immunity — which allows them to curate and moderate content without assuming legal responsibility for it — lets them tackle things like the Christchurch video.

What would you propose for the EU—and the US—as a workable solution?

Keller: We won’t get to a workable solution until we have a serious and evidence-based conversation. Lawmakers should slow down and consult with radicalization experts about what kind of legal changes might actually make a difference. They should demand real evidence — and not just take companies’ word for it — about how existing filtering efforts really work, and what problems platforms are seeing. Lawmakers’ hands aren’t tied here: we have a growing body of knowledge about the dials and knobs they can adjust to go after illegal content online without taking down vast amounts of legal speech in the process. Lawmakers in the EU and elsewhere can use that information to build much smarter laws.

Online platforms’ use of algorithms seems to push us into groups of “sameness.” Do you think that exacerbates the problem of extremists’ online communities? What can be done to lessen that?

Keller: Advertising-based platforms want us to stay on the site and keep clicking. They are designed to give us what we want — or at least what they think we want based on our behavior and that of other users. Of course, people’s choices to click on garbage content (to use a technical term) can be a lot like our choices to stare at an accident or grab a candy bar in the checkout aisle. If we took more time to think about it, we might do something healthier. So one way forward might be to give users better control over what they see, letting them change their account settings to tell platforms they want a more diverse diet. Another would be to override people’s apparent preferences — to say, even if you think you want a social media diet of junk food or hateful rants, we are going to make you “eat your kale” by injecting different perspectives into your social media feed. I don’t think solutions like that should be off the table, but I do think that we’d want to think long and hard about who gets to make that decision, and who decides what the “healthy” content is.

Is there anything else you’d like to add?

Keller: Well, thanks to readers who made it this far! Words like “European regulation” aren’t exactly clickbait. But the Terrorist Content Regulation and similar laws will absolutely shape speech and information access in the US and around the world.