Stanford’s Daphne Keller on SCOTUS Decision That Google, Twitter, and Facebook Are Not Responsible for Deadly Islamic State Posts

On May 18, the Supreme Court issued a 9-0 ruling in Twitter v. Taamneh and a companion case, Gonzalez v. Google, rejecting efforts to hold Twitter, Google, and Facebook liable for deadly Islamic State attacks discussed on those platforms. Here, Daphne Keller, a lecturer in law at Stanford Law School and director of the Program on Platform Regulation at Stanford’s Cyber Policy Center, discusses the decisions.

What are the key takeaways of the decisions? How important are they?

They are important rulings, and it is also important how sober and straightforward they are. We are seeing so much political theater in Congress right now on the topic of platforms and online speech. These rulings make the Supreme Court look like the grownups in the room.

Both of these cases arose from tragic facts. Plaintiffs lost family members in ISIS attacks in Europe. Their theory was that Twitter, Facebook, and YouTube should be liable because of ISIS’s general presence on the platforms — even though the platforms seemingly took down any ISIS content they found, and the attackers had not used the platforms in planning or executing the attacks. Numerous lower courts have rejected very similar claims, saying both that platforms were immunized under the law known as Section 230 and also that, even without that immunity, plaintiffs’ claims under the Anti-Terrorism Act (ATA) would fail. Late in the Taamneh and Gonzalez cases, the plaintiffs added a theory that the ranking algorithms that platforms use to order newsfeeds or recommend videos were themselves a source of liability, and not immunized by Section 230. Only a few lower court cases have spoken to this, but they all — including Second and Ninth Circuit rulings — rejected this theory as well. Given those consistent lower court rulings, many people were surprised when the Court took these cases.

In this week’s unanimous Taamneh ruling, the Court resoundingly rejected plaintiffs’ ATA claims as inconsistent with basic tort principles. It said that offering a generally available Internet service, including one that ranks content and targets it to particular users based on their apparent interests, does not rise to the level of “aiding and abetting” acts of terrorism. Liability on such attenuated facts would be too far-reaching, and at odds with longstanding common law. As the Court noted, it would effectively make Twitter liable for every act of terrorism by ISIS. Because the defendant platforms faced no liability on the merits of the ATA claims, the Court declined to resolve the question in Gonzalez about Section 230 immunities.

Daphne Keller, lecturer in law at Stanford Law School and director of the Program on Platform Regulation at Stanford’s Cyber Policy Center

In a way this is an “everybody, calm down” moment. The Court strongly affirmed that basic tort principles apply and protect platforms, just like they protect other communications services. Numerous amicus briefs exhorted the Court to reinterpret Section 230 and upset a generation of Internet law, but the Justices did not rise to the bait. (I was one of many people who feared the worst, as I told the New Yorker in this Q&A. I think that outpouring of alarm, including in the very large number of amicus briefs the Court received from voices across the political spectrum, helped them realize the importance of caution.) A case may come that actually does probe the limits of Section 230 protection — limits that are very real, though also relatively well fleshed out by lower courts. But this was not that case. Hopefully the experience of Gonzalez and Taamneh will put the Court in a better position to think and rule carefully when that case does come.

Justice Clarence Thomas said Twitter and other social-media websites didn’t provide the sort of “knowing and substantial assistance to ISIS” necessary to find them culpable under the Anti-Terrorism Act. How much would you read into the decision regarding the Court’s support for Section 230, the foundational internet law that shields social-media platforms from liability for user-generated content?

The Court expressly declined to touch the Section 230 issues, but I do think the way it characterized platforms in Taamneh is relevant for the claim, made in Gonzalez, that platforms should lose immunity based on their ranking algorithms. The Taamneh ruling treats algorithmic ranking as a basic part of platforms’ function, and not something that — for tort law purposes — gives them a closer relationship with particular posts or more legal responsibility for what users say online.

That seems pretty consistent with the argument, which the ACLU and I made in our amicus brief, that ranking and ordering content is one of the basic platform functions immunized under Section 230. (These points are also made authoritatively in a brief from Section 230’s authors, Ron Wyden and Chris Cox.) The statute explicitly says that the immunity covers platforms that “organize” content. Any other reading would effectively gut the statute and take away protections for any platform that has ranked newsfeeds (like Twitter or Facebook), recommendations (like YouTube or Etsy), or even search results. That would leave them with bad options: either give up on ranking and ordering the oceans of content they host, or else offer ranked features that have been reduced to the most anodyne and risk-free materials. Neither of those things would be good for Internet users’ ability to speak and access information online.

This is an American ruling. Are there similar cases in the EU that we should be watching, where the outcomes might be different?

European courts actually resolved some of the key issues in these cases years ago, in ways that seem broadly sensible to me. The EU doesn’t have Section 230, but it does have longstanding laws saying that platforms are immunized from liability for user content unless they know about the content, or exercise too much control over it. We don’t know exactly what constitutes too much “control,” but in a 2010 case, the EU’s highest court said that ads on Google’s search results were immunized, even though they were algorithmically ranked.

The general result of the EU’s rules has been a “notice and takedown” system, kind of like what the U.S. has for copyright, in which platforms remove allegedly unlawful content if they are notified about it. The EU just overhauled that system, in the new Digital Services Act, in part to make it harder for accusers to get platforms to take content down. Lawmakers were concerned about the well-documented problem of platforms honoring even bad-faith accusations in order to avoid legal risk to themselves. It is ironic that the EU is moving toward this more speech-protective regime for platform liability, while lawmakers in the U.S. want to do the opposite, adopting rules like the ones the EU just abandoned.

Under EU law, platforms that were told about individual posts containing identifiable, unlawful terrorist content would clearly have to take them down — which, as far as we know, is what these platforms did. But platforms would not be liable on the theory plaintiffs advanced in Taamneh and Gonzalez, which is that platforms had a duty to go out and actively search for other content to remove. The EU’s highest court has repeatedly said that obligations of this sort must be carefully limited, because they threaten Internet users’ free expression and privacy rights. EU lawmakers rejected such active monitoring obligations for terrorist content, in part because of major human rights and free expression objections raised by UN officials, civil society groups, and others. In other words, the plaintiffs in Gonzalez and Taamneh advanced a theory that raises major free expression concerns under European standards, and should give us even greater pause under the First Amendment. Fortunately, the Court’s ruling saves those constitutional questions for another day.

Daphne Keller is a lecturer in law at Stanford Law School and director of the Program on Platform Regulation at Stanford’s Cyber Policy Center. Her work focuses on platform regulation and Internet users’ rights. She has published both academically and in the popular press; testified and participated in legislative processes; and taught and lectured extensively. Her recent work focuses on legal protections for users’ free expression rights when state and private power intersect, particularly through platforms’ enforcement of Terms of Service or use of algorithmic ranking and recommendations. Until 2015 Daphne was Associate General Counsel for Google, where she had primary responsibility for the company’s search products. She worked on groundbreaking intermediary liability litigation and legislation around the world and counseled both overall product development and individual content takedown decisions.