With the United States Supreme Court hearing oral arguments this week in two major cases involving Internet platform regulation—and two more important platform cases likely on the docket for the fall term—Stanford Law School’s Cyber Policy Center held a February 17 panel to sort out the overlapping, often-contradictory legal and policy considerations at issue in the four cases.
Nate Persily, the James B. McClatchy Professor of Law, served as the moderator for “Judging the Internet: The Supreme Court’s Upcoming Cases On Platform Regulation.” The SLS panelists were Daphne Keller, director of the Program on Platform Regulation at the Cyber Policy Center, and Evelyn Douek, assistant professor of law. They were joined by Yale Law Professor Jack Balkin.
Simply put, the first pair of cases concerns when platforms might be compelled to remove content, and the second pair goes to the question of when laws can require platforms to keep content up. The panelists agreed that it is extremely difficult to predict the likely outcomes of any of the cases.
“I think you’re going to get a chaotic, incomprehensible set of opinions that are going to leave a lot of options open for justices who do not want to make ideological commitments that might betray them down the road,” Persily said.
The two cases the high court hears this week are the much-discussed Gonzalez v. Google and Twitter v. Taamneh. Gonzalez, the first case involving Section 230 of the Communications Decency Act to come before the Supreme Court, was filed by the family of an American woman who was killed in the 2015 ISIS attack in Paris. The family argues that the content-related immunity that website providers enjoy under Section 230 should not extend to automated recommendation algorithms that push personalized content toward users—such as videos allegedly supporting terrorism. Twitter v. Taamneh considers whether Internet service providers can be secondarily liable, under the Anti-Terrorism Act, for terrorism-related content posted by users. The other pair of cases (collectively the NetChoice cases), which are widely expected to be taken up during the Court’s next term, are constitutional challenges to new laws in Texas and Florida that seek to curb alleged anti-conservative bias by large social media companies.
Douek and Keller launched the panel by framing some of the most interesting and important questions posed by the four cases. Douek said the Gonzalez and Taamneh cases are unusual ones for the Court to choose to hear, given the consequential issues at stake, because in both cases the causal chain is “extremely weak” between the posting of content and the ultimate terrorist acts. “In Gonzalez, there’s no allegation that Google had any role in encouraging the Paris attack or that the Paris terrorists were recruited or radicalized through YouTube or used YouTube to plan or conduct the attack, and there’s no specific allegation of a particular video or of a particular attacker who saw a recommended video,” she said. “Similarly, in Taamneh, there’s no proof that the attacker in that case ever had accounts with Facebook or YouTube and there’s no identification of any particular piece of terrorist content that Twitter knew about, but failed to take down. In both cases there’s a lot of evidence that the platforms did have content moderation programs to deal with terrorist content. It is this extremely tenuous causal connection that makes these cases particularly interesting and potentially scary.”
Douek said that if the Supreme Court rules against Twitter, and does find secondary liability in Taamneh, a narrow ruling would be one way for Twitter to “lose well,” with the Court focusing only on the Anti-Terrorism Act and finding that Congress intended the ATA to provide for a “broad, expansive, and ridiculously large form of secondary liability” given the importance of fighting international terrorism. “The ‘bad’ way to lose would be to not have that limitation and to potentially open the door to all kinds of secondary liability, such as aiding and abetting abortion, or in connection with other kinds of statutes that you might see coming down the line,” she said.
Keller introduced the NetChoice cases, noting that the statutes in question are both “long, convoluted, hard to read, and poorly drafted laws” designed to stop Internet companies from removing, demoting, de-monetizing, or otherwise censoring conservative viewpoints. She explained that the Texas law attempts to ban online content discrimination based on viewpoint. “But what it really means to not discriminate on the basis of viewpoint, and this is somewhat debated on the details, is that if you are leaving up anti-teen anorexia videos, then you have to leave up the pro-teen anorexia videos. If you’re leaving up the anti-suicide videos, you have to leave up the pro-suicide videos. If you’re leaving up the claims that the Holocaust is real, you have to leave up the claims that the Holocaust is not real. So there’s this really grim array of consequences coming from a rule requiring viewpoint neutrality.”
The Florida law comes at the issue by identifying certain types of speakers whose content platforms are not allowed to moderate or remove in any way, including journalists, political candidates, and anyone talking about political candidates. “So if you’re talking about a political candidate in Florida, and you want to add some defamation or some pro-ISIS content, or Holocaust denial, you’re golden,” Keller said.
In the course of the panel, the speakers touched on a host of legal and policy questions at issue in the cases, moving among the First Amendment, issues of statutory interpretation, the Dormant Commerce Clause, the Fairness Doctrine, tort law, and policy debates around “big tech.”