International Human Rights Law, Social Media, & Content Moderation
What does international human rights law (IHRL) have to do with U.S. social media companies? How can it help social media platforms resist governmental pressure to censor speech as well as provide a principled lens for assessing a platform’s own content rules? What are IHRL’s limitations in this context and how can they be navigated?
So it’s a pleasure to welcome everyone to tonight’s constitutional conversation. Actually, it’s more like an international conversation tonight, because our topic is the application of international human rights law to content moderation on social media. This is an issue that I think many of us are unaware of, but it is of extreme importance for what we do and what we see on our screens practically every day.
As our speaker is going to explain, the major social media companies have committed themselves to moderating their platforms in accordance with international human rights norms. But that is easy to say and very hard to understand, because international law and international human rights norms came about as a matter of governments: what can governments do with respect to their people? Social media companies are private, profit-making businesses, so how is this going to work? And it just so happens that the Meta Oversight Board, of which both Evelyn and I are members, is the body that is trying to figure out how to make this happen.
Fortunately, one of the 22 members of that worldwide board, a board drawn from every continent except Antarctica (and I gather some penguins may be included in the future), is Evelyn, the real expert on international law in general, and in particular on the relationship between international human rights law and our First Amendment norms and how all of this applies to social media.
So let me introduce Evelyn Aswad, my colleague on the Meta Oversight Board. She is a professor of law, and of international law in particular, at the University of Oklahoma. Before entering academia, she had a long and distinguished career at the State Department, mostly in human rights.
At the end of her service there, I believe she was the director of the human rights program at the State Department, and she also spent I don’t know how many years dealing with free speech issues at the United Nations. So if there is anyone on the planet who can help elucidate for us what’s going on in the application of international human rights to social media, it is Evelyn.
So it is my pleasure to welcome Evelyn to the podium.
Thanks, Michael, for those very kind remarks. Oops, I forgot to mention Nate, who is going to offer a few comments afterwards. Everybody knows Nate.
All righty. Thank you all for coming out tonight, and thank you for the invitation to speak here this evening. Tonight, as Michael mentioned, I want to discuss how international human rights law, particularly with respect to freedom of expression, can, and I would argue should, intersect with decision-making about content on the largest social media platforms.
I’m going to try to tackle this topic in 30 minutes or less by dividing the issues into three baskets. The first basket: what are the global standards on freedom of expression? The second basket: what makes those standards, which were designed for state actors, relevant for private actors like social media companies?
And the third: what would it look like for a global social media company to apply those standards, both in how it reacts to governmental orders, requests, or pressure for takedowns, and with respect to its own content moderation rules, enforcement mechanisms, et cetera? So let’s turn to the first basket.
What are the global legal standards on freedom of speech? The global standard is found in Article 19 of a treaty called the International Covenant on Civil and Political Rights, which I will refer to as the ICCPR today. This treaty took 20 years to negotiate, and it was opened for countries to join in 1966.
The United States did not join until 1992, when President George H.W. Bush made a big push for the United States to join as the Iron Curtain was crumbling and we needed a way to influence those emerging countries on civil and political rights. Being inside the tent on civil and political rights was seen as more effective than standing outside the tent.
All right, so let’s look at what Article 19 provides. The first sentence of Article 19 is very short and sweet, and I have it up on the screen: everyone shall have the right to hold opinions without interference. So this isn’t technically freedom of expression. This is just about having that interior space to hold an opinion without the government compelling you to disclose your opinion and without the government persecuting or discriminating against you because of those opinions. I had the opportunity to look at the negotiating history of how this line ended up in the treaty, and I found that Harvard Professor Zechariah Chafee was on the delegation, negotiated this language, and single-handedly fought against numerous efforts by states to take it out.
He was very worried about what was going on with Senator McCarthy in the United States. He was accused by Senator McCarthy of being a dangerous person in the United States, and he was investigated by Harvard Law School, all because of his broad commitment to freedom of expression. And he fought, and he got that line in.
So that’s the background on the first line. The second clause is where we get to the statement of freedom of expression. What does that right look like? As you see from the screen, it’s a very broad protection: the right to impart and the right to receive information of any kind, across borders, over any media.
Quite visionary language when you think about the time period in which this was being negotiated. I also had a chance to look a bit at the negotiating history. How did this get in during the 1940s and fifties, when not many countries were respecting freedom of speech like this? Really, Eleanor Roosevelt, the US lead negotiator, her team, and the allies of the United States did a great job fighting for this language.
Like the First Amendment, freedom of expression is not absolute at the international level. The next clause addresses when governments are permitted, not required, to limit speech. The language that you see on the screen contains a three-part test, and a government has the burden of demonstrating that it has met each part.
This is a one-strike-and-you’re-out test: if the government cannot prove one of these parts is met, the speech restriction is illicit. So let’s go through what they are. In yellow, you see the language “provided by law,” which is known as the legality test. Among other things, the legality test means a speech restriction cannot be unduly vague.
You have to give the people who are regulated appropriate notice, and you have to give the government officials implementing the regulation appropriate notice, so that they don’t veer into arbitrary, discriminatory, or inconsistent ways of implementing the law. So that’s the legality test. The next test is in magenta.
All these reasons are known as the legitimacy test. The government can only impose a speech restriction for a legitimate public interest objective, and those are listed here: the rights or reputations of others, national security, public order, public health, or public morals. The third test, which is the one I find the most interesting, is the necessity test, which derives from the word “necessary,” in turquoise on the screen.
This test actually has two components. The first is that the government has the burden of demonstrating that the speech restriction is the least intrusive means to achieve that legitimate objective. The UN’s independent experts have recommended that governments go through a trilogy of questions to assess whether they have found the least intrusive means.
The first question: is there something the government can do that doesn’t burden speech but achieves that objective? If so, you don’t need to burden or restrict speech. You would not believe how often this came up in the nine years I was in the human rights law section at the State Department. Countries wanted to deal with religious intolerance.
So they would resort to speech bans, banning speech that offends religious sensibilities. And we would have to have that discussion with them: let’s see, do you have laws that prohibit religious discrimination? What about hate crimes laws based on religious animus? Do you train government officials on these issues?
Do you engage in outreach to vulnerable groups? Do you condemn religious intolerance in your society, or do you foster it? Oftentimes governments were not doing any of these good governance measures and wanted to resort to speech bans, and the human rights system does not reward bad governance with censorial power.
So that’s question one. Question two: okay, say all those good governance measures aren’t enough. On the spectrum of tools a government has, has the government selected the least intrusive one? For example, civil sanctions rather than criminal sanctions. So that’s the test for least intrusive means: is the government trying to use a hammer when a tweezer would do?
And the third question in the trilogy: is the government monitoring whether the tool it selected is effective? If it’s not effective, then of course you’re burdening or restricting speech without achieving any objective, and that’s not appropriate. Now, the second and separate component of the necessity test, the part in turquoise, is proportionality.
The government also needs to prove that the burden on the speakers and the listeners is proportional to the benefit to be achieved. If it’s not, then that’s also a problem in terms of the speech restriction. Okay, so we might say, oh, we’ve covered the global norms on freedom of expression, but wait, there’s more.
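For readers who find it easier to see the structure in code, here is a minimal, illustrative sketch in Python of the Article 19(3) checklist as just described. The class, field names, and function are hypothetical, invented for this illustration rather than drawn from any actual UN or Oversight Board tool; the sketch simply encodes the one-strike-and-you’re-out logic of legality, legitimacy, and necessity (the least intrusive means trilogy plus proportionality).

```python
from dataclasses import dataclass

# Hypothetical representation of a proposed speech restriction, for illustration only.
@dataclass
class SpeechRestriction:
    is_clearly_defined: bool              # legality: not unduly vague; notice to the public and to officials
    aim: str                              # the asserted public-interest objective
    non_speech_measure_available: bool    # could a good-governance measure achieve the aim without burdening speech?
    is_least_intrusive_tool: bool         # e.g., civil rather than criminal sanctions
    is_monitored_for_effectiveness: bool  # is the state checking that the tool actually works?
    burden_proportional_to_benefit: bool  # proportionality

# The aims enumerated in Article 19(3) of the ICCPR.
LEGITIMATE_AIMS = {
    "rights or reputations of others",
    "national security",
    "public order",
    "public health",
    "public morals",
}

def passes_article_19_3(r: SpeechRestriction) -> bool:
    """One strike and you're out: every part of the test must be satisfied."""
    legality = r.is_clearly_defined
    legitimacy = r.aim in LEGITIMATE_AIMS
    # Necessity, component 1: least intrusive means (the trilogy of questions).
    least_intrusive = (
        not r.non_speech_measure_available
        and r.is_least_intrusive_tool
        and r.is_monitored_for_effectiveness
    )
    # Necessity, component 2: proportionality.
    proportionality = r.burden_proportional_to_benefit
    return legality and legitimacy and least_intrusive and proportionality

# Example: a vague ban justified only by "avoiding offense" fails on several parts at once.
offense_ban = SpeechRestriction(
    is_clearly_defined=False,
    aim="avoiding offense",
    non_speech_measure_available=True,
    is_least_intrusive_tool=False,
    is_monitored_for_effectiveness=False,
    burden_proportional_to_benefit=False,
)
print(passes_article_19_3(offense_ban))  # False
```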
So unlike the First Amendment, the international system does have some mandatory speech bans, and I wanted to go through an example of one of those with you, as well as some of the interpretations from the independent UN experts who monitor speech restrictions. Here’s an example of one of the mandatory speech bans in the ICCPR: Article 20, paragraph 2.
“Any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law.” As you can see, there are three parts to this: the speaker has to be engaging in advocacy of that hatred, the speech has to rise to the level of incitement, and the harms to be averted are discrimination, hostility, and violence.
Now, I’ll drop an oral footnote with a fun fact. Eleanor Roosevelt and many US allies fought hard for years to keep this out of the ICCPR. Eleanor Roosevelt felt very strongly that in one of the first treaties on the planet to grant individuals rights against governments, it was inappropriate to empower governments in this way.
She also thought this was likely to be badly misused by authoritarian regimes, which she turned out to be quite correct about. The country that wanted it in, and that led the charge, was the Soviet Union. Throughout the negotiations, it was like a ping pong match. Sometimes the US and its allies got enough votes to keep it out;
sometimes the Soviet Union and its allies had enough votes to keep it in. The last vote, the Soviet Union won, so it’s in. You might ask, how did the US join the treaty with this provision in it, given our First Amendment? The US joined with a reservation stating that, to the extent anything in Article 20 requires something incompatible with the First Amendment, we do not take on that obligation.
So that is how the US was able to join. Now, I did want to share some very interesting interpretive developments that have happened with regard to Article 20, paragraph 2, and these developments happened primarily from 2011 through 2013. In 2011, the committee of independent experts elected by the states parties to the ICCPR came up with a very thorough recommendation for how to interpret Article 19.
In it, they made a variety of very speech-protective interpretations that changed the way they had been doing things in the past. The one I highlight for you this evening is that they said Article 20 is subject to Article 19, paragraph 3. So any speech restriction imposed under Article 20 has to be not vague, has to be the least intrusive means, et cetera.
That brought a lot of discipline to Article 20 and constrained it significantly. In addition, there is the UN Special Rapporteur on freedom of expression, an independent expert appointed by the UN member states to monitor freedom of expression for all UN states, whether they are a party to this treaty or not.
At the time, that was Frank La Rue of Guatemala, and he came up with a bunch of very speech-protective interpretations as well. He said incitement means speech that is likely to cause imminent harm; that is how we should be thinking about incitement in Article 20. He also noted that the word hostility could be interpreted pretty broadly.
What is that, just a feeling in someone’s heart? He said we should be interpreting it as a manifestation of hostility. So, for example, vandalism or trespass, a lawless act, would be a manifestation of hostility. That interpretation also, I think, narrowed Article 20. In addition, Frank La Rue said, with regard to the least intrusive means test, that if the harm to be averted is not near term or is not likely, restricting speech is not going to be the least intrusive means.
I share those interpretations with you because I find that people have not always seized on those 2011 to 2013 changes in the UN human rights machinery’s approach to these standards. Okay, so now that we’re all experts on the global standards for freedom of expression, we can turn to basket number two.
Why does this apply to a private company? This was all set up to apply to governments, to state action, to state actors. So how did it come to be applicable to a private entity? Thirty years ago or so, the international community at the United Nations was struggling with what to do about the fact that companies have become so big, so powerful, so rich, so impactful, yet they aren’t constrained by international human rights norms.
This seemed like a gap in the international system, and there were lots of discussions and fights, but there couldn’t be any agreement on how to solve the problem. So the UN Secretary-General appointed a Special Representative on business and human rights, Professor John Ruggie of Harvard, and he engaged in six years of energetic outreach to companies, governments, indigenous peoples, labor groups, civil society, academics, you name it.
He talked to all of them, and six years later he was able to present to the UN, which had been so torn over this issue, a consensus framework for how to think about the relationship between companies and human rights. That framework is known as the UN Guiding Principles on Business and Human Rights, which I will refer to as the UNGPs. So what do the UNGPs do? What do they call on companies to do? They call on companies to respect international human rights, and they define respecting human rights as proactively seeking to avoid infringing on them and addressing infringements when they occur.
So let’s unpack that. First of all, what human rights are we talking about here? There are human rights at the UN level; there are human rights systems at regional levels. What exactly are we talking about? UNGP Principle 12 makes clear, as does its official commentary, that this is a UN framework and it is pegged to UN standards.
All the examples listed, everything people are told to look at, are UN treaties, UN declarations, UN resolutions, and those of one of the UN’s specialized agencies, the International Labour Organization. I raise this because people often start to conflate different things that are going on worldwide.
They conflate the UN’s human rights norms with regional human rights norms, and I’m going to drop a super long oral footnote here to share why it’s a problem to conflate the two. Let’s take as an example the European regional human rights system. They have a European human rights treaty.
They have a European Court of Human Rights. They have a provision in their treaty that’s very similar to Article 19 in many ways, but their court’s interpretations on freedom of expression are very different and result in much less speech being protected than in the UN system. As examples of that: the European Court of Human Rights has a margin of appreciation doctrine under which it defers to governments, and that results in allowing a lot of speech to be removed.
The UN has specifically said, we don’t do that; we don’t do that deference to governments. With regard to the three-part test we were talking about: on legality, the European Court finds that many things pass the vagueness test that would not pass the vagueness test at the UN. On the necessity test, the European Court of Human Rights only does a proportionality analysis;
it does not do the least intrusive means analysis. That results in far less speech being protected as well. So it’s important not to conflate all these things, because it would be unworkable to understand how a company could align with the norms if we’re mixing norms from different systems.
Okay. Now let’s talk about what it means for a company to proactively seek to avoid infringing on human rights. The UNGPs give a lot of examples of what’s expected of companies. They have to adopt a human rights policy. They have to mainstream it throughout the company. They have to engage in something that did not exist when I was in law school, which is human rights due diligence.
I learned about corporate due diligence; I started out as a corporate lawyer and did corporate due diligence. But this is human rights due diligence. Companies have to assess how their business operations intersect with potential human rights problems, know their risks, and develop plans of action for how they are going to avoid, mitigate, and minimize those risks.
That’s called knowing and showing their human rights risks. Now, the UNGPs do not require companies to go violate local law when local law in a country is not up to par with international standards. But companies do have to know their human rights risks, and they have to show what they did to try to avoid being complicit in or facilitating those human rights violations.
We need to keep in mind here that the UNGPs are not a legally binding treaty. They are a framework. They’re not legally binding on companies; they’re not legally binding on governments. The international community has called on companies to respect this framework, and the US government has, on at least three separate occasions, said it expects US companies to treat the UNGPs as a floor and not a ceiling
in their operations; I think that’s the expression they use. So there are calls on companies to respect this. That said, it is voluntary for companies to opt in, and in many ways what it requires is a norm-building moment, a moment where stakeholders demand this of the companies to help incentivize them to do it.
Okay, so this gets us to basket number three. What would it look like for a large global social media company to agree to apply the UNGPs in its decision-making about content on its platform? It would have to engage in this human rights due diligence and see where there are intersections between its business operations and human rights.
Obviously, in conducting that analysis, it would find that freedom of expression is one of the rights that could be affected by what the company does, and that risk of affecting freedom of expression would present itself in at least two ways. I’ll talk about the first one and then get to the other.
The first: a global social media company is very likely to operate in a jurisdiction that has laws that do not comport with international human rights standards. So when that country orders or requests the company to take down speech under such laws, the company is going to end up in a human rights scandal.
It’s going to end up aiding and abetting the suppression of legitimate speech protected under international law. A country could also do this through pressure, known as jawboning: not through a law, but by leaning on, intimidating, or coercing the company. So in that situation, what companies need to do is the research to understand that this is going to happen in the jurisdictions where they operate, and develop an action plan for what they are going to do when that quite foreseeable, indeed inevitable, event happens.
What are they going to do to minimize the risk, to avoid the risk, to protect people’s right to freedom of expression? Here I would like to highlight that there is an international multi-stakeholder initiative, the Global Network Initiative, that brings together companies like Microsoft, Zoom, Yahoo, Google, and Meta, along with telecom companies,
academics, NGOs, and investors who are committed to this: knowing the freedom of expression risks and showing what a company can do to avoid them. They work together towards that goal, and the companies are subject to periodic audits by independent auditors who assess, in randomly selected cases over the rating period,
whether they did just that: did they know their risks, and did they have an action plan to avoid engaging in or assisting in a human rights violation? I will say that when I was at the State Department, these issues started coming up with social media companies and there was no guidance.
The US government had given them no guidance, they had no internal procedures, and it was just scandal after scandal, with no framework to assess these issues. Now there is this framework that many companies have adopted, and it is having an impact in terms of companies resisting, rather than just actively assisting in perpetuating, these freedom of expression violations around the world.
All right. The second way this risk of infringing on freedom of expression can come up for a social media company is in its own content moderation rules, enforcement mechanisms, and systems. These companies are the largest, most powerful speech regulators in the history of humanity, period, and what they do can infringe on freedom of expression.
So when people were thinking through how to apply that UNGP standard to a company’s own content moderation, the UN expert on freedom of expression at the time, David Kaye, who is a professor at UC Irvine, proposed that these social media companies should subject themselves to the Article 19 three-part test. On legality, they should not have vague rules; they should give users notice, and they should give their enforcement machinery sufficient notice to enforce the rules properly, not in an arbitrary or discriminatory manner. They should also, for example, publicly demonstrate that they are meeting the necessity test, including the least intrusive means test.
And he repeated that trilogy of questions we went through. The first question: companies should think through whether they are doing something that is perhaps causing the very problem they are trying to solve, and whether they have a good governance measure they can deploy that does not burden speech to deal with the harm they are trying to avert.
I think this would require looking at design choices and other things that they have the power to affect. Again, human rights law does not like to reward the speech regulator with censorial powers if it can do something to deal with a problem without burdening speech. The second question in the trilogy: look at your continuum of digital content moderation tools.
And they have so many: geoblocking and many others, well beyond just taking something down or leaving it up. Look at that spectrum and choose the least intrusive means; don’t use a hammer when a tweezer would do. And the third question: assess whether what you have done in burdening speech is actually effective in solving the problem.
Or are you just burdening speech while nothing is happening and nothing is being solved? So essentially, as Michael mentioned at the beginning, at the Oversight Board we are trying to put Meta’s content moderation decisions through that three-part test, legality, legitimacy, and necessity, to see what the company is doing at each level and to maintain some kind of oversight and constraint on what it is doing.
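To illustrate the second and third questions of that trilogy as applied to a platform’s own toolbox, here is another small, purely hypothetical Python sketch. The tool names and their ordering by intrusiveness are assumptions made for illustration, not Meta’s actual enforcement options or the Oversight Board’s methodology; the point is only the selection logic: walk the spectrum from least to most intrusive, stop at the first tool expected to address the harm, and keep checking whether it actually works.

```python
# Hypothetical spectrum of moderation tools, ordered from least to most intrusive.
# Both the tool names and the ordering are assumptions made for illustration.
TOOLS_BY_INTRUSIVENESS = [
    "add_context_label",
    "downrank_in_feed",
    "age_gate",
    "geoblock_in_affected_region",
    "remove_content",
    "suspend_account",
]

def choose_least_intrusive_tool(expected_to_mitigate_harm):
    """Question two: walk the spectrum and stop at the first tool expected to work."""
    for tool in TOOLS_BY_INTRUSIVENESS:
        if expected_to_mitigate_harm(tool):
            return tool
    # If no speech-burdening tool is expected to help, burdening speech is not justified.
    return None

def review_effectiveness(tool, harm_still_occurring):
    """Question three: monitor whether the chosen tool is actually effective."""
    if harm_still_occurring:
        return f"'{tool}' is burdening speech without averting the harm; reassess."
    return f"'{tool}' appears effective; keep monitoring."

# Example usage with a stand-in predicate (also an assumption, not a real policy model).
pick = choose_least_intrusive_tool(
    lambda t: t in {"geoblock_in_affected_region", "remove_content"}
)
print(pick)  # geoblock_in_affected_region
print(review_effectiveness(pick, harm_still_occurring=False))
```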
Personally, I think the framework that human rights law principles provide gives us a way of asking the right questions and of pressing in a good direction on the otherwise quite unfettered power of these companies. So I think there’s value in that. A few concluding thoughts.
I think it’s particularly timely to think about the UNGPs and social media companies now because of the Supreme Court cases last summer in Murthy and Moody. In Murthy, users were alleging jawboning, and the Supreme Court set a very high bar for standing, requiring users to make quite a significant causation showing in order to have standing to allege jawboning.
But under the UNGPs, rather than putting the burden on the users, the burden is on the companies to stand up to the jawboning, to know that risk, and to react to it in a way that minimizes the risks to freedom of expression. In Moody, the Supreme Court made clear that key aspects of content moderation by a platform are protected by the First Amendment
and that requiring viewpoint neutrality is not a legitimate aim of the government. Some scholars pointed out that this was great for protecting corporate free speech interests, but it didn’t protect individuals’ free speech interests. Now, I’m not arguing that anything about our First Amendment jurisprudence should change, but if the companies voluntarily take on the UNGPs, the focus comes back to the individual: how the company is treating the individual and how it’s treating speech.
So I think that’s a benefit as well. Now, I’m not so naive as to think that turning to international human rights principles results in roses, rainbows, and ponies and everyone living happily ever after. There are a lot of judgment calls; there’s a lot of evolution. But that is the human rights project. When Eleanor Roosevelt was sitting there in the 1940s working on the Universal Declaration of Human Rights, people must have thought she was crazy.
Right? But that’s the project: aspiring to a world where human dignity and human rights are protected from the powerful. And I think this gives us a stepping stone to try to make that a reality. Thank you.
Wonderful. Thank you for those great remarks, and thank you, Michael and Morgan, for the invitation here. How could anyone be against social media companies respecting human rights? Being contra human rights is like being against puppies or dessert.
You’d have to be a monster. I’m here to tell you I am that monster. I’m going to talk a little bit, maybe almost needlessly provocatively, about why I think this approach is the wrong one. But let me say this: through many years of arguing with folks about it, I have come to feel strongly about this.
I believe I’ve lost the argument, and that’s okay, because if the worst thing that happens is that these companies end up obeying human rights norms, I don’t think it’s a tragedy. But I’ll explain why I think this is the wrong approach. Let me start by saying: look, I’m not going to say that companies should go out and just start violating people’s human rights,
whether it’s Exxon or Monsanto or Nestlé. Human rights and respect for people’s dignity are obviously something these companies should respect, just as we as individuals should respect human rights. But that’s the golden rule, right?
Companies should not do unto others what they wouldn’t want done unto them, just as we shouldn’t go out and injure people or violate their rights. And so too for social media companies: they shouldn’t just willy-nilly assist in genocide, or pick your social harm.
But I think for social media in particular, to take on the rules that were forged to restrain governments is a mistake, and here’s why. It’s fashionable to say that Facebook, for example, is the public square. Zuckerberg has said that; Jack Dorsey and Elon Musk have said it about Twitter and X; and the Supreme Court has basically said it, mainly with respect to the internet, which is an important difference.
But thinking about these social media companies as the public square, I think, is just wrong. These are highly regulated speech environments. The purposes that the social media companies bring to bear in the construction of these products are very different from those behind the construction of the public square by a locality or a state.
And the community guidelines of these companies, if you subjected them to First Amendment analysis, would all be unconstitutional. That’s true whether you’re talking about hate speech or bullying or doxxing or graphic content, or self-harm videos, or threatening harm against animals, which is one of the things Facebook prohibits in its community guidelines.
Now, I wouldn’t say they’re all necessarily contra international human rights law just because the First Amendment might go farther, but many of them would be, because there are certain things that we would expect a company to do that we would say a government should not be able to do.
Secondly, the decisions involved in constructing a social media feed are very different from, say, building the Boston Common. The algorithmic choices inherent in deciding what goes at the top of your feed and what goes at the bottom are a very different question from who is allowed to speak in, say, a public park.
Those decisions about what content comes at the top and what comes at the bottom, and that prioritization, necessarily involve what we as First Amendment folks would call content-based discrimination, and usually viewpoint-based discrimination. Inevitably that is going to be the case, so much so that if the government had to make some of those decisions, it would be unconstitutional.
Now, the third point is that these companies do, of course, have speech rights themselves. I think Evelyn was right to point out that they have free speech rights, as the Supreme Court said in Moody, but that doesn’t necessarily mean they shouldn’t then impose some of these international governance norms on themselves.
But let’s just think this through for a second. Suppose Michael and I decide we’re going to start a conservative social media company. We say, all right, look, for various reasons we don’t think the market is serving our purposes, so this is going to be an environment in which we limit the discussion to a particular set of issues or particular opinions.
We’re not saying the government should do that; we’re just saying we want a safe space on the internet to have this kind of conversation. Suppose we get millions of people to join that environment. And it doesn’t have to be conservative; pick your ideology or pick your topic.
The decisions about how we’re going to restrain and govern the boundaries of that speech environment are very different from the kinds of questions you would ask if the city of Palo Alto, the state of California, or a national or international body were doing the same thing. They’re just different kinds of questions.
Now, that doesn’t mean we’re going to go out and willy-nilly try to violate human rights. But especially when you’re thinking about the effect on speech, the decisions the company is making are necessarily going to be the kinds of decisions we think are problematic when a state makes them itself.
And as a secondary point here, the legitimate business interests of the company are sometimes going to be very different from, say, what we as First Amendment scholars would call compelling state interests. The decisions Facebook or Twitter or X or TikTok makes as to how the speech environment is going to be constructed might be, for example, for the company to make money.
They might be to foster engagement, or to create an environment that is seen as pleasant to be in, as opposed to one that respects all opinions all the time and forces them on all of the users. Those differences, I think, are absolutely critical to understanding the difference between international human rights law and the kinds of principles a company should apply.
And finally, as we think about applying the international human rights framework to companies, particularly social media companies, where most of the moderation is going to be done by algorithmic or machine learning tools, you cannot be as surgical as a government doing it could be.
Facebook makes more decisions on speech every day than the US Supreme Court has made in its entire history; actually, it probably does that every few hours. So you are going to have rules of the road that do the filtering of content and that are going to be over- and under-inclusive.
The moderation is not going to be as narrowly necessary as it would be if it were being done in a surgical manner by a government. Now, let me give you some examples of things that are protected speech but that social media companies ought to have the freedom to moderate. I’m going to talk about nudity, Holocaust denial, incitement, and then campaign advertising.
I’ll do them very quickly since I’ve just got a few more minutes. The first builds on an actual decision of the Facebook Oversight Board with respect to nudity. The Oversight Board had a case dealing with breast cancer awareness content in Brazil. Facebook took it down because it violated certain rules about nudity on Facebook, and the content was actually reinstated.
The Oversight Board said, look, it’s important that users be able to see this kind of content. Now, I can totally understand saying that a government ought not to have the ability to ban nude expression, whether it’s breast cancer videos or art.
That makes perfect sense. But if you’re looking for nudity on the internet, you’ve got a lot of places you can go. A decision by Facebook to say, look, we just don’t want to be in the business of trying to police every single piece of content to figure out whether it’s a health video or whether it’s art,
so we’re just going to say no nudity, go somewhere else if you would like to find that: that is over-inclusive, that’s not surgical, but it doesn’t really have the impact on free speech that I think it would if a government were doing it. Secondly, Holocaust denial. Now, because of the hate speech rules in international human rights law,
this is a complicated one. Reasonable people could disagree as to whether protecting Holocaust denial is contra or consonant with international human rights law, and Facebook has flip-flopped on this. But what does international human rights law suggest? Under the First Amendment, you certainly have the free speech right to deny the existence of the Holocaust, as well as to engage in almost all forms of disinformation. But could a platform decide, you know what, we just don’t want that to be our environment? Yes, if you want to go on the internet and find Nazi websites, you can do that. But we’re just going to say, for business reasons and other reasons, civility, that we won’t have this kind of speech on our platform. Third has to do with incitement. I thought the point about incitement to discrimination and racial hatred was interesting. There is a debate in international human rights law, as well as in First Amendment law, about whether it has to be imminent lawless action.
And there are really good reasons why we do that in First Amendment law: we don’t want to trust the censor to just go out and say, all right, this person is causing a danger. But does a platform have to wait until the absolute moment the fuse is about to be lit in order to make a decision about imminent lawless action?
How could it? In particular, how should a Silicon Valley company think about the likelihood of genocide in Myanmar? At what point does it have to decide that the harm is really imminent before it acts? Now, reasonable people could disagree. Just to be clear, I’m not saying that these companies should adopt more speech-restrictive policies.
In fact, contra human rights law, maybe if we were to decide we’re just going to have a totally libertarian, QAnon-friendly, 4chan kind of speech marketplace, we could do that; I think that is also protected. But the decision we’re making in order to do that is different from the one the government would make. Last point:
let me talk about campaign advertising. This is something Evelyn has written about a little bit. You’re probably familiar with the United States Supreme Court’s decision in Citizens United, which says that corporations have free speech rights when it comes to advocating for the election or defeat of candidates.
I would be hard pressed to say that advertising is compulsory on a platform. I don’t think that under international human rights law, or even comparable First Amendment law, you would say a company needs to allow people to advertise. Most of us find those ads pretty annoying, right?
All the more so, can’t a company say, as Twitter and several of the others did, you know what, we’re not going to have political advertising because people find it annoying? Similarly, we’re not going to have pharmaceutical advertising; we’re not going to have certain types of things we just don’t want on our platform, because we’re trying to create a particular environment.
The point here is just that the government couldn’t do that, and shouldn’t be able to, because it violates free speech rules, but a platform ought to be able to. And so the decisions the platform is going to make are, I think, qualitatively different from the ones a state should make.
So what does that mean in the end? Should we just let platforms have unlimited power over all of this? No. There are things you can do to try to limit platform power and to make sure we know what’s happening under the hood at these companies. I’ve been a big advocate of greater transparency, compelling them essentially to show us their homework as to how they’re engaging in content moderation,
as well as antitrust enforcement and trying to make sure you don’t have one company that dominates the speech marketplace. There are things you can do, outside the speech environment and speech restrictions, that go after the power of these companies as well as the opacity of some of their content moderation rules.
That’s where I think we should be focusing our efforts. But to wholesale transfer international human rights law into the practices of the companies is, I think, a mistake. Still, I wholeheartedly admire those who make the other argument, and I think Evelyn is one of the best interlocutors on this.
As I said, I’m happy to lose this argument, because unlike most arguments these days, political or otherwise, I think the stakes are relatively low if I lose this one. So I’m happy to cede the ground to my opponents on this, which is a very rare thing for me to admit.
So thank you.
You’re all welcome to line up here on the other side if you have questions, but first, maybe Evelyn would like to respond.
Sure, I’ll just share a few thoughts. Thanks, Nate. Very thoughtful, as always. I was just going to note that in 2018 I wrote a law review article about this application of the three-part test to social media companies, and the one part I was not comfortable with was the legitimacy test.
Should a private actor have a greater range of reasons for which it can limit speech than the government? Would that be a way of taking into account some of Nate’s well-founded concerns? In that law review article, I had called for a multi-stakeholder discussion and debate,
because where do you draw that line? Is anything that helps the company make money ultimately a fair objective, so that a rule doesn’t infringe on speech because it helps the company make money? Or where do we draw that line? I’m asking because I don’t know where. I’m saying maybe we should explore more objectives, but where would one draw that line?
And I assume that on a lot of issues we would like to have some consistency from the companies, rather than having these just be purely commercial decisions driven by those within the companies, by investors, by advertisers, or by the political winds of the day. I think what the framework brings is some consistency and some public demonstration by the companies of how they are meeting the criteria.
That shouldn’t be lost as an added benefit in all this: they have to justify themselves. But I do take the point that, as entities that make money, maybe they should have more objectives. I just don’t know where we would draw the line so that it doesn’t subsume the whole project.
I’ll say one thing here, which is: should the same rules govern all social media companies? Right now, as you said, the international human rights standards are a floor. So if the market is working well enough in the social media ecosystem, then you could have some platforms that are very restrictive on speech
and some that are more libertarian. I do think that when it comes to the internet as a whole, yes, international human rights standards and the First Amendment apply, and we cannot allow the prohibition of certain types of websites in a way that is inconsistent with First Amendment law.
But for a social media company, I think it’s actually different, so long, again, as there is some kind of competition between these companies and we have different types of speech marketplaces that you can opt into.
Thanks so much. This is fascinating. I want to build a little on some of the distinctions Nate started to introduce. I know nothing about international human rights law with respect to these speech issues, but thinking about this from the standpoint of somebody who knows a little about modern First Amendment doctrine, it seems like there are a variety of doctrines that could be highly relevant, and I’m curious how international human rights law deals with them.
The things I have in mind: in the First Amendment context, we might care a lot about ownership of the property as a relevant consideration for the authority of the government to intervene and restrict speech on that property. So we have a kind of siloed set of doctrines that deal with public forum law, and other doctrines that deal with the non-public forum, whatever that is, where the government is just restricting use of its own property as opposed to reaching out and restricting people’s ability to engage in speech through, say, criminal prohibitions or on private property.
A related doctrine I’d be curious about is government speech doctrine: the extent to which the government can partner with individuals to engage in speech and, through that partnership, actually control to some extent what those individuals are able to say; the extent to which the government can promote speech in ways that either distort the marketplace or, in boundary cases, maybe not even assert that it’s its own speech because it’s speaking out in ways that have the effect of discouraging private speakers from speaking.
There are all sorts of really tricky lines to draw here, but at least all of these lines proceed from a well-developed body of case law about government speech. And the other one I’m curious about is one that has died down a little more recently but used to be a big deal in the forties, fifties, and sixties, which is First Amendment law’s differential treatment of different media.
So a case involving broadcasting is different from a case involving megaphones, which is different from a case involving parades, et cetera. Up until the shift to this new neutrality system, we used to treat all of those little sub-domains as unique in themselves.
I’m just curious: does international human rights law tend to take that kind of very domain-specific approach, or does it take a broader, sweeping approach based on some general set of principles like neutrality? Thank you. Thanks, that’s a very thoughtful question. In a nutshell, international human rights law is not as developed as US First Amendment law and does not have all these nuanced areas of development that you’re pointing to.
So there isn’t much to offer there, but I do think its simplicity is its benefit here, because what it asks is that the rules not be vague and that the company not use hammers when tweezers would do, which is easier to apply than a complicated doctrine of law.
But some scholars have made the points you’re making: could we not learn from some of these areas of First Amendment law to develop how we think about social media content moderation? I think it’s worth thinking about, definitely. Let me give you a hypo, which is not so hypothetical.
It was my First Amendment exam this year: what happens if the US government owns TikTok? It brings all of those questions right to the fore, because that has been proposed as we think about whether TikTok is going to be for sale; there was some talk of having a US sovereign wealth fund own TikTok.
What would that mean? I think almost all of TikTok’s content rules would be unconstitutional if the government adopted them. And then we get into this tricky question of what happens with a government-controlled algorithm relating to speech: are the decisions about the prioritization of speech on a government-owned US TikTok going to be government speech,
or are they going to be regulation of a speech environment? I tried to make it less of a headache for my students: when I gave that question, I said, Congress declares that US TikTok is a public forum, so what does that mean going forward? But it’s not clear. Could US TikTok organize its algorithm in a way that is pro-American?
You might just say, all right, that’s like government speech. I think that raises all kinds of really interesting questions. I’ll say one last thing, which is that we always try to argue by way of analogy when we talk about the medium question: what is a social media company?
Is it a parade? Is it a newspaper? Is it a cable station? Is it a bookstore? We try to latch onto all these different First Amendment and other precedents, and it has elements of all of those things. But one thing I hope in the next few years, as we get this avalanche of internet cases that has started with Moody and Murthy, is that courts start recognizing that, yes, you can reason by analogy here, but this is really a very different kind of system,
and that we need to start thinking about the relationship of these companies and what they are providing in a different way than we would some of the legacy media.
Am I up? So my question is about funding of courts. Something I know you’ve done some thinking about is how much more funding there is for human rights tribunals, particularly the European Court of Human Rights, applying Europe’s regional human rights rules, compared with the Inter-American human rights system, which has much more protective expression rules.
The court there just can’t hear that many cases. So I’m interested both in how you think about funding and in how it shapes perceptions of what the human rights rules really call for. On top of that, I’ve heard the Meta Oversight Board referred to as the best-funded human rights court in the world, in the sense that there are a bunch of really respected experts such as yourself thinking about human rights questions that the real courts can’t manage to get to.
On the other hand, there are questions, justified or not, about the independence of Meta’s Oversight Board. So, yes, that is the universe I would like you to talk about: where the rules come from, given who can afford to adjudicate cases.
I think that’s a fascinating question, and I have a couple of thoughts to share. It’s correct that when a region funds its human rights court, you have more jurisprudence coming out of it than out of other regions, and that’s what’s happening with the European Court of Human Rights.
It has a flood of jurisprudence, and that can influence the rest of the regions. The UN does not have a flood of money for human rights mechanisms. The committee I mentioned that monitors implementation of the treaty is made up of people doing volunteer work, pretty much: professors and their research assistants, all pro bono,
doing it on top of their day jobs. The Special Rapporteur position that I mentioned was unpaid when David Kaye held it, and it is still unpaid; it was him and his students making that happen. So the UN is not providing funding that would produce a lot of jurisprudence. And the states of the UN, including the United States, do not wish to have a legally binding mechanism adjudicating ICCPR rights,
and thus there is no jurisprudence from a legally binding body. When the United States joins human rights treaties, we join them as non-self-executing treaties, which means you cannot go to court and raise claims under the ICCPR, but those rights are protected through other domestic legislation.
So the US is in compliance, but it becomes an issue of jurisprudential competition. We don’t have American judges interpreting ICCPR rights to compete with all the other judges coming up with decisions around the world, and we are losing in that jurisprudential competition.
Then there is the issue of Meta, a company, funding this human rights oversight body, essentially funding a body that can critique it perpetually throughout the year. Not every company is going to want to do that. Who wants to pay money to have a group of people say, you got it wrong, you got it wrong again, you’re not doing this,
look at that mistake, you should be doing that? That’s a real challenge here, I think: if we think this is a good idea, which I do, having human rights oversight, having outside oxygen coming into these companies, how do we incentivize them to fund these types of bodies? It’s not easy, and it shouldn’t be that only the wealthiest companies can do it. But I think it is going to take people realizing that this is a norm-building moment, and stakeholders should be seeking this type of independent scrutiny. I’m not sure if that fully answers your question on these competing fountains of jurisprudence, but those are my thoughts.
Yeah,
I think someone is right behind you, so let’s have one more. Great, thank you. We talked about companies themselves as rights holders for speech, and we talked about users as rights holders as against the companies when we analogize the companies to governments acting under international human rights law.
But is there also an international human rights law intersection for when companies themselves can be liable as actors, based on their algorithmic amplification of speech? The Myanmar case was already brought up, but there are also cases of individuals, children, suffering harm,
as in a recent case involving a TikTok challenge where people were harmed. Is there an international law component to that? And would that potentially be a case where the incentives these companies are facing are in tension with their legal liabilities to prevent harm?
Yeah, great question. As it stands, the international law obligations do not directly apply to the companies. Some of this is getting tested a bit in US courts in terms of corporate liability for human rights abuses under the Alien Tort Statute, and whether companies can be liable for aiding and abetting or facilitating various human rights violations.
We don’t see those cases in the social media world; maybe it’s because of Section 230 that the companies are protected. We do see it with a huge range of other companies; that is being tested in the Alien Tort Statute context. So I would say this is evolving, watch this space, but I’m not sure there’s much to watch yet on social media, due to Section 230.
But you could envision a situation in which the company is an active participant in the facilitation of actual physical harm, human trafficking for example, where they know they are part of it; that might take it out of the Section 230 realm.
So there are situations on the margins where you could see a company actively engaging in something that violates human rights, not by sitting back and letting user speech happen, but through other conduct that would amount to a kind of facilitation test that it would be failing.
So please join me in thanking Evelyn Aswad and Nate Persily for very interesting presentations.
The next, and actually last, constitutional conversation of the academic year will be a week from tonight, in the same place at the same time, with Professor Charles Tyler of the University of California at Irvine, who is going to be talking about a development in constitutional methodology in which the court allows the history of particular provisions to be relevant to their constitutionality today.
For example, if the six-person jury was adopted as a way to make it easier to convict African Americans, or maybe the other way around, or if various provisions of law having to do with education were enacted in a kind of anti-Catholic way,
how does the genealogy of those statutes affect their constitutionality today? That’s been coming up with increasing frequency, and Chas Tyler will be talking about it next week. So I look forward to seeing you there.
