MC “Weekly” Update 9/6: We will not be silenced!

Alex and Evelyn discuss a bunch of things that happened while they were on “summer break”: OpenAI recommending using GPT-4 for content moderation, but not enforcing its own political content moderation rules for ChatGPT; the EU’s Digital Services Act coming into force and a report from the Commission that has us worried; Meta rejecting its Oversight Board’s recommendation to suspend Hun Sen’s account; a bunch of First Amendment decisions; and much more!

Show Notes

Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:

  • OpenAI published a blog promoting how the company’s most powerful large language model, GPT-4, is being used to update platform policy and enforce content moderation rules faster and more consistently than human reviewers. – Priya Anand/ Bloomberg News, Reed Albergotti/ Semafor, Simon Hurtz/ The Verge, Lilian Weng, Vik Goel, Andrea Vallone/ OpenAI
    • Did they forget a section on the importance of human review? Not quite, but you have to actually read the blog to see that this is experimental and focused on updating platform policies and then assisting human experts with policy enforcement.
    • Alex has been testing GPT-4-based moderation tools in the classroom with his students and surprised Evelyn with his optimism. – Casey Newton/ Platformer 
  • Meanwhile, the company is failing to enforce its own policy against using ChatGPT to create materials that target specific voting demographics. Everything is a content moderation issue, and the policy you have is the policy you actually enforce. – Cat Zakrzewski/ The Washington Post 
  • Apple is back in the news again under pressure from a new child safety advocacy campaign pushing the company to do more to combat child sexual abuse material (CSAM) after the company scrapped plans to scan user content for CSAM. – Tripp Mickle/ The New York Times, Lily Hay Newman/ Wired 
  • Meta announced it took down the largest Chinese influence operation, known as “Spamouflage,” saying the campaign was fairly basic and ineffective despite operating thousands of accounts across more than 50 apps. – Sheera Frenkel/ The New York Times, Sarah E. Needleman/ The Wall Street Journal

X-Twitter Corner

  • Musk is threatening to sue the ADL, but that doesn’t actually mean he is going to sue the ADL. It’s yet another humiliating example of Musk undercutting the authority of X “CEO” Linda Yaccarino. – Sebastian Tong/ Bloomberg News, Jordan Valinsky/ CNN

Happy DSA Day!

  • The European Union’s Digital Services Act (DSA) came into force for the largest online platforms and search engines on August 25. – Théophane Hartmann/ Euractiv, Chris Velazco/ The Washington Post
    • Companies released blog posts about how oh-so-seriously they are taking their obligations with a mix of actually positive steps and completely performative measures. – Nick Clegg/ Meta 
    • Meanwhile, the European Commission released a “Case Study” on risk assessment under the DSA for Russian disinformation, and boy-oh-boy do we have thoughts. It’s a scary document that seems to validate concerns from those who worry the DSA will be used to repress speech. – European Commission 
  • Meta decided not to follow the Oversight Board’s recommendation to suspend former Cambodian Prime Minister Hun Sen’s account. The decision raises questions about what the multi-month Board case achieved and how Meta views the purpose of the Board when it disregards its expert input in high-profile cases like this. – Meta Transparency Center
  • Casey Newton has an in-depth report on why the notorious Kiwi Farms website is still up and what content moderation looks like at the infrastructure layer. – Casey Newton/ Platformer

Legal Corner

  • Another U.S. Supreme Court content moderation showdown seems inevitable as the Biden administration filed a brief encouraging the Court to take up the NetChoice cases challenging Florida and Texas laws that would restrict moderation action on political content and accounts. – Rebecca Klar/ The Hill, Makena Kelly/ The Verge, Cat Zakrzewski/ The Washington Post
    • The solicitor general’s brief stated the obvious by arguing there is a circuit split, the questions in the cases are important, and all parties want the review. 
  • A federal judge in Texas ruled a state law requiring age verification for adult websites is unconstitutional, blocking enforcement due to a “chilling effect” in a state where sodomy is illegal. – Ashley Belanger/ Ars Technica, Adi Robertson/ The Verge
    • The Texas Office of the Attorney General is expected to appeal the decision in the case brought by the Free Speech Coalition, the adult entertainment industry trade association.
  • A federal judge in Arkansas ruled that a law requiring age verification and parental consent to create an account on social media websites is likely unconstitutional, granting NetChoice’s request to block the law from taking effect on September 1. – Andrew Demillo/ Associated Press, Rebecca Kern/ Politico
    • Evelyn is not quite sure what to make of these two pretty decent opinions that faithfully applied precedent, but it will definitely be a big year in First Amendment law for the internet and we will be here to cover all of it!

Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!

Transcript

Evelyn Douek:

A number of people had new job titles for me, obviously. New coach of the Matildas, which would be exciting. Also, head of X’s trust and safety, and Trump’s First Amendment constitutional lawyer, both of which, thank you very much, fans of the podcast, so-called fans.

Alex Stamos:

I’ll take jobs you cannot pay me any amount of money to take for 100, Alex.

Evelyn Douek:

Exactly. Hello and welcome back to Moderated Content’s weekly… slightly random and not at all comprehensive news update from the world of trust and safety.

Alex Stamos:

Download weekly. Either, yes.

Evelyn Douek:

Yeah, not at all weekly. Not at all comprehensive. But definitely an update from the world of trust and safety and college sports, apparently, with myself, Evelyn Douek and Alex Stamos.

So, Alex, when I announced that our unplanned summer break was happening on a couple of the platforms, I said this was just for boring logistical reasons, our respective travel plans just didn’t keep lining up. But then, I said people could give suggestions of the best conspiracy theories about why we’d been silenced, censored for a month, and people really delivered.

Alex Stamos:

Pure cause.

Evelyn Douek:

Yeah.

Alex Stamos:

Here it comes, people: the truth.

Evelyn Douek:

That’s right.

Alex Stamos:

The truth is, they don’t want you to know.

Evelyn Douek:

Exactly. So, the obvious explanation was, my guess is that you took the Matildas loss pretty hard and couldn’t front up to host the number one sports and content moderation podcast until it was a distant memory. That one’s pretty close to the truth, and it’s still too soon. So, yeah.

Alex Stamos:

I’m pouring one out too. I’m pouring my hot coffee on my lap. Oh, God. Pouring one out for the Matildas, that was a mistake. So…

Evelyn Douek:

Thank you. Okay, a number of people had new job titles for me, obviously. New coach of the Matildas, which would be exciting. Also, head of X’s trust and safety, and Trump’s First Amendment constitutional lawyer, both of which, thank you very much, fans of the podcast, so-called fans.

Alex Stamos:

I’ll take jobs you could not pay me any amount of money to take for 100, Alex.

Evelyn Douek:

Exactly. There is no world, no salary that could make me do that. So, after reviewing the terms and conditions of those jobs, I politely declined and returned to my podcast day job. And then, there’s the conspiracy theories of course.

So, clearly, our enemies in the Moderated Discontent podcast have tunneled under the Stanford campus to try and mount a sneak attack, but their tunnels accidentally drilled into the secret biohazard lab under the Hoover Institution releasing a cloud of sentient malaise, which is excellent. Definitely, right.

Alex Stamos:

Well done. Yes. We’ve got to censor this because we can’t let this truth out. We cannot let the truth out. You heard it here first, but there’s no way they will let this podcast get out. It may never be posted.

Evelyn Douek:

That’s right. We’re rapidly running around trying to hoover up…

Alex Stamos:

That would show.

Evelyn Douek:

… all of that sentient malaise, and get it back underground. There were also conspiracy theories about cephalopod illuminati. And obviously, that the government had assumed a role similar to the Orwellian Ministry of Truth. And finally, shut us down. So, a little close to the bone, that one. But we persevere.

Alex Stamos:

Oh, okay, great.

Evelyn Douek:

All right. So, as it turns out, if you take a month off in Moderated Content, there’s a lot to cover. And so, there’s a bunch of stories on the table for today. One is something I’ve been wanting to follow up with you about for a while, Alex. So, during our break, OpenAI said that it had been using GPT-4, its latest publicly available large language model, to moderate content and suggested that other platforms could consider using its API to do the same.

And Casey Newton, in his newsletter Platformer, which we have talked about many times and is excellent, and which I think we’re going to talk about again today, had a story about this, and there were some quotes from you in there being relatively optimistic, actually, about this technology and using it for content moderation.

I was a little surprised. I wanted to push you on this a little bit because we’ve talked a lot here about the risks and dangers of platforms over-indexing on technology to do content moderation and under-investing in the humans who do the work. And so, I’m curious, is this technology materially different? Is there something about this that makes you more optimistic than other technology?

Alex Stamos:

I was actually surprised at how GPT-4 did. So, the basis of this is I teach a class in the spring with Dr. Shelby Grossman on trust and safety at Stanford. And we put computer science students together with political science and international relations students, and folks from a variety of majors, into teams to try to solve a trust and safety problem over 10 weeks.

So, obviously, there’s a limited amount they can do. But they just pick something bad happening on a platform, come up with their policy response, come up with some technical response, and they build a content moderation bot. We do this all on Discord as our platform for testing. And then, they do a big poster session at the end where they demonstrate what they learned and what they were able to build. And we get these guest judges from industry, and it’s actually a really fun time. And the students always do incredible work.

And for years, what we’ve been doing is giving the students multiple options for, okay, when you’re building your content moderation tool, you need to have some kind of automated component, but how you do it is up to you. Here are a number of options. And this year, because GPT-4 was just coming out, I was able to get API keys from OpenAI, which was very nice of them to donate. And we gave API keys out to our students and we said, “Hey, one option is you could try to use GPT-4, right? Let’s see what happens.”

And they had to go and actually test it. They had to create test sets of data that they ran, and they created what are called confusion matrices, which measure the true positives versus false positives and the true negatives versus false negatives, to compare what worked best. And a bunch of these teams, we had thirty-some teams, a bunch of them found that GPT-4 was actually the highest performing of the tools they tried.

They could also try the Perspective API. They could try a number of dedicated content moderation APIs. A number of them built their own AI systems, so they used off-the-shelf stuff, and then they trained their own models. And GPT-4 did shockingly well. So, a lot of people use GPT-4, they use it in ChatGPT. There are actually more complicated ways you can use it in the API. And one of the things you can do is give it examples, and then say, “This is what I want from this example.”

And so, that’s how most of them did it. In the API, they would say, “Here are five things I want you to label as good. Here are five things I want you to label as bad.” And it did shockingly well. Now, that doesn’t mean that you can solve all kinds of problems with it. But out of all of the solutions, it was the one that I thought performed best, for English text, which is a very limited…

Evelyn Douek:

Important qualifier, yup.

Alex Stamos:

It’s an important qualifier. But for English text, it worked shockingly well. That being said, not at all economical, right? So, it worked fine for these little bots.
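For readers who want to see what the few-shot approach Alex is describing might look like in practice, here is a minimal sketch in Python. It assumes the OpenAI Python client; the policy wording, labels, example messages, and model name are illustrative placeholders, not the actual setup the students used.

from openai import OpenAI

# Minimal sketch of few-shot moderation with a large language model, along the
# lines Alex describes. Assumes the OpenAI Python client (v1+) is installed and
# OPENAI_API_KEY is set. The policy, examples, and labels are hypothetical.
client = OpenAI()

FEW_SHOT_EXAMPLES = [
    # (text, label) pairs shown to the model before the item to classify
    ("You're an idiot and everyone hates you", "VIOLATES"),
    ("I strongly disagree with this take", "OK"),
    ("Go hurt yourself, nobody would miss you", "VIOLATES"),
    ("This game is trash, I'm uninstalling", "OK"),
]

def build_messages(text):
    """Builds a chat prompt: policy instructions, labeled examples, then the item to label."""
    messages = [{
        "role": "system",
        "content": "You are a content moderation assistant. Label each message "
                   "VIOLATES if it contains harassment or targeted abuse, otherwise OK. "
                   "Reply with a single word: VIOLATES or OK.",
    }]
    for example, label in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": example})
        messages.append({"role": "assistant", "content": label})
    messages.append({"role": "user", "content": text})
    return messages

def moderate(text):
    response = client.chat.completions.create(
        model="gpt-4",   # any chat-completions model would work in this sketch
        messages=build_messages(text),
        temperature=0,   # deterministic labels make evaluation easier
        max_tokens=3,
    )
    return response.choices[0].message.content.strip()

print(moderate("Nobody asked for your opinion, loser."))

The labeled examples stand in for the “here are five things I want you to label as good, here are five things I want you to label as bad” prompting Alex mentions; a real deployment would tune the policy text and examples to the platform’s rules.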

Evelyn Douek:

Right.

Alex Stamos:

No platform could really afford to use GPT-4 at its current cost. But it was a good demonstration that perhaps with future LLMs, as the costs go down and as people gain the ability to deploy this stuff in-house on their own infrastructure, this is a direction, a shockingly good direction, for having LLMs help you do content moderation.

Evelyn Douek:

That’s the first optimistic thing I’ve heard in this space for a little while.

Alex Stamos:

Yeah.

Evelyn Douek:

And I think it is important to… One of the things that frustrates me in this conversation, and we’ve talked about this before, is the importance of keeping in mind all of the various trade-offs that happen in this space. With technology, there may still be errors or there may be problems with understanding context, but there are other benefits. For example, there’s been tons of reporting about the mental stress and problems for moderators who are forced to look at a lot of disturbing content all day.

Also, the speed and capacity of the technologies. But, I mean, I’m curious, is it that GPT-4 was just surprisingly good by comparison? Or is it good against some absolute baseline? And how much do we still need humans in the loop, basically?

Alex Stamos:

They absolutely need humans in the loop.

Evelyn Douek:

Right.

Alex Stamos:

But if you’re going to have an automated system, I was surprised. I expected that the custom-built things the students put together, where they trained more traditional classifiers on specific training sets, which is what our students have done for years, building their own classifiers on top of training data, would have performed better.

I’m guessing, if you were Google or Facebook sized, you could still build a dedicated classifier that does better than an LLM. But if you’re not, if you’re a smaller company, then LLMs turn out to be shockingly good. And the other thing that surprised me is how little work it took.

So, you only had to do prompt engineering. The level of skill needed to build a Python app that makes an API call out to an LLM, where you prompt it to do things correctly, versus going to PyTorch and building a classifier yourself, that’s a totally different skillset, right? And so, for the effort put in versus the quality of the output, I was surprised.
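As a companion to the sketch above, here is what the evaluation step Alex described earlier, running a hand-labeled test set through each tool and comparing confusion matrices, might look like. The data and the use of scikit-learn are illustrative assumptions, not the class’s actual harness.

from sklearn.metrics import confusion_matrix, precision_score, recall_score

# Hypothetical ground-truth labels (1 = violates policy, 0 = ok) and the
# predictions some moderation tool produced on the same items. In the class
# setup, each team would build one such comparison per tool they evaluated.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Rows are true labels, columns are predictions:
# [[true negatives, false positives],
#  [false negatives, true positives]]
cm = confusion_matrix(y_true, y_pred, labels=[0, 1])
tn, fp, fn, tp = cm.ravel()

print("confusion matrix:")
print(cm)
print(f"precision: {precision_score(y_true, y_pred):.2f}")  # of flagged items, how many truly violate
print(f"recall:    {recall_score(y_true, y_pred):.2f}")     # of true violations, how many were caught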

Evelyn Douek:

Okay. But I guess, in one story that points to the potential limits of this kind of approach, the Washington Post had a great story, I think it was last week, about ChatGPT’s own content moderation and its notional policy that bans political campaigns from using ChatGPT to create materials targeting specific voting demographics.

So, for example, I can’t remember the exact examples, but something like: give me an argument aimed at a 40-year-old Black woman voter in Texas for why she should vote for Trump. You can create pretty specific propaganda materials or political materials. And ChatGPT notionally has a rule that this is banned. And the Washington Post found that that rule is on paper only.

And as always, the rule that you have is actually the rule that you enforce, and this one is not being enforced. You can still go and get this content; I went and did it myself the next day and found the same thing. And so, there are real limits here on the content moderation that ChatGPT is doing.

Alex Stamos:

Yes. When I talk about using ChatGPT for content moderation, we’re talking about much more specific rules around hate speech or specific text. I think ChatGPT and any other large language model provider should not make any promises around political content. It is effectively an impossible promise.

And I think this is one of the things that we need to come to a better consensus on. We’re seeing these stories, like you and I have talked about, it’s like 2015 to 2016 again, where back then the New York Times and the Washington Post were writing, “I saw something on social media I didn’t like.”

And now, they’re writing stories of, “I saw something come out of ChatGPT I don’t like.” And those are, one, kind of useless stories, two, obvious, and three, I think, push us in a direction that is not healthy. You need to have reasonable standards for what these large language models will and will not put out, and to say it cannot put out anything political is just not a reasonable standard, right?

It’s just not something that’s ever going to be enforceable for any kind of fully functional model. I also think a little bit of the content moderation work on these systems is going too far because it is making them not very useful. I saw a great example where somebody was trying to translate text, and they were asking Bing to do text translations. And the Bing large language model said, “I don’t want to translate this for you because it’s anti-Semitic,” right?

Because the Washington Post will write a story of, “I could make Bing say something anti-Semitic.” But if you’re trying to read something in Arabic or Farsi and you’re trying to do a report on what is going on in the world, your translator needs to be able to accurately translate for you even if the output is a bad thing.

And so, I’m not very happy with the direction the media blob is taking on this because it is not a healthy direction. And I think OpenAI needs to do a better job of saying, “There are some things that we can’t prevent, and that’s just going to be it, and you’re going to have to live with it.” The truth is, you can get a large language model to say lots of different bad things, and there’s no way to prevent that.

And what you really need to focus on is, if their license says that you can’t use it that way, then that’s great; that’s actually appropriate. This is a commercial product. The other thing that’s silly is, you would have to be really stupid to use ChatGPT to power an actual political campaign because you create this huge record that could be subpoenaed later, and you have to pay a ton of money. Like you and I have talked about, anywhere people are pushing the edge of the LLMs, they’re going to do it locally using open-source models that they can fine-tune. And so, there are going to be political campaigns that use LLMs to create a huge amount of content. They’re just not going to use ChatGPT to do it.

Evelyn Douek:

Yeah, I completely agree with that. And I think the underlying premise of this is that, “Oh, no, this kind of content would be so convincing,” and that’s just not true. We’ve talked about a bunch of studies recently that just show political persuasion is really, really hard. It’s not that people see something, and then suddenly change their votes. That’s not how it works.

And so, I think we don’t want to fall into exactly that same pattern of over-hyping the dangers of this kind of thing. But we also, I think don’t want to fall into this same pattern where like, companies try and have it both ways where they seem to have this policy on paper that says, “Oh, here we are, we have this responsible policy.” But then, don’t actually enforce it in practice. I think companies need to own what their policies are, and then we can have a more rational conversation about that. Okay, speaking of trade-offs, so Apple is back in the news again to do with child safety issues. I’m going to throw this one over to you, Alex, what’s going on?

Alex Stamos:

Yeah. So, there’s a new kind of political group that has raised a bunch of money and is blaming Apple for not doing enough around child safety. The context here is that Apple came up with a very controversial set of proposals a couple of years ago that I was against. I thought they went too far. I wrote a New York Times op-ed with Matthew Green of Johns Hopkins about the problems with that, that effectively they built a very complex cryptographic system that would allow them to scan people’s devices, on the device itself, for CSAM.

They tried to balance all the privacy issues with math. My concern there was, I felt like they were being a little too cute. Once they open the door to on-device scanning, saying, “Oh, well, we have this complex math” was not going to hold back the legal demands, and it was going to embolden folks to ask for more. I think on-device scanning is a great tool for things where you protect the user, especially for devices that kids are using, and Apple has continued that work.

So, they’ve continued the work down the path of, if you flip this button, nudes cannot be sent or received by your child, which does not call the cops. It just stops images from being sent. I think that’s a totally appropriate use of on-device scanning. But that’s different than “we’re going to call the cops about something that’s on your device.”

Anyway, this group doesn’t like that they gave that up and is pushing against them. I was quoted in this article. One of the things I don’t like about this group is, they don’t have a limiting model here. They don’t have a theory of how far you should go. My position here is, if you’re going to call for any kind of weakening of encryption or any kind of scanning that could be considered intrusive, you should have a theory of, “Well, where does that stop? What are you still willing to allow to be private?” And there doesn’t seem to be that here.

So, I don’t think it’s going to go very many places. But I think the big battleground here is the UK because the UK is actively looking at a law that would effectively require some kind of scanning like this. And that is a big active fight in parliament right now. As of today, there are still big arguments going on.

And so, I think part of this is to try to influence the UK, and then eventually Europe to require scanning. And I think civil libertarians are putting up a big fight in the UK. But unfortunately, the UK’s history around civil liberties is not so great.

Evelyn Douek:

Excellent. Okay. Another headline in our holiday period was around Meta’s quarterly adversarial threat report. And the fact that it has done its biggest single takedown of Chinese influence campaigns in the last period. So, Alex, this is obviously right in your wheelhouse. What happened? And why does it matter?

Alex Stamos:

Yeah, so this is their regular quarterly report in which they talked about taking down standard campaigns from Turkey, from Iran. There was an interesting one on Russia, on the way Russia is trying to influence people by, instead of directly pushing content, creating lookalike content that pretends to be legitimate media and trying to push that.

But the most important one was China. They had taken down 7,700 Facebook accounts, 954 pages, 15 groups, 15 Instagram accounts. So, a huge global network that the Chinese were running. This is a huge deal. It’s shocking to me that folks aren’t covering this at all. I mean, a much smaller, much, much, much smaller Russian campaign in the 2016 election was the first thing everybody, including uncles and aunts, asked me about for years right after we released it.

And then, Meta takes down this much larger Chinese campaign, and everybody just yawns. And so, I think it demonstrates a couple of things. One, the PRC, as we’ve talked about, is now by far the most active nation in trying to manipulate social media and global media their way. They are just able to go well beyond what the Russians are able to do, but using the same kinds of techniques.

But the other thing it demonstrates is that we still need to have active work here. And there are a couple of ways the cuts hurt us. One, X has gotten rid of their team. So, what we would’ve seen a year ago when this adversarial threat report came out is that Twitter, on the same day, would do a press release saying, “We have taken down X thousand accounts, we thank our friends at Meta,” and then all of that data would be provided to folks like us, and we’d be able to do a write-up. Meta does provide that data to us and other researchers via a portal. But Twitter, now if you write about this stuff, they threaten to sue you.

And so, almost certainly, this Chinese campaign was humongous on Twitter, but nobody can find out because, one, they’ve gotten rid of the team that looks at this kind of stuff, and there’s nobody there working with Meta, it seems, at this point. So even if stuff is handed to them on a silver platter, here are the exact IP addresses, the exact phone numbers, here are the links to Twitter. Often, what you see in these accounts is cross-linking between platforms, and that helps you indicate which accounts are tied to the same people. None of that’s going on. And second, independent researchers like us aren’t able to do the work anymore because Elon Musk will sue you if you criticize him.

And so, there are a number of people who are trying to politicize any kind of content moderation work. I just want to tell them that people who are saying that doing content moderation is innately a political thing are in this case helping out the Chinese government. You are helping out the People’s Republic of China in their attempt to manipulate global opinions in their direction.

So, if that’s what you want to do, that’s fine. But a number of the people who say these kinds of things, who try to politicize any of this work and try to attack folks who do any independent research, are also very much anti-China. Now, in this case, Musk has huge economic interests in China.

So, the fact that he’s ignoring Chinese influence campaigns, perhaps not shocking considering how much of his net worth is tied up in the PRC, but his political supporters probably should be a little more careful in thinking about what the outcome is when you make it impossible for companies to do this kind of work.

Evelyn Douek:

Okay, well, that is a perfect segue, thank you, to our X-Twitter corner. Okay. So, I was offline most of the weekend, hiking in the beautiful Rocky Mountain National Park in Colorado. Absolutely stunning, highly recommend. And I come back to civilization and find all of these headlines about how Musk is going to sue the Anti-Defamation League for defamation, and that that’s somehow hilarious.

And I was just feeling, first of all, get me back to the mountains. But second of all, it’s amazing to me that we’re still writing these headlines that Musk is going to sue the ADL based on a tweet saying that he’s going to sue the ADL. I’d just like to remind people of all of the headlines that we’ve seen over the last year or so about Musk being about to set up a content moderation council and other things that have aged extremely poorly.

So, I don’t know if it’s worth going into so much the ins and outs of this latest particular anti-Semitic set of tweets and rants and things that are going on over the platform. And I think it’s possible that we should be talking about this a lot less than we have been. This is maybe the realization that I had staring off into the distance from the top of a mountain because I think a lot of this is just… It reminds me a lot of Trump in the way that he commands public attention by threatening to do things that really, I’ll believe it when I see it.

And so, we will cover a complaint or a lawsuit once it gets filed, if it gets filed. But I think we need to have a lot more skepticism about the way that we think, and talk about these things.

Alex Stamos:

Yeah, I totally agree. And the fact that people just amplify everything he says when he does not follow through is problematic. I think this particular story, there are two things that are newsworthy. One, this blowup is a great demonstration of why Linda Yaccarino is not going to make it as Twitter CEO because the entire blowup was downstream of her taking a meeting with the ADL. She’s trying to get advertisers back on the platform. She would like to get the ADL to at least be neutral towards that.

And so, she has a meeting with them. There are some tweets that go out that are like, there was a frank exchange of views, the useless stuff that activists and corporations say after any kind of meeting between activists and corporations. And that’s what kicked off this whole “ban the ADL” thing, which Musk then supported and amplified, and which then led to him making the threat.

And so, one, he’s directly undermining Yaccarino, right? She takes a meeting on a Friday, and by the Monday of a holiday weekend, the person who is theoretically not the CEO of her company is threatening that the org that she just met with and tried to be friendly with is going to be sued. I think the other thing is, it is important to point out how Musk’s behavior really empowers some pretty horrible people.

I’m going to read, just so people understand how this is being seen. Andrew Torba is the founder of Gab. He has said a lot of truly anti-Semitic things. He’s written a book about Christian nationalism. If you look at the cover of the book, it looks exactly like the fascist poster from V for Vendetta. So, this is not subtle.

Evelyn Douek:

Fuck. Right.

Alex Stamos:

Right.

Evelyn Douek:

Jinx.

Alex Stamos:

Jinx. So, this is what he said about Musk. He said, “In under five years, we went from having every single one of our guys banned from the big tech platforms to the richest man in the world, noticing, naming and waging total war on our largest enemy, the ADL while running one of those platforms. Let that sink in, keep the faith we are winning.”

So, whatever Musk actually does, some of the worst people in the world, these horrible anti-Semitic people, believe that they are winning, and it is very much enticing them to the platform, and it is creating, I think, a death spiral for Twitter. So, I’d rather people talk less about the lawsuit and talk more about, oh, this is what these people think. People who used to be banned from the platform and have been brought back believe that the richest man in the world is on their side, which is not a position you want to be in. I would not want Andrew Torba saying that I’m on his side, personally.

Evelyn Douek:

Right. Yeah, Linda has lasted longer, I think, than I thought. We should maybe start the head-of-lettuce-versus-Linda watch to see how much longer she will be there, because it is truly amazing, the humiliation that she’s prepared to keep enduring.

Alex Stamos:

Right. She’s in the transition period between “Wow, that was a silly mistake you made, but at least you gave it a shot” and never being able to work for a legitimate company again. And Musk doesn’t care because he’s so incredibly rich, but I would guess that Yaccarino wants to have a job again. And if she’s seen as directly facilitating fascism, then that’s going to be tough.

Evelyn Douek:

Right. Okay. Over to Europe.

Speaker 3:

[foreign language]

Evelyn Douek:

A very belated happy DSA day to all who celebrate. The Digital Services Act officially came into force for the VLOPs, the very large online platforms, on August 25th. They are being forced to start complying, and they have released a slew of blog posts talking about how oh-so-very seriously they are taking their obligations.

So, we’ve seen a bunch of stuff rolling out from Meta, YouTube, Snapchat, TikTok, a bunch of places. We’ve talked about some of these before. I think the thing to say here is that some of this is legitimately really good and some of it is hilariously performative. So, if we take, for example, Meta’s blog post on the compliance steps that it’s taking, some of them are actually really substantive and useful, depending on whether they actually get enacted.

So, for example, EU users are now going to get more transparency and an opportunity to appeal around things like the demotion of posts, not merely posts being taken down. When things are de-amplified, they’re going to get extra transparency around that, which seems good. That seems like a really good thing. It is a content moderation action that the platform is taking that can in many cases have impacts as severe as the removal of a post.

But then, we’ve also seen a bunch of these platforms, we’ve talked about TikTok before, but also now Facebook, introducing more availability of chronological feeds to respond to the EU’s demands for chronological feeds. We have talked about recent research on this from fantastic social scientists showing that those aren’t going to have the impacts that policymakers think they will. It’s really just this performative angst about algorithms that has resulted in these platforms rolling out features that get them the compliance tick but really aren’t going to do much good in the world. So, it’ll be interesting to see.

Alex Stamos:

It’s a big win for one of your old Harvard colleagues, so… Congratulations. You have gotten the EU to do something useless, so congratulations.

Evelyn Douek:

Right. Yeah, so I mean, it’ll be interesting to see which side of this wins out in terms of the substantive versus performative aspect of the DSA as it rolls out in the coming years. The big platforms are delivering their risk assessments to the commission. That’s something that will then be audited. And then, eventually, there’ll be much more transparency to the public around that. But that’s not where we are. It’s still very much in the implementation phase.

In the meantime, something I think we should talk about here is that the Commission has released a case study, a DSA case study, about how it thinks about risk assessment and risk mitigation under the DSA. And it uses the case study of Russian disinformation campaigns. So, I have a bunch of thoughts about this. But Alex, I can see you just champing at the bit here. So, what do you think of this case study?

Alex Stamos:

Right. So, let’s just set up what this thing is. It’s got the gripping title Digital Services Act: Application of the Risk Management Framework to Russian Disinformation Campaigns, published by Unit F2, Digital Services, of Directorate F, Platforms Policy and Enforcement, of the Directorate-General for Communications Networks, Content and Technology.

So, this is a great start, because the group that created this is named like something out of Terry Gilliam’s Brazil, right? From a bureaucratic naming perspective, right? Okay. So, this is a report, 74 pages, about what this group within the European Commission believes about how the Digital Services Act should apply to Russian propaganda.

We are no fans of Russian propaganda on this podcast. I feel like you and I have demonstrated that. I’ve demonstrated that to a level that has gotten me death threats and other kinds of fun stuff. So, we’re not big fans of Russian propaganda. This report is bad. It is bad in many, many ways. It is bad in its lack of methodology for what they identify as Kremlin-aligned. It is not something we would publish in the Journal of Online Trust and Safety. This would be sent back. This would be desk-rejected for not having appropriate documentation of the methodology for the work they’ve done.

It’s really bad in its smearing of anything the Russians do from a propaganda perspective as being the same kind of thing. It includes fake accounts. It includes real accounts that seem to be people who are probably being paid by the Russians and that aren’t labeled. It includes talking about Sputnik and Russia Today and the such. And it blends all of that stuff together, basically saying that if the Russians have any outlet on Western social media, American social media as seen by Europeans, it is a bad thing.

But I think this report really demonstrates something. If this is the direction we are going, it demonstrates that what the critics of the DSA have said, that the DSA is about censorship and about control of what people can see and say online, that it’s not about responsibility and transparency but about censorship, it means that those people are correct. Because this is about saying any kind of content that is pro-Russian is bad.

They dress it up in lots of words. But they blend it all together in a way that basically implies that if the Russians are able to get anything out, it’s a bad thing. And I think that is incompatible with European values, and incompatible with the risk categories they talk about here, about negative impact on the exercise of fundamental rights.

Europeans have a fundamental right to hear about a side that I completely disagree with in this war, to see media that does not follow the beliefs of Brussels. That is also a fundamental right. The report does not take into account any of that, and it doesn’t call for anything that specific so much as basically, “We want you to suppress any kind of Russian point of view,” which I think is completely and totally inappropriate coming from a government.

If this was written by some pro-Ukrainian group, that’s totally fine. If this is from an activist group, it’s totally fine. Coming from Unit F2 of Directorate F of the Directorate-General, I think it’s completely inappropriate. And it’s really disturbing to me because I think it does demonstrate that Europe, at least some parts of Europe, is on the warpath to really control what people can say and see on social media in a way that is really incompatible with European values.

Evelyn Douek:

Yeah.

Alex Stamos:

Anyway, I have some opinions about it. And I’m sure you’re pro, right? You look at this thing…

Evelyn Douek:

Yeah. And now, for the opposite point of view. No, look, I completely agree with all of that. Just to be clear, the report concludes that the level of Russian disinformation on social media platforms during the invasion of Ukraine qualifies as a systemic risk under the DSA.

And so, therefore, the VLOPs have to do something about it. And then, it concludes that what the very large online platforms have done is insufficient and ineffective and hasn’t adequately reduced the risks of this campaign. And the basic metric that appears to be based on is that there was engagement with some set of accounts. And as you say, it’s an extremely broad set. It’s very opaque what the methodology is here. But we’re not just talking about, like, RT and state propaganda, which would be one thing.

And also, by the way, it’s not clear to me that all of that content is even illegal under human rights law. But anyway, it also, as you said, lumps in basically anything that is pro-Kremlin or pro-Russia in this situation. And it’s just an extremely broad definition of what is basically “harmful speech.”

Alex Stamos:

Or maybe even, there’s a whole section about what they call Z propaganda. But from the definition, it sounds like almost anything that has a Z in it. Every story from the beginning of the invasion where you saw lines of tanks and guys with Zs painted on them, is that Z propaganda? It goes back to this old European idea, like the Germans just banning the swastika, which is like, okay, well, good luck having any kind of historical record with that.

That’s one thing if your goal is to memory-hole a horrible part of your past and you don’t mind overcorrecting there. But in this case, when you’re talking about coverage of current events, really critical coverage of current events, there’s no way enforcement of the kind they’re calling for would not over-censor, even if you believe it’s okay to get rid of every Russian point of view, which I don’t, and which I think is not compatible with European values. I mean, just to take a step back, we’re supposed to be the good guys here, right?

Evelyn Douek:

Are we the baddies? Yeah.

Alex Stamos:

Yeah. It’s like Europe and the United States are supposed to be free countries where people can have these kinds of points of view. Do you want to stop Russia from secretly manipulating your networks? Absolutely, right? And I feel I have a lot of credibility here on trying to do that. But that doesn’t mean that you should be able to wipe out their point of view or the things they say or even imagery or propaganda they push if people are talking about that. That is a normal part of a functioning society.

And for European bureaucrats to say that stuff should just be completely memory-holed, which is effectively what they’re saying here, is just completely ridiculous. It’s not enforceable. It is not what Europeans want. But it is scary because nobody is talking about this report. Very few people are talking about the DSA, and it demonstrates the democratic deficit in Brussels, that there perhaps is not the pushback on this kind of overreach that would stop it from happening.

So, if they effectively are able to use the DSA to go after American companies, and people are just like, “Oh, yay, they’re going after Google and Facebook, that’s great because they’re American and I don’t like them, and they don’t pay taxes,” and this and that, then we could end up with a significant erosion of the fundamental rights of Europeans under the guise of protecting the fundamental rights of Europeans.

Evelyn Douek:

Right. And I just want to really highlight that point, because it’s one thing for the Commission to release this, but it’s another thing that this report hasn’t gotten any coverage, and I don’t think it has really been acknowledged for what it is. And I think part of it is that it has dressed up this pretty controversial and extreme political position and conclusion in extremely technocratic, dry, boring language.

I slogged through this report, but my goodness, it was hard work. And I think that it’s basically saying this political point of view, that platforms need to do more to take this content down, is just a technical conclusion based on the language of the DSA. And I think that’s a pretty scary thing. If you read the document, there’s no real discussion of freedom of expression or the trade-offs, like the risk that if we take this position, something is going to be lost, the false positives, and how we think about those kinds of problems. The report basically assumes that there is a clear bright line between the good stuff and the bad stuff.

And we all know that platforms are really great at differentiating between the two, and there are no problems whatsoever when we require them to be really heavy-handed about this. And I think, really importantly, it gives the lie to the party line. The line about the DSA has been that this is all about process and transparency and due process.

So, even on DSA day, or around then, when Thierry Breton, the chief enforcer of the DSA, was tweeting about it, he posted that content moderation does not mean censorship, that in Europe there will be no ministry of truth, and that it’s all about transparency. And I think that this report really gives the lie to that point of view, because it is quite clear that what this report is saying is that for platforms to comply with the DSA and mitigate systemic risks in Europe, they need to much more aggressively censor Russian disinformation, however that is defined in these reports.

Alex Stamos:

Which includes not just disinformation, but also plain propaganda.

Evelyn Douek:

Right. Exactly.

Alex Stamos:

It also hints toward an extraterritoriality that Americans, and everybody else in the world, should find scary. They talk about the terms of service, and they basically talk down the fact that these things are geoblocked. For the most part, there are state media that are sanctioned in the EU that are blocked; at least that is a public, democratic thing of countries saying, “You’re not allowed to have Russia Today in this country.” I don’t agree with that decision, but at least European citizens can vote on that.

But then, they’re basically implying that it is not sufficient to just geoblock, that that stuff should be completely removed from the platforms, and that the DSA is not satisfied with only geoblocking. They don’t say that explicitly, but that’s effectively the outcome of their entire section on terms and conditions. Not good, Europe, and not a good look.

The hard part for me is figuring out how influential this report and this group are. Is this just some sub-sub-subgroup in Brussels that is trying to make their bones and say some crazy stuff, or is this actually going to be influential? That’s the context that I have difficulty understanding.

Evelyn Douek:

Yeah, I agree. I also don’t know. I don’t know who does know because as with everything, this is all going to be about how these things are enforced. It’ll also be really interesting to see how much platforms push back. We have talked a lot about the importance of platforms pushing back on demands from governments in other contexts, and we’re not often talking about the European Union. We’re often talking about… We’ve talked a lot about India or those kinds of markets.

So, it’ll be interesting to see how much pushback there is in this pretty important market for the platform. So, it’s going to be an exciting couple of years as the DSA rolls out. But yes, this was not an encouraging report.

Okay. To Cambodia. Another thing that happened in the last few weeks: we have talked about Meta’s Oversight Board recommendation that Meta should suspend entirely the account of Hun Sen, the prime minister, for inciting violence and threatening his political opponents. That was a decision that took eight or so months for the Oversight Board to come to, after deep thinking and consultation and really thinking about human rights.

And then, Meta finally released its response to this and said, “Ah, nah,” basically. They’re not suspending Hun Sen’s account. Facebook’s response, Meta’s response, I apologize, showed some annoyance, I think, with the Board’s inconsistencies, which I think was not unfair. Meta said, “Look, the Oversight Board has previously underscored in multiple cases the importance of voice in countries where freedom of expression is routinely suppressed, and the importance of having the platform as a place for people to share information about what’s going on.”

And this is in a context where Hun Sen had threatened to suspend Meta entirely in the country if his account was suspended. And so, Meta’s basically saying the existence of the platform is important as a place for dissent. My question is: but what’s the point of the Oversight Board? There are difficult equities here. This is a difficult question of balancing certain things. I don’t know if it’s so difficult when there are direct incitements of violence against the political opposition.

But I can understand that there are some equities on both sides. The whole point of setting up the Oversight Board, you would have thought, was to outsource that decision to a bunch of experts, so that Meta could just point to them and say, “Look, they’re the ones that made the decision. Our hands are tied. Sorry.”

So, I’m left wondering, at the end of this eight-to-nine-month process, what the point of that whole thing was, and what Meta views as the role of the Oversight Board ultimately in its content moderation ecosystem, if in cases like this it’s going to just disregard the recommendation.

Alex Stamos:

And it’s bizarre to me that they’re disregarding this one because, one, this is exactly why you have an Oversight Board, like you’re talking about. You have this global group of thinkers who are trying to think through a difficult situation, which is: what should people in Menlo Park do about platforming a pretty horrible person? I mean, he was part of the Khmer Rouge.

So, I feel pretty confident saying that he’s not a great guy, right? And he’s done horrible things in Cambodia. But what’s the responsibility of people in Menlo Park to make that decision? That is actually a very difficult issue, which is why you have these people. And so, yes, I completely agree. Why have the Oversight Board if this is the thing you ignore it on?

If you were going to ignore it on something happening in the United States that’s super political and directly tied to the operation of the company, at least that would make sense. This one doesn’t even make sense to me, why they would ignore this one. So, yes, there does not seem to be any practical reason other than they just don’t want to be blocked in Cambodia, which is a place where they make no money and where only horrible things can happen.

It makes me feel like the people who learned the very hard lessons in Myanmar around the Rohingya, that was a big learning moment inside of Facebook, unfortunately, those people might be gone, the people who understood that sometimes optimizing for access, optimizing for not being blocked, means that some pretty horrible stuff can happen that your platform gets pulled into.

Evelyn Douek:

Yeah, and you’ve got to wonder what the people on the oversight board are thinking today. They tweeted that they stand by their original decision, which is like, “Oh, I’m so scared.” This is a pretty explicit rejection of what they’ve recommended, and I don’t see them getting up in arms about that either. And so, I’m just curious to see what they think of their role here and how they approach their role given this pretty explicit and severe undermining of their authority.

Okay. We mentioned the Platformer newsletter earlier in the podcast. And so, I just want to throw back to you, Alex, you wanted to give a shout-out to a recent issue that I also thought was excellent.

Alex Stamos:

Yeah, and Platformer just covered the Kiwi Farms issue. I think it’s an excellent recounting and a good discussion of some really challenging equities to be balanced around how content moderation should happen at the infrastructure layer. Kiwi Farms is a really horrible place that wants to be a horrible place.

We talk a lot on this podcast about people who at least are trying, or who have said, “We do not want to be a place where you do things like directly attack people, call for violence or try to get people to commit suicide.” Kiwi Farms is all those things. There are a number of deaths that have been linked to Kiwi Farms. And Platformer does a good job of covering how Kiwi Farms was taken down by Cloudflare. That’s something we discussed on this podcast as being inevitable because Kiwi Farms was effectively calling for the stalking of an individual, and it could have led to violence.

And Cloudflare took them down, and it has jumped from provider to provider. It still exists, but it’s hard to find. And how should we think about those equities? So, anyway, Platformer is a great resource for this because Casey goes and finds these stories that get minimal coverage in the mainstream media, or that get covered in the media very shallowly. And he goes and actually finds people and talks to the folks behind the scenes, and he has built a reputation that he will treat you fairly. And so, a lot of people who are still at companies, I think, still talk to him. So, it’s a recommended subscribe for me.

Evelyn Douek:

Totally agree with all that. I will say Casey was picking up on reporting by Nitasha Tiku in the Washington Post as well on this particular Kiwi Farms issue, which I thought was an excellent story, a deep dive on how activists have continued to approach this and try to get Kiwi Farms taken offline. So, yes, co-sign that recommendation.

Okay. Let’s go to the legal corner. I apparently can’t go on holiday without some significant development in First Amendment land at the moment, and there are a billion things that have been going on. But I’m going to highlight three that have happened over the last month, and I’m going to do it briefly, given that this is already an extended episode.

So, on August 14th, the solicitor general submitted her brief to the US Supreme Court in the NetChoice cases about whether the Court should grant cert. As a reminder, these are the cases that arise out of laws from Texas and Florida that put a whole bunch of requirements on platforms to continue carrying certain content, and also a whole bunch of transparency measures about reports and explaining their decisions and things like that.

This is a massive circuit split. It’s the most circuit-splitty of splits, between the Fifth Circuit and the Eleventh Circuit, where the Eleventh Circuit struck down the Florida version of the law and the Fifth Circuit upheld Texas’s law. And so, it was absolutely no surprise, really, that the SG said that the Court should grant cert, and her brief essentially stated the obvious: the courts below disagreed, these are really important questions, and everyone agrees that the Court should weigh in.

I think one of the things that was more interesting is that this brief from Biden’s SG was a surprisingly libertarian brief. In terms of talking about the government’s capacity to regulate content moderation, it was essentially saying that the government doesn’t have a lot of capacity to regulate the editorial discretion of platforms in terms of the content moderation that they do, which is an interesting thing to see from the SG of a Biden administration that has been talking a lot about the content moderation that the platforms need to do.

And so, that was fascinating. They also split the baby on the transparency stuff and said, look, the requirement that platforms individually explain every single decision that they make, the sheer volume makes that impractical and it is unconstitutional. But they said the Court shouldn’t weigh in on the more generalized transparency requirements, like aggregate content moderation reports and things, and should leave those for another day.

So, it’ll be really interesting to see how this plays out, but it looks like the court will grant review. And this will be something that we’ll be talking a lot about in the coming months. And it’s interesting to see that the Biden administration really seems to be throwing its hat in with the platforms in terms of how it’s thinking about these laws. So, that’s fascinating.

Two other important and surprisingly good decisions came out in the last week or so. So, in Texas, Texas’s Age-Verification Law was enjoined by a district court for violating the First Amendment. This is a law that required porn sites basically to verify the age of their users, and also to post disclaimers, which are health warnings about the dangers of pornographic content.

And basically, the court said, “We’ve had cases about this kind of stuff before. What are you even doing here? There’s nothing that I can do as a district court. There’s pretty clear Supreme Court authority on the fact that age-verification requirements like this have an unconstitutional chilling effect and violate the First Amendment. Thank you very much.”

And the court also basically said that the health disclaimers were going to be extremely ineffective. It’s pretty clear what they’re trying to do here; they’re notionally supposed to be there to prevent minors from watching porn. But the court says, “Look, if you read these warnings, they use all these massive words that minors aren’t really going to engage with. They talk about pornography being potentially biologically addictive, desensitizing brain reward circuits and weakening brain function, blah, blah, blah.” This is not the kind of thing where a 10-year-old is going to go, “Oh, okay, I better not do that then.”

Alex Stamos:

Right. As the father of teenage boys, I’d be like, “Oh, wow. They would be absolutely deterred from this.”

Evelyn Douek:

Exactly. So, that was a surprisingly good decision out of a district court in Texas. And then, Arkansas's law that would impose age-verification requirements on social media companies more broadly, although not on all of them, was also enjoined. The Arkansas law would have required some social media companies to verify the age of all account holders and then get parental consent for minors.

It was not clear who the law applied to. So, there’s this funny part of the decision where Snapchat is like, “We originally thought we weren’t going to be targeted by the law, but then it turned out that we were.” And then, in court, one of the experts said, “Yes, Snapchat would definitely be covered by this law.” And then, the state government lawyer got up and said, “Oh, no, Snapchat’s definitely not covered by this law.”

So, no one knew who the law was going to apply to, which was part of the unconstitutional vagueness of this law. And then, just in general, again, the court said, “Look, long-established precedent says that these kinds of verification requirements are unconstitutional because they impose significant burdens on adult access to constitutionally protected speech. And that is well-established law.”

So, it's going to be a huge year in First Amendment law, because all of these decisions are almost certainly going to be appealed, and we will no doubt be talking about them as we go on. So, stay tuned for the best content moderation and sports podcast out there as we continue to cover these developments. And speaking of our sports segment, Alex, do you have an update for us?

Alex Stamos:

So, one more legal update that I just saw come in. It actually happened a couple of hours ago, but I just saw the result. It's about the UK child safety stuff: Ofcom has basically said a notice can only be issued where technically feasible.

Evelyn Douek:

Yeah.

Alex Stamos:

So, there actually is big news today that the government is giving up a bit on trying to force scanning and the breaking of end-to-end encryption.

Evelyn Douek:

Yeah, and congratulations to all the privacy advocates and people who have been working really, really hard for this outcome and raising the alarm about it. It's a significant achievement.

Alex Stamos:

So, now for what everybody's been waiting for, all our listeners who come to us for their only sports news…

Evelyn Douek:

Right. Blah, blah, blah, blah.

Alex Stamos:

Yes, yes.

Evelyn Douek:

Right.

Alex Stamos:

So, we've already heard about the Matildas losing. In other news, in conference realignment updates, my California Golden Bears, as well as my Stanford Cardinal, have found a new home from the rapidly dying Pac-12, which was down to four schools. The two of them will be joining the Atlantic Coast Conference next year because…

Evelyn Douek:

I have a question.

Alex Stamos:

Yeah, the Underwood?

Evelyn Douek:

Yeah, where’s the Atlantic Coast? How does that possibly make sense?

Alex Stamos:

Right. I think this is like, one of those theories that all the oceans are connected. So, really, you’re just looking at the Far Eastern Atlantic…

Evelyn Douek:

Gotcha.

Alex Stamos:

… when you look out from literally the rim of Memorial Stadium in Berkeley, you can see the Pacific Ocean, or now the furthest eastern reach of the Atlantic Ocean, yes. And so, it is good for these schools that they're not left totally behind. They had to take somewhat embarrassing cuts in money, but they will survive that.

But again, college football is incredibly stupid and is destroying itself because of the race for money from ESPN and Fox. But football season has started, so we can actually enjoy it. Cal and Stanford are both 1-0, beating up on cupcake opponents, and their real tests are this weekend. Auburn is visiting Cal. This is an opportunity, because Cal is really pissed about being treated this way and being seen as second-rate.

So, it is a definite potential trap game for Auburn. I will be there, looking very much forward to that game. And then, Stanford is playing at USC, and per my rule to always root against USC in everything, if the gaping maw of the earth could open up and swallow the University of Southern California, I would probably be okay with that.

And so, let's root for both Stanford and Cal this weekend, and there will be updates next week on what is going on in college football. There was also a huge upset by Colorado, which was an incredible game to watch if you're into college football. Deion Sanders is the coach now, and he is incredibly quotable. So, it's going to be fun to watch the Buffs this year, because they've gone from sucking to defeating TCU, which used to be one of the best teams in the country. So, anyway, it's going to be a really incredible season, I think.

Evelyn Douek:

Okay, excellent. We promised an update next week, and we will stick by that barring some big disaster. We are back from our summer holidays and excited to get back into a more regular schedule.

Alex Stamos:

We are back.

Evelyn Douek:

Yeah. So…

Alex Stamos:

And then, we'll be teaching soon. Are you teaching this fall?

Evelyn Douek:

I'm teaching a small 1L reading group on judicial opinions and a public law workshop at Stanford. So, nothing directly on topic, but I am really excited to start seeing… The students have been around and they're very excited, and that's always so refreshing to see, especially when the 1Ls come in. And it reminds you why it's pretty cool to be here.

Alex Stamos:

Oh, they’re so cute and very young.

Evelyn Douek:

Yeah, it’s like, they’ve got a couple of months of that [inaudible] optimism left, so yeah.

Alex Stamos:

Yes.

Evelyn Douek:

And have you started teaching?

Alex Stamos:

In a couple of weeks, Riana Pfefferkorn and I will start teaching the Hack Lab, our intro to cybersecurity class, which should be a lot of fun as it always is, and something we have to constantly update. It's always fun to teach international policy, political science, and law students how to hack things. I feel like I'm giving children knives when I do this and then letting them loose on the world.

Evelyn Douek:

What could go wrong? Okay.

Alex Stamos:

Yeah, here’s Metasploit, have a great time, yeah.

Evelyn Douek:

So, with that, this has been your Moderated Content weekly update. This show is available in all the usual places, including Apple Podcasts and Spotify, and show notes are available at law.stanford.edu/moderatedcontent. If you missed us at all, let us know. We barely ever ask for reviews, but it'd be nice if you showed us some love there, and we will be back in your feeds next week.

This episode wouldn’t be possible without the research and editorial assistance of John Perrino, Policy Analyst at the Stanford Internet Observatory, and is produced by the wonderful Brian Pelletier. Special thanks also to Justin Fu and Rob Huffman.