MC Weekly Update 10/9: Social Media During War

Alex and Evelyn discuss how the horrific events in Israel over the weekend make clear how important social media is during fast-moving historical events, and how X/Twitter has fundamentally degraded as a source of information. They also discuss China's ramped-up crackdown on app stores, and the Supreme Court's cert grant in the NetChoice cases, which could reshape the internet.

Show Notes

Transcript

Evelyn Douek:

Like many people, I think I acutely felt the loss of what Twitter was this past weekend. It was never perfect, but it was a place where if you approached it carefully and with sufficient awareness and skepticism, you could get information about unfolding events faster than the legacy press would update their stories, and you’d also get a much more varied, much richer account of what was going on. You’d get all sorts of perspectives and hear all sorts of voices and see things that you wouldn’t see in the mainstream media.

Hello and welcome to Moderated Content’s weekly slightly random and not at all comprehensive news update from the world of trust and safety with myself, Evelyn Douek, and Alex Stamos. There is, of course, one big story dominating many people’s thoughts right now, and that is the horrors and atrocities unfolding in Israel and Gaza over the weekend. Words fail in this situation, which is unfortunate when you’re trying to produce a podcast.

One thing we are not going to do here is the thing that many people on social media do, which is give hot takes or repeat unverified claims about things that are not in our area of expertise. We are not a foreign policy or geopolitics podcast. We are certainly not Middle East experts. But because everything is a content moderation issue, this story is also a story about social media and platforms and content moderation. Social media platforms are a key part of any conflict now.

In fact, they are themselves conflict zones. They are a frontline in some ways, and they certainly hold trails of evidence; they are where many people get a lot of their information and where a lot of the narrative is sought to be set and controlled. Like many people, I think I acutely felt the loss of what Twitter was this past weekend.

It was never perfect, but it was a place where if you approached it carefully and with sufficient awareness and skepticism, you could get information about unfolding events faster than the legacy press would update their stories, and you'd also get a much more varied, much richer account of what was going on. You'd get all sorts of perspectives and hear all sorts of voices and see things that you wouldn't see in the mainstream media. I really missed that this weekend. But Alex, I'm curious what your experience was over the past weekend trying to follow these unfolding events.

I assume that when the news of the conflict broke on Saturday morning, you went and booted up your academic access to Twitter's API so that you could carefully track everything happening on the platform and give us an account of the news as it unfolded.

 

Alex Stamos:

Yeah. First, this is an incredibly horrible thing. I was speaking to some Israeli friends this morning, and effectively everybody in the country is affected. Everybody knows somebody who's died or is possibly even a hostage, and the entire country is mobilizing. This is a small enough country that there's nobody who isn't touched by this. But yes, like you said, this isn't a foreign policy podcast. Let's talk about the online side. Yes, unfortunately, that API access, as our listeners know, does not exist anymore.

Normally during a circumstance like this, SIO would've spun up our collection of all the public discussion on this topic so that we could do some analysis of what was going on and then tip off Twitter if we saw anything that was inauthentic, if we saw copypasta and fake videos and all that kind of stuff. That access doesn't exist. The team at Twitter doesn't exist. I think this moment is really where the product decisions that have been made over the last eight, nine months have come home to roost, and Twitter is completely useless.

In fact, it is a negative force now in these kinds of situations, which is directly against what Musk wants it to be, or says he wants it to be. It is not the most accurate source for real-time events. It has, in fact, now been designed to be easily manipulatable. And as a result, ironically, by "allowing" all of the speech, the speech of real people, of individuals who don't have troll farms behind them, gets buried and eliminated from the timeline. The example I posted was just something… Again, we don't have API access.

This is just me scrolling through and trying to read stuff on Twitter, and it's super obvious. I posted something on Mastodon, if you go to cybervillains.com/alex, that's where my Mastodon is, of just a basic montage of the exact same video posted over and over again with the text, "If Russia did this in Kyiv, it'd be all over the news. Everyone would be screaming genocide. But it's happening in Gaza and no one cares about civilian casualties. Israel is a terrorist state."

This statement was posted by dozens and dozens, perhaps in the end hundreds, of accounts with a video of what looks like buildings on fire. Well, it turns out the video is from a celebration in Algeria of the win of the local football (soccer) team, where fireworks and flares were distributed to all these people. One, I had no idea this had happened, but you can find contemporaneous videos of the event on Facebook and other places from 2020. It's very clearly not Gaza. It's pretty incredible.

Talk about marketing, as well as a lack of real fire codes, I think, in Algeria. It does look like the city's on fire, but it really is a celebration. If you look closely, you see lots of fireworks and stuff. It's clearly not war. You can hear people cheering and celebrating. But this video is posted over and over again, and this is exactly the kind of thing: super basic copypasta, exact same text, exact same video. Community Notes was doing their best.

There’s a number of people who labeled Community Notes on this, but they weren’t able to keep up to it and a bunch of the accounts are blue check marks. Therefore, the reason I wasn’t even following any of these accounts, it showed up in my feed because they were blue check marks and they’re paying to get lift. A bunch of product decisions have made Twitter useless and actively negative in this case. First the blue check mark, that you can buy blue check marks and there’s no verification.

The idea that professional propagandists are not going to pay eight bucks a month was absolutely ridiculous. It was never a good one. We've talked about this, but clearly whoever's behind this, the troll farms, whoever's doing this, they'll pay the eight bucks a month for the uplift. The second was the elimination of the Twitter team that does this work, because they're "propagandists" and they attack elections and yada, yada. Just all this stupid stuff, right?

There used to be a dedicated team at Twitter whose job it was to build tools to look for this civic spam, to look for people manipulating the platform. Who’s behind this? I don’t know. It is consistent with what we’ve seen out of Iran. We know Hamas is basically giving interviews to The Wall Street Journal saying, “Oh yeah, Iran helped us plan this.” If Iran is doing propaganda support for Hamas, it would not be shocking.

If you go to io.stanford.edu and search for Iran, you can find a bunch of reports that we did, with data provided by Facebook and Twitter and such, on Iranian social media manipulation. This is totally consistent: the language, the message, all that kind of stuff. It doesn't mean it's them. Again, we can't really do the research right now, and I would not be shocked if this turned out to be wrong. But in any case, if you get rid of the team that looks at that kind of stuff, these guys can run wild.

And then the third big decision, which I think is driving probably not this one but a bunch of other things I've seen, is that you have a bunch of these accounts that are like, "I'm an open source intelligence analyst. I'm a Middle East expert." These people pop up with completely fake videos: videos that have been mislabeled, videos from other conflicts, videos from other continents, or situations where they're saying, "This is going on in Gaza," and you look close and you're like, "I'm pretty sure that's Nigeria."

But it doesn’t matter because of people’s emotions getting it up and they mislabel this stuff. They’re doing it because they’re grifters, because Musk has created this economic incentive that if you get people to retweet your stuff and to interact with it, you now get a direct check from Twitter. I expect these are probably just our traditional spam farms around the world have figured out, oh, here’s a way I can make a bunch of money, like the famous Macedonian civic spam farms that infected Facebook before they got kicked off.

These guys are back and active because you're getting… In this case, you don't even have to sell ads. You get the money, a check straight from Twitter into your account. A number of people have pointed this out, and again, Community Notes is doing… I think Community Notes continues to be one of the best products Twitter ever shipped in this area, and people are doing their best to try to keep up, but it's not working. It does not offset the fact that all these product decisions have created a huge economic incentive to clout-chase with fake information.

It's a huge failure of Twitter, and an incredibly sad time for it, because now is a time when people in the Middle East, as well as around the world, really need to understand what's going on.
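[Editor's note: the "super basic copypasta" Alex describes, identical text posted by many accounts, can be surfaced with very simple clustering. The sketch below is purely illustrative, not any platform's actual tooling; the function names and thresholds are our own. Real systems layer on fuzzy text similarity, media hashing, and account-age signals, but even a naive exact-match version would catch reposts like the Algeria video text.]

```python
# Minimal, illustrative copypasta detector: cluster posts whose normalized
# text is identical and flag clusters spread across many distinct accounts.
import hashlib
from collections import defaultdict

def normalize(text: str) -> str:
    # Crude normalization: lowercase and collapse whitespace.
    return " ".join(text.lower().split())

def copypasta_clusters(posts, min_accounts=10):
    # posts: iterable of (account_id, text) pairs.
    clusters = defaultdict(set)
    for account_id, text in posts:
        digest = hashlib.sha256(normalize(text).encode("utf-8")).hexdigest()
        clusters[digest].add(account_id)
    # Keep only text shared verbatim by at least min_accounts accounts.
    return {h: accts for h, accts in clusters.items() if len(accts) >= min_accounts}

# Hypothetical usage: two accounts posting the same text form one cluster.
sample = [("acct_1", "If Russia did this in Kyiv..."),
          ("acct_2", "if  russia did this in kyiv...")]
print(copypasta_clusters(sample, min_accounts=2))
```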

 

Evelyn Douek:

Right. I mean, to be clear, in moments like this, it is an impossible task to do effective content moderation. There are a number of intractable, impossible trust and safety choices that a platform has to make in these moments. I mean, I was talking to people who knew people who were finding out about their friends or family being killed or taken hostage from videos appearing on these websites, on Twitter, on social media.

And that’s a horrific way to find out about these things that are occurring. On the other hand, there are obviously significant equities on the other side for this kind of what is evidence not being taken down, not being disappeared by social media platforms.

There’s a whole bunch of academic literature and thought around how platforms should moderate in this context and things like evidence lockers for taking down what could later become evidence in war crimes tribunals and things like that and the intractable choices that platforms have to make in terms of thinking about what is in the public interest and what people don’t need to see or what people certainly don’t need to come across unwittingly in their social media feed, given the graphic nature and the highly confronting nature of a lot of this footage.

It is hard to be a platform moderator in these moments. It is a difficult choice, a difficult position to be in. But this is a situation where Twitter is not only… It's not just that it stopped trying; it's making all of these choices that make it actively hostile to being a trustworthy, productive environment in these moments. You've named a bunch of choices. I mean, these are not just product choices made nine months ago. These are product choices made this week. For example, the choice to start removing headlines from the link cards for news articles.

When you’re scrolling through your Twitter feed now, if someone’s posting an article to a news website, you no longer get a link card that says, “Here’s the headline and here’s what this article’s about.” You just get an image. It becomes impossible to tell what the link is about, what the story is about. You have so much less context about what’s going on. It becomes a lot less obvious where the authentic information is in your feed. The news stories aren’t jumping out as much.

The ostensible reason for this is aesthetic. Musk is saying that it makes everything a lot simpler and a lot cleaner, maybe, but they're also saying the quiet part out loud, which is, of course, that this is preventing a lot of traffic going to news sites. I mean, Musk is actually just trying to prevent, or is being actively hostile to, people getting access to authoritative information. He wants people to stay on X regardless of whether that is beneficial for them or the information environment more generally.

 

Alex Stamos:

Well, to back up, you're totally right that every social media company is now wrestling with these trade-offs: they don't want incredibly violent imagery, but if you want to document the atrocities that are happening in real time, you have to allow it. They don't want celebration of terrorist attacks, but you want documentation of terrorist attacks. It is an incredibly complex place to be, and you want to facilitate real-time information sharing without facilitating the spread of lies about what's going on.

But at a minimum, you could prevent large groups from manipulating the platform. You could prevent people from saying the same thing over and over. You could prevent a video that is known to be false from being posted over and over, and they're not even doing that. There's the normal conflict between equities here, and then there's where Twitter is, which is five miles down the road of completely not caring.

And like you said, there's the decision to remove the headlines. And, one, it seems clear from announcements or discussion that apparently they're going to create ad units that are impossible to tell are ads. Does the FTC have anything to say about advertising disclosure?

 

Evelyn Douek:

No. I mean, there’s just certainly no laws about that. I mean, you would know that I guess, unless you accidentally fired the entire department that used to work on those issues.

 

Alex Stamos:

Right.

 

Evelyn Douek:

Right.

 

Alex Stamos:

Right. It’s a fun place to be if you’re a company with a FTC consent decree to just ignore all the FTC rules around online advertising. Good luck with that. But it’s just a terrible product management decision because it makes the platform actively use our hostile. You hear people doing this all the time that they think something’s a link, they click on to, it’s a photo, so it just blows up. In other cases, they don’t click it because they don’t think it’s a link. It’s impossible to tell.

We talked about this a while ago: there was this huge upswell of support for Musk's purchase of Twitter among a certain group of people in Silicon Valley, including a bunch of people who take a lot of pride in positioning themselves as early product managers at the companies they came from. They are very quiet now that Musk is demonstrating that he could not get hired as an APM, an associate product manager, at any company in Silicon Valley. His product management decisions are terrible.

It turns out there’s a reason why these people have jobs to figure out how people use these things and to try to make it easier for them so that they spend more time on the site. This is the kind of thing that would get you… It would literally get you fired if you implement something like this at any other company. It’s just very sad, and especially in the situation where real-time information is really a lifesaving thing. People are trying to communicate peer-to-peer and subject.

I think what we’ve seen is a bunch of people on WhatsApp, you’ve seen people on Telegram, so now it’s turning into you have to get information. You can’t trust anything that you get just from the public media. It has to be from somebody you trust of, “I saw this thing, or there are terrorists on this road, or they’re shooting people in this location.” That’s the kind of thing you have to get from somebody you trust. You can’t trust it if it’s on Twitter.

 

Evelyn Douek:

Right. We’re a year down the track now almost of Musk’s acquisition of Twitter, not quite, but getting there. There are other sites that have really started to establish themselves. But I mean, I was finding that there really is nothing like what Twitter was. There is no substitute. Nothing has really established itself as the alternative, I think. I have been finding that there’s a lot more influx and a lot more activity on some of these platforms in the last few weeks given some of the choices that Musk has made and some of the things we’ve talked about on this podcast.

I mean, in particular on Threads and Bluesky, I am seeing a lot more people join, but it was still really obvious over the past few days that they just don't have that critical mass, that constant flow of up-to-date information. I'm curious, Alex, whether you think this is still a temporary interregnum between the Twitter era and whatever comes next, or whether we're just going to be in this post-single-public-square age where things are a lot more fragmented and we're never going to get back something like what Twitter was. Do you have a prediction on that?

 

Alex Stamos:

Yeah. One, I want to point out that once again, some of your old colleagues at Harvard who said that everything that was wrong with the information environment was a monopoly issue have been demonstrated to be totally wrong. It turns out that when you have a fracturing of the social media ecosystem, people get to choose what consensual reality they live in. They get to choose platforms that are actively supporting false content. I don't know if things will reconsolidate. Threads, I think, was very disappointing for a number of people.

It’s either algorithmic, it’s the discovery. There’s not one specific reason, but Threads really fell down and was not the place where you could find real-time information. Probably the closest I saw was Bluesky. There are some people on Mastodon. Mastodon’s discovery mechanisms are still a mess. The whole federated system means that people have completely different views into this big fractured namespace, and it’s just hard to maintain a conversation because you really don’t know what people are seeing.

I think Bluesky had the best conversation, because a number of the journalists and other people who verify information before posting it have decamped to Bluesky, but it just doesn't have… There was way less engagement on Bluesky than on Twitter. Now, the engagement on Twitter was garbage. It was lots of clearly manipulated stuff. It was lots of copypasta. There are lots of accounts that are only three, four, five months old replying to every single thread on this stuff. Lots of people trying to raise money.

There’s obviously a lot of grifters of, look at this picture of this baby who was hurt by an Israeli bombing. I saw the same ad over and over again of what looks like a GoFundMe, but it’s clearly just a scam site, that kind of stuff. I don’t know what’s going to happen. I hope we return to a place where at least one of these platforms gets back to the level of usefulness Twitter was during breaking events, but right now we don’t have it. You know what I was doing? I was watching BBC World this weekend a lot.

 

Evelyn Douek:

Right. I was refreshing The New York Times homepage much more often than I normally do.

 

Alex Stamos:

Thanks to Stanford Libraries, we have access to Haaretz and a bunch of different international and Israeli newspapers, and you can watch their live streams. What was interesting was that BBC World was showing video clips and very aggressively hitting on, "We have verified that each one of these is real," and they mentioned the person who took the video: "This is from Ahad in Gaza, and this is from this guy, this is from this guy."

It was really interesting to see the mainstream media specifically respond to this moment by very aggressively saying: you could see these clips on TikTok or Twitter, but we have verified that these are the real places and that these were actually taken by real people.

 

Evelyn Douek:

Right. I mean, yeah, it’s not obvious that it’s a terrible world or a bad outcome for us that we are returning to more authoritative sources, more mainstream. I guess you feel that lack of the real time inflow, but there are obviously benefits to relying on purely verified information as well. But we are in this privileged situation, as you say, Alex, where we have lots of access to a wide variety of authoritative news sources that a lot of it is behind paywalls for many people, and a lot of it isn’t at their fingertips.

They’re not going to have access or be shown that information if they’re not going out and seeking it. Thinking about the broader public impact on the public sphere from that being the only place to get that kind of news I think is a more challenging question.

 

Alex Stamos:

This thing’s not going to be over. I mean, I think we’re going to return to this, because there have been these Israeli-Palestinian blowups that last a couple of weeks. That’s not this, right? The involvement of Iran, and Iran’s really explicit supporting here, and the fact that this is possibly aimed at disrupting the Saudi Israeli peace talks, this is pulled in a bunch of other Muslim countries that do not like Iran. Sunni countries.

We have the possibility of a prolonged regional conflict here that the United States gets pulled into, in which case having good information about what's going on is going to be really important, and it's going to be a big deal in our election cycle for the next year. Having false information spread continuously about what's going on in a conflict that the United States is definitely going to get pulled into is going to be bad. Everything's lining up for 2024 to not be fantastic, from the perspective of the health of the information environment or the health of our democracy.

 

Evelyn Douek:

We’re not in the prediction business, but absolutely one thing that can be sure is that this is going to be… There are no good outcomes. This is going to be horrific. It is going to be drawn out, and I’m sure that we will be talking about it again on the podcast. There are also no good segues from this, just my heart aches. This is an awful thing to watch and thoughts with people affected. Moving on though, just to another couple of stories that we wanted to cover this week before closing out.

Alex, The Wall Street Journal had some reporting that had my Stamos Spidey senses going off, because it's about your favorite topic: Apple and China. This was a story about meetings between Apple staff and Chinese officials about upcoming rules that will require Apple to be much more restrictive about foreign apps in its App Store in China.

These rules mean that by next July, Apple will no longer be able to offer apps, including some of the Western social media apps that people have apparently still been able to download, unless those apps are registered with the government under new rules issued a couple of months ago. I was actually a little bit surprised by the framing that this was something that hadn't happened already, that so many of these apps were still available in the App Store in China, but I'm curious for your thoughts.

 

Alex Stamos:

As everybody who listens to this knows, I'm not a big fan of Apple's decisions in the People's Republic of China and how much assistance they've given to the Chinese Communist Party. This is an interesting thing in that Apple has censored the App Store in the past, but it has been, I think, on request. The folks who run the Great Firewall website (there's a nonprofit that tracks what is blocked) have also done work here. You can basically query Apple's web services and download a manifest of every app that is in the App Store.

You do that for different regions and compare them, and they've demonstrated which apps are blocked in China. But it's generally been on demand. As this Wall Street Journal article points out, funnily enough, a number of American and Western social media apps, Instagram, Twitter, Facebook, YouTube, WhatsApp, have been allowed to stay even though they're blocked at the network level. China never took the step of specifically asking for them to be taken down and instead relied upon the network, but there are a lot of VPN options.

Again, Apple will remove VPNs when asked, but they pop right back up. It looks like the PRC is inverting this. Instead of saying, "We're going to ask you to take stuff down," it's going to be, "You have to have permission to put stuff in there." It's going to turn from a deny-list to an allow-list model, where an app has to be registered and approved by the Communist Party. That is going to be a massive change. I mean, you're going to end up with tens of thousands of apps getting removed from the App Store, I'm guessing.

It’s a really, really big deal. In the end, it looks like Apple’s going to do it. Ever since COVID and some of the conflicts between the US and China, Apple has been trying to unwind what they have done in their collaboration with China and has been totally ineffective so far. Unfortunately, it does look like this is going to work. The question then becomes, what precedent has been set for the rest of the world? Is India next, for example, of having to have Indian registered apps be in the App Store, which would be a much bigger deal?

I mean, this is a pretty big deal, but the Great Firewall's pretty good. The Chinese have the best firewall of anybody; they have the most effective online censorship capability of any authoritarian state. India's blocking, for example, is quite weak and easily bypassed. If you end up with Apple following through for India and other countries, it will have a much larger effect, because those countries can't currently rely on their ability to control the network itself.
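[Editor's note: for readers curious how the storefront comparison Alex describes works in practice, here is a minimal sketch against Apple's public iTunes Lookup API. Checking one app ID per storefront like this only approximates the bulk manifest comparison the researchers do, and the app ID and country codes below are just illustrative examples.]

```python
# Minimal sketch: compare an app's availability across regional App Store
# storefronts via Apple's public iTunes Lookup API. A resultCount of 0 in
# one storefront but not another suggests the app is unavailable there.
import json
import urllib.request

LOOKUP_URL = "https://itunes.apple.com/lookup?id={app_id}&country={country}"

def listed_in_storefront(app_id: int, country: str) -> bool:
    # Returns True if the app appears in the given country's storefront.
    url = LOOKUP_URL.format(app_id=app_id, country=country)
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return data.get("resultCount", 0) > 0

if __name__ == "__main__":
    app_id = 333903271  # Illustrative: Twitter/X's public App Store ID.
    for country in ("us", "cn"):
        status = "listed" if listed_in_storefront(app_id, country) else "not listed"
        print(f"App {app_id}: {status} in the '{country}' storefront")
```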

 

Evelyn Douek:

Right. It's just another example of something we've talked about before: app stores are becoming another leverage point in the content moderation wars. We talk a lot about pressure on platforms, but the pressure on app stores is 100% going to increase in many places beyond China. Okay, heading over to the legal corner. The tech law Super Bowl is officially off and running. Last week, the Supreme Court granted cert in the NetChoice cases, which arise out of Texas and Florida laws that regulate social media platforms.

Now, we have talked about these laws a lot on the podcast, but I just want to stop, set the stage a little, and talk about what's at stake, because I'm sure we'll be talking about them a lot in the months to come. These are huge cases that could potentially radically transform the internet. The laws out of Texas and Florida do two things. First, they attempt to restrict the content moderation that platforms can do, and second, they place a bunch of transparency requirements on the platforms.

The court agreed to hear two questions, covering those two buckets. The first question concerns what you might hear called the must-carry provisions, because these provisions force platforms to carry content that they otherwise wouldn't. In Florida, this means they can't de-platform political candidates. I wonder what the politicians were thinking of when they passed that provision saying that these platforms can't kick off certain politicians and journalistic enterprises.

In Texas, the restriction is that platforms can't engage in viewpoint discrimination, obviously arising out of conservative claims that platforms are biased against conservative content. These are really important test cases, I guess, about state regulatory power.

The states argue that platforms are basically common carriers, like the telephone, something like that, and that they shouldn't be able to pick and choose what they carry on their services, whereas the platforms say, "No, no, no, we're like newspapers, and the content moderation choices that we make are our First Amendment right to shape the product and express our values on this forum as we want to." That is really important. There's a lot of daylight between those two positions.

There’s been a lot of daylight between the two courts, the lower courts that address this issue. The Florida law was struck down and the Texas law was upheld. We’re waiting to see what the Supreme Court will do on that. And then on the transparency provisions, the court only took the question of whether it was constitutional for the state’s to require platforms to give an individualized explanation. Every time they make a content moderation decision, they have to tell the user why they made that content moderation decision, what the reasoning was behind it.

They took up the challenge to that provision, but they left aside, they didn't take up, the challenge to the more generalized transparency requirements. The laws also require a bunch of other things: the platforms have to have a clearly accessible acceptable use policy, basically their community standards, published, and the Texas law would require them to release a biannual transparency report. All of those provisions have actually been left in place. They're going to go into effect, and that means they could still be challenged at a later date.

We’ve talked about this a bunch on the podcast, the various challenges that we’re seeing to transparency laws, but they’re not going to be at issue in this case. The only thing that’s going to be at issue is whether states can require platforms to give every single user basically an explanation for content moderation decisions. The platform argument there is essentially this is unduly burdensome. This is impossible for us to comply with.

YouTube says, "Look at all of these millions of comments that we moderate all the time. You can't possibly require us to give all of this reasoning. And in fact, you are only requiring us to do it in order to stop us from making content moderation decisions in the first place." That's the argument. These cases are about these laws, but they're also about so, so much more. They are about the general power of states to regulate platforms and to force transparency from some of the most powerful corporations in the world.

There’s a saying that hard cases make bad law, but this is also a case where easy cases might make bad law, because these are laws that have been enacted with pretty obvious political animus towards the company’s at issue and the content moderation decisions that they’re making. In a way that really offends my First Amendment sensibilities. These are politicians that come out and say, “We’ve got to get those big tech liberal elites because they’re moderating conservative content and we want them to promote more conservative content.”

That’s not the role that we envisage governments playing in our public sphere. On the other hand, I do worry that in an overreaction to that, we could get an overly broad ruling that basically incapacitates states from making any much more measured, much more reasonable regulations in the future. Whenever I talk to reporters about this, they always ask me, what’s going to happen? What is going to happen? I say, one of the reasons why I love my job, why I love what I do and why I love what I study is because I think this is one of these rare cases where the stakes are extremely high.

This is extremely important, but the predictions are really difficult and the politics around this are really weird. The political breakdowns around these issues are not as obvious or as clear as you might think. I think a lot of people might think, look, these are Texas and Florida laws. They’re coming out of conservative states. They’re motivated by conservative fears about big tech elites. We now have a six-three court. Aren’t they going to get a green light? I think that’s not quite true.

The laws here actually challenge some very longstanding conservative judicial philosophies around private power and the marketplace being the way that these things are sorted out and the idea that states just shouldn’t interfere with private discretion to make these kinds of decisions. That’s a much more longstanding fundamental, I guess, libertarian ethos of the First Amendment that I think will be motivating a lot of the justices in this case too.

Meanwhile, liberals have often been much more pro-government regulation of the marketplace, much more open to regulation of powerful corporations and the power that they wield. But of course, the politics for them will be challenging in these cases too. I genuinely don’t know what is going to happen. I can’t count noses in this case, but I do know that the outcome will be extremely important and could fundamentally reshape our public sphere for decades to come. One to watch.

 

Alex Stamos:

Okay. It’s a good time to be in your job. It feels to me, as the non-expert, that the Supreme Court is finally going to have to wrestle with what politicians learned the hard way over the last couple of years, which is that these broad statements about corporate responsibility, censorship, all this, they’re very hard because in the end, each of these individuals is angry when they see something they don’t like and they’re also angry when something they do like is taken down.

These broad questions of what the responsibilities of the platforms are and what power they should have to shape conversation are never easy ones, because very few people come at them from an actually consistent perspective. It is always based upon the exact situation, the exact speech.

A number of the conservatives who are complaining about censorship these days also have a history of sending lots and lots of letters to the platforms complaining about the stuff they see that they don't like, and are today passing laws to censor content seen in libraries and online by children, for example. The Supreme Court now has to deal with the fact that you can't magically pass… There's no Supreme Court decision that says the internet looks exactly like my own political positions.

 

Evelyn Douek:

I think it’s also they’re going to have to deal with the fact that these aren’t a new beast. I think that it’s a little disingenuous to say, “Oh, the First Amendment obviously decides this issue one way or the other.” Because in this argument about are social media platforms more like telephones or are they more like newspapers, I mean, the honest answer has to be they’re a little like one and they’re a little the other two, and they’re also like something else completely, entirely.

I think that the law hasn’t really had to confront that issue head on yet, but the debt has come true. Here we are. The First Amendment’s going to have to grapple with what are these very consequential, but entirely novel intermediaries that are so important in our public sphere now. I don’t know if you have a sports update. It’s a weird week for a sports update, but I do have one.

 

Alex Stamos:

Yeah, I just think thematically, I’m not sure I can make that pivot right now.

 

Evelyn Douek:

That’s fair. I have one story that I think actually is really inspirational from the weekend, which is the new marathon world record for men’s running. Kelvin Kiptum broke the world record running in Chicago over the weekend and running a staggering two hours and 35 seconds, which is just completely obliterating the previous record. He’s 23. I love the marathon. I think it is an incredible feat of human endurance and will. Why not leave on that somewhat optimistic, inspiring note about what is possible?

 

Alex Stamos:

That was good. That was a good pivot. Also, a good Greek word, marathon.

 

Evelyn Douek:

All right. And with that, this has been your Moderated Content weekly update. This show is available in all the usual places, including Apple Podcasts and Spotify. Show notes are available at law.stanford.edu/moderatedcontent. This episode wouldn’t be possible without the research and editorial assistance of John Perrino, policy analyst at the Stanford Internet Observatory, and is produced by the wonderful Brian Peltier. Special thanks also to Justin Fu and Rob Huffman. Take care, everyone.