MC Weekly Update 11/21: ClosedAI (Happy Thanksgiving!)

Alex and Evelyn discuss the ongoing drama at OpenAI and how to think about AI safety; whether app stores should be doing age verification; India’s jawboning of streaming platforms; rumours about rumours on TikTok; the scary threat to free expression coming from AGs investigating groups when Musk complains about them.

Show Notes

Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:

  • In one of the most surprising (and rapidly developing) tech stories of the year, Sam Altman was ousted as CEO of OpenAI. The reasons are still unclear, and the story was still changing as we were recording. But at least partially, the story is about AI safety, and what it means to pursue responsible development of AI – Karen Hao and Charlie Warzel / The Atlantic
  • Meta is advocating for online safety legislation that requires parental approval for children under 16 to download apps, shifting the burden to app stores for age verification and parental controls. – Sarah Perez / TechCrunch, Cristiano Lima and Naomi Nix / The Washington Post, Antigone Davis / Meta
  • Meta announced it is opening up its Content Library and API more broadly – Nick Clegg / Meta
  • Everything is content moderation, and India is the most important jurisdiction for the future of online free speech, streaming platform edition, with Netflix and Amazon Prime self-censoring the content they serve in the country – Gerry Shih and Anant Gupta / The Washington Post
  • Videos sympathetic to Osama bin Laden’s Letter to America on TikTok didn’t seem to go viral until the media drew attention to them. Would be nice to know for sure though! – Drew Harwell and Victoria Bisset / The Washington Post, Scott Nover / Slate
  • Musk launches a ridiculous lawsuit against Media Matters for reporting that Musk doesn’t like but admits is true. That’s not surprising at this point. But more surprising, and scary, are the state AGs who are willing to go along with it and have announced their own investigations. – Adi Robertson / The Verge

Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!

Transcript

Evelyn Douek:

After all of the windup in last week’s episode, I actually don’t know what happened with the big game. I was traveling over the weekend, and we talked a lot about the big game, and I have no idea. So, did we win?

Alex Stamos:

Who’s we?

Evelyn Douek:

I know. I don’t even know. I mean, obviously, we… I will tell the answer to that once I know who won.

Alex Stamos:

The Matildas did not win.

Evelyn Douek:

Damn it. Hello, and welcome to Moderated Content's weekly, slightly random, and not at all comprehensive news update from the world of trust and safety, with myself, Evelyn Douek, and Alex Stamos. We are recording roughly around noon on Tuesday, Pacific, which I think is important to mark, because we want to start with the only story that anyone's talking about in tech right now, which is the OpenAI debacle. And this story is moving at such a pace that it almost certainly… I cannot promise in any way that it'll probably-

Alex Stamos:

Probably changed since you said it was noon.

Evelyn Douek:

Exactly. Right, exactly. I can't read Twitter fast enough and speak at the same time. I need to work on my multitasking skills to be able to report on this while remaining current. So, the latest update at the moment, and I'm not going to give too much background, because I assume all of our listeners will have followed the broad brush of this story, is that ousted CEO Sam Altman is back in talks with the OpenAI board about his possible return from where he now sits, at Microsoft, where he's been hired since being ousted on Friday. And his replacement CEO, Emmett Shear, has told folks that he will leave the company if the board can't provide evidence of wrongdoing, and the reasons why they fired Altman in the first place. They had given some vague reasoning on Friday about him not being adequately open and candid with the board. Okay. We are not an AI podcast, or a corporate intrigue podcast, but this does have a safety angle. And so, Alex, why don't you give our listeners the broad contours of the safety story here.

Alex Stamos:

Yeah. So, I mean, this story is crazy, and I think we're going to have to do follow-ups here, because the corporate governance components of OpenAI are also fascinating: OpenAI Global, LLC owned by a holding company, which is controlled by a nonprofit, that's all nuts. On the safety side, this is where the word safety is overloaded, because the way you and I talk about trust and safety, and the vast majority of trust and safety groups in Silicon Valley, is about specific misuses of a technology by human beings to harm other human beings. So, you use this social media site to send death threats, that's bad. You use this video site to send child abuse materials, that's bad. When you talk about safety in an AI context, there are the same kinds of safety issues, which are also alignment issues: is this model racist? Can it be used to create phishing? Can it be used to create disinformation?

There's a lot of people who see that kind of safety as being pushed only by the enemies of progress. But in truth, there's a real commercial need for it, right? If you are going to pay a bunch of money to run a GPT-4 model up in Microsoft Azure, and you're going to have it interact with your customers, and you're a big company, you don't want that thing to then cuss at them, right? For all the discussion of creating Grok and these non-woke AIs, in reality, you're just creating AIs that are professional and aligned, so that they can interact with your employees or your customers, if you are paying for it, right? And in the end, most of these companies want to make money. But there's another kind of safety, which is the people who believe the AI is going to kill everybody, or destroy humanity effectively.

Evelyn Douek:

That sounds a little unsafe to me.

Alex Stamos:

It sounds a little unsafe, but it’s also just like a… It’s a completely different category of issue. It’s one that I don’t feel as comfortable talking about, and a lot of people… There’s people who talk about that who are just nut jobs, and who you can dismiss. And there’s really smart people, like the chief scientist of OpenAI, who really get fixated on the possibility. And so for people who are deep in this space, who are seeing what is coming in the next 12 to 18 months, for them to be really worried about these safety issues does give me pause that I have to…

I am naturally doubtful of those kinds of claims, but the fact that really smart people make them, people who are very hands-on and see the inside of these companies, makes me a little worried. And that seems to be the safety issue that was pushed here. On the trust and safety side, normal safety, OpenAI is the leader, by far. They have been the most responsible company. They've had dedicated teams who do this kind of testing. They write papers. In fact, they wrote a paper on disinformation and AI models that our colleagues at SIO helped contribute to. They do a lot of great work. But in the end, if you believe that AGI is going to end humanity, that doesn't really matter. If you speak to the AGI-

Evelyn Douek:

If your chatbot doesn’t swear, right?

Alex Stamos:

If the chatbot doesn’t swear. Yeah. And that seems to be the-

Evelyn Douek:

It’s very polite, while it’s incinerating all the humans, it’s great.

Alex Stamos:

And that seems to be the motivation of the chief scientist at OpenAI, who triggered this whole thing and went to the board, and got them to do this little coup. So, it’s hard to make a statement right now, because we don’t know all the details. I mean, this is going to be, it’ll be fascinating to see which of the streamers come out with the TV show, or the movie about this board meeting.

Evelyn Douek:

All of them will. I mean, this is Succession redux. Yeah.

Alex Stamos:

Every big journalism writer in San Francisco is writing to their editor right now, saying-

Evelyn Douek:

I got a great idea.

Alex Stamos:

Mike Isaac, Davey Alba, Sheera Frenkel, every single one of them is writing to their editor saying, "I need three months off to write a book."

Evelyn Douek:

Right.

Alex Stamos:

And then they're going to sell it to Netflix, and by next spring, we'll be watching it. So, until we get more reporting on exactly what happened, I think what would be really interesting is, one, is there a specific thing? Did the board just disagree with Altman's stance at DevDay, and some of his things, or was there a specific decision he made, and is there something on the roadmap that worries them? That would be a fascinating thing if they're like, "Oh, GPT-5 just did this thing." And then Ilya goes to the board and says, "You're not going to believe what GPT-5 just did, and we had to kill it. I had to pull the plug, or I threw water on the console," whatever the story is. And Sam's like, "Oh, it's not a big deal," and that's why they fired him. Unless there's something like that, it really does look like the board is misaligned themselves, and misstepped here. But it's hard to make that judgment without knowing exactly what happened.

Evelyn Douek:

Yeah. It’s wild that we know so little about what’s going on when, ostensibly, the concerns are about the end of humanity. It seems like something that we should want a little bit more visibility into. At this stage, it seems like barely anyone knows why Altman was fired. It’s incredible that it hasn’t come out. With all of the Silicon Valley reporters reporting around the clock, there’s still no exact reason why he was fired.

Alex Stamos:

Oh, they're staking out the office. I think it was The New York Times that had a TikTok of when pizza got delivered to the OpenAI office on Sunday.

Evelyn Douek:

Boba tea. Yeah, exactly.

Alex Stamos:

Yeah. I am absolutely sure. I mean, they're obviously watching the entrances and exits. There's somebody probably parked outside Sam Altman's house seeing who he's meeting with. And still, with all that, we don't know exactly what happened in that room. It's very old school, in a way. It's hilarious, right? Because it used to be that if IBM made a decision in the '70s, you would not read within six hours exactly what conversation happened. But that's what happens at Facebook, and Google, and all these other places, and the fact that OpenAI is still a black box, it's kind of-

Evelyn Douek:

Yeah. Well, ClosedAI. I'm sorry, I couldn't even commit to that joke. Yeah, thank you. I mean, it's interesting to hear you say that you feel uncomfortable talking about it, because I sort of do as well, having a good sense of how seriously to take these claims. Do you personally have a P(doom), which is what people call the probability that AGI will end humanity? How do you think about this?

Alex Stamos:

The reason I'm trying to be modest about this is, one, the AI revolution is fascinating for people of my generation, people who did computer science and electrical engineering in the late '90s, young Gen X, elder millennials. The AI revolution is the first technological step for which our educations completely did not prepare us, right? I went to a top-three school on this, and everything up to this point, silicon design, graphics, security systems, operating systems, it's all the same stuff. It's just different applications of it, and faster, and smaller, and better. The AI stuff is just completely and totally different, right? And so to get at all cognizant, I've been, basically part-time, taking the Stanford classes on this, and reading the books, and watching Andrew Ng lectures, and going back and reading statistics books. We didn't learn the basic math; you were not required to learn the mathematical underpinnings of how, statistically, transformer models and some of these models work.

And so that just gives me pause, because I don't feel like I have the grasp of the technology in the same way I would on a traditional systems security issue. It just always seems to me that for every technology, you have the doomers, but it's very rare for the people who are on the cutting edge to be the doomers. And so that's the one thing that really does scare me here. The people who were on the cutting edge of building social media networks were never saying, "This is going to end humanity." They saw that there were some possible downsides, but they were generally very positive about technology. If you end up in a situation where a decent number of people who work directly hands-on, and see what's going on, are this scared, then that's worrying to me. So, I don't have a personal probability, but looking at smart people that I believe I should trust, it is a little bit worrying.

That being said, functionally, how do you end up in a situation where one of these models is so smart that it ends up doing something bad to humanity? Part of having the discussion, I think, is that we just have to be thoughtful about how we use these systems. It's like watching Terminator, and you're yelling at the screen, "Do not hook Skynet up to the nuclear weapons. That's just a bad idea." In the end, human beings are deciding how these things are being used, right? But I get more worried about the short-term impacts.

I think there's a lot of positive impacts in cybersecurity. On the cyber side, we don't have enough people, and so having computers do a lot more work for us is great. But we're just starting to see the use of AI for offensive purposes, and I think there are going to be fascinating uses: finding vulnerabilities, finding bugs, writing exploit code, writing very smart malware that does not need a command and control channel to operate. With these kinds of things, we're at just the start, and so I do see, not the end of humanity, but some real potential risk over the next five years, just in the traditional cyber realm.

Evelyn Douek:

Fascinating. Yeah. Okay. Well, as you say, we will probably have to have follow-up conversations about this, as we see what happens over the coming days. Our commiserations to all tech reporters and their Thanksgiving plans. I guess, enjoy a side of tech news with your turkey, I suppose.

Alex Stamos:

Now you know what it's like to be an incident responder over Thanksgiving-

Evelyn Douek:

Right.

Alex Stamos:

… or working in DevOps, right? The people who work at walmart.com do not get to take Thanksgiving off, right?

Evelyn Douek:

Yeah, that's a shame. And as you were saying before we recorded, it's not Thanksgiving in Russia and China, so they don't take the time off themselves.

Alex Stamos:

Holidays are always huge for breaches. Yes.

Evelyn Douek:

Right.

Alex Stamos:

The quiet times are Lunar New Year, and Orthodox Christmas. But those pretty much never line up with Thanksgiving, or Western Christmas, or Western New Year's.

Evelyn Douek:

Right. Well, busy Thanksgiving for a lot of people then. Okay. So, from something that isn't necessarily squarely in your wheelhouse to something that is: Meta has been talking this week about its proposals for online safety legislation that would require parental approval for children under 16 to download apps. The idea being that this would shift the burden of verifying users' ages from the individual apps to the app stores, so, for example, the Apple App Store and Google Play Store. Meta is selling this as a way to ease the burden on parents: rather than having to approve every single app download as kids move across different apps, and things like that, they can just approve it at the app store level. The cynical take is that this means Meta gets let off the hook, and it moves the burden upstream from their individual app to the app stores, to Apple and Google. So, what's your take on this?

Alex Stamos:

So, I think they have it half right. I do think there needs to be more responsibility on the app stores and the platforms, for a very practical reason, which is that, speaking as a parent, the moment at which you have the most leverage over a kid's device is when you buy it and you set it up initially. And so I do think there needs to be a platform-level control here, but I would do a different control. What I would like to see is a strong age identity passed through to all the apps: you set the age of a device's user when it's initially set up, and it's very hard to reset that. It takes a hard reset of the phone, and perhaps, if it's something on a family plan, both Google and Apple have the idea of what families look like within their ecosystems, it gives you a notification as a parent. And then you pass that through.

So, instead of having it on a per-app basis, when you download Instagram, Instagram gets from the operating system: this person is 12. And then Instagram can make the decision that you're not allowed on, or the decision that you're allowed into Instagram Kids, or a shunted, limited version. And then, oh, you're 13, you're allowed into Instagram, but we're turning off DMs; you're 14, you're allowed to have DMs, but you can only DM other people who are under 15. It's not perfect, but it is much better than what is envisioned by a bunch of the child safety legislation, which is effectively every single adult has to show ID to get online, right?
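A minimal sketch, in Python, of the per-device age pass-through Alex is describing here. Every name is hypothetical (this is not Apple's or Google's actual SDK), and the thresholds simply mirror his Instagram example:

    # Hypothetical sketch: the OS exposes a read-only age set at device
    # setup, and each app decides what experience to offer against it.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class DeviceAgeSignal:
        # Set when the device is first configured; changing it requires
        # a full factory reset, per Alex's description.
        age: int

    def os_get_device_age() -> DeviceAgeSignal:
        # Stand-in for a platform SDK call that does not exist today.
        return DeviceAgeSignal(age=12)

    def app_policy(signal: DeviceAgeSignal) -> dict:
        # The app, not the app store, makes the call on what to allow.
        if signal.age < 13:
            return {"allowed": False, "alternative": "kids_mode"}
        if signal.age == 13:
            return {"allowed": True, "dms": False}
        if signal.age == 14:
            return {"allowed": True, "dms": True, "dm_peers_under": 15}
        return {"allowed": True, "dms": True}

    print(app_policy(os_get_device_age()))  # {'allowed': False, 'alternative': 'kids_mode'}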

Evelyn Douek:

Right.

Alex Stamos:

And so, as an option, it is better than the status quo, not as aggressive, and much better from a civil liberties perspective than what is being envisioned by CSSA and other laws. I think having it on a per-device basis makes sense. But I don't like their implementation. I'd rather pass it through, because different apps… Some apps can still let you operate; they'll just be different. Like YouTube has a kids mode that is actually reasonably effective. You can lock your kids into it, if you know how to change your DNS settings, or if you run your own router. That is not a reasonable expectation for most folks, but if I was able to say this iPad belongs to a nine-year-old, then YouTube just hard locks into kids mode. It cannot be unlocked on that device, until you hard reset the entire device.
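The DNS trick Alex mentions is the network-wide YouTube Restricted Mode that Google documents: have your resolver answer YouTube's hostnames with Google's published restricted-mode address. A sketch of what that could look like in a dnsmasq-based router config; the IP below is the strict-mode value Google has published, but verify it against their current support docs before relying on it:

    # Force YouTube Restricted Mode for every device on this network by
    # resolving YouTube hostnames to the published restrict.youtube.com IP.
    # 216.239.38.120 is the strict-mode address; confirm against Google's
    # current documentation before deploying.
    address=/www.youtube.com/216.239.38.120
    address=/m.youtube.com/216.239.38.120
    address=/youtubei.googleapis.com/216.239.38.120
    address=/youtube.googleapis.com/216.239.38.120
    address=/www.youtube-nocookie.com/216.239.38.120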

Evelyn Douek:

Right. Yeah. And, I mean, the other thing about this that's obviously correct is the push for federal legislation on this issue, rather than what we currently have, which is going to be a patchwork of state laws requiring different standards of verification, and different ages, and different sorts of due diligence. All of that is going to be an absolute mess for everyone, and allow-

Alex Stamos:

Total disaster.

Evelyn Douek:

Yeah. Allow safety arbitrage, and all sorts of things that are just not going to be effective. But that is currently the situation, where we have these states passing age verification bills that are probably going to be unconstitutional, and even if they weren't, would be totally ineffective. So, it'll be interesting to see what happens with that. Meta is also leading with a bunch of announcements at the moment. This one really struck me as odd: it announced today that it is opening up its Content Library and API more broadly, describing it as the most comprehensive access to publicly available content across Facebook and Instagram of anything that they've built to date, allowing researchers access, in almost real-time, to what's going on across the platform. We need a little bit more information about what's going on, but this seems pretty good, actually, and a welcome step in the direction of more transparency, at a moment where we see basically everyone retreating from it.

And so what I can’t work out is why they announced it today in the Thanksgiving dead zone, while everyone’s talking about OpenAI, when this actually seems like a pretty good measure.

Alex Stamos:

Yeah. I mean, I think it's a great move. Meta is completely swimming against the current here, where other companies, as we'll discuss, are limiting and/or threatening researchers, and it's clearly in response to the DSA, although they're doing it globally, which is great. I mean, one of our fears with the DSA is that Americans will not benefit from the transparency parts, and clearly they're trying to be compliant with EU law, but then applying the same standard globally. I think that's fantastic. But like you said, they dumped it in what's normally a dead news time before Thanksgiving, when whatever bandwidth there is, is completely saturated by people who are, all of a sudden, experts in nonprofit board dynamics.

It makes me wonder. I mean, it's a big company, and so you might just have a deadline. There might be a letter that's due to a senator. I mean, there's all kinds of situations that might trigger a weird internal deadline for something like this, or they just had nothing on the news calendar this week. The truly negative stuff will be coming out on Thanksgiving, which is always a fun time to look at the press wires; that's when you'll hear about breaches, and bad things, and human rights reports. We'll see what actually gets released from there-

Evelyn Douek:

Tune in next week.

Alex Stamos:

… in a couple of days.

Evelyn Douek:

Right.

Alex Stamos:

Yeah.

Evelyn Douek:

Okay, excellent. Yeah. No, it'll be exciting to see what research comes out of that, and if it's truly a step forward. Small news bite here, but it's very on brand for us, in our everything is content moderation theme, and also our please keep an eye on what's happening in India theme. Another great story in The Washington Post this week about how the Indian government has also brought Netflix and Amazon Prime streaming to heel, using the threat of criminal cases and mass public pressure to shape what Indian content gets produced. There were a couple of high-profile incidents where they got specific shows pulled, and then, of course, the way that this works is that, from then on, these platforms start to self-censor preemptively to avoid getting in trouble with the government for content that might offend the BJP, or be anti-nationalist, or make offensive references to religion, the kinds of things that have been getting them into trouble.

So again, not at all surprising that this is also happening to streaming platforms; it's the same theme that we've seen happening with the social media platforms. Everything is content moderation, including these services.

Alex Stamos:

Yeah. I see this as, I mean, it's interesting, because now we have content moderation lining up with an ongoing problem, which is American multinational media companies taking into account the concerns of overseas authoritarians in what they make, right?

Evelyn Douek:

Right.

Alex Stamos:

Which has always been a problem with the movie studios now for decades. The movie studios have decided, oh, they can make a gazillion dollars overseas, especially in China, but you have to create content that the Chinese authorities will approve of, and this goes as far as Disney making a Mulan that is filmed in Xinjiang, where there's an ethnic cleansing happening. There's a remake of Red Dawn. Have you ever seen Red Dawn, the 1984 version?

Evelyn Douek:

I have not, no.

Alex Stamos:

Oh my God, I can't believe it. One of the greatest Reagan-era movies ever made, no, the greatest Reagan-era movie ever made, Red Dawn: a group of American teenagers defend Colorado from a Soviet invasion. Right.

Evelyn Douek:

Fantastic. I’ve got my Thanksgiving movie all lined up now, thank you.

Alex Stamos:

Oh, absolutely. And it's got every '80s actor in it. You'll watch it, and it'll be like, "Oh my God, I can't believe all these people were in this movie." It's so bad, it's good. I'm not going to tell you anything more about it. It's just incredible.

Evelyn Douek:

Right, and with good drinking games.

Alex Stamos:

We'll do a viewing. Right. "Avenge me!" Anyway, sorry. There are lots of lines. It was remade in 2012, and clearly the Soviet Union didn't exist anymore, so you need a bad guy who could effectively invade the United States. But it's being made by a major studio, so they can't use China, so they end up saying that North Korea invaded the US, which is like, there's no plausible world in which North Korea could possibly invade-

Evelyn Douek:

The teenagers actually might be able to handle that one.

Alex Stamos:

Yes. Right. Right. I mean, North Korea has a massive standing army, but no way to get to North America.

Evelyn Douek:

Right.

Alex Stamos:

Right? And so, it's just this whole crazy thing, that there are all these problems with the movie, but it just starts with this ridiculous premise, because they can't talk about an actual possible adversary who theoretically could invade the United States, because they also buy a lot of movies. So, I just see this as in the same vein. It's unfortunate, but it's clear that American cultural hegemony, which everybody complains about, is going to be tempered by the desires of overseas viewers, which is totally reasonable, but then also by the autocracies, either the hard autocracies like the PRC, or the trending autocracies like the Indian government.

Evelyn Douek:

Right. Yeah. I mean, that's just the really sad thing about this, right? It's been this way with China for a very long time, and China is an autocracy, so you don't expect, I guess, any different. But it is just increasingly sad to see this happening with India, which was, and ostensibly still is, the world's largest democracy, but increasingly in name only.

Alex Stamos:

It'll be interesting to see whether it leaks back into what we make in the US, right? Because that's what's happening with China: even if you're not shipping your movie to China, you can't make a movie that's critical of China anymore for American audiences. And so it'll be interesting to see if India gets to that level.

Evelyn Douek:

Right. Yeah. I mean, with a lot of these series, The Washington Post reporting seems to be not just that they're not offering them in India, it's that they're shelving these projects entirely, and not offering them on their services at all. So, that's one sort of early answer to that question. Okay. Continuing a discussion that we had last week about TikTok: the story causing a big fuss this week was teens posting videos on TikTok expressing sympathy with Osama bin Laden's Letter to America. That was a two-decade-old letter he wrote critiquing the United States and explaining the September 11th attacks, or not explaining, but trying to, or at least giving his version of events. And the fear was that this was something that a whole bunch of young people had gotten into on TikTok, and that it was spreading, with teens expressing sympathy with the most famous, infamous, successful terrorist of the last century.

And so, there was reporting about this, about, oh, kids these days, but also about whether TikTok was doing enough to moderate it. And then there's been the backlash to the reporting saying, actually, there's no evidence that this was going viral, or doing real numbers, before the media reported on it and drew attention to it, at which point the reporting on it absolutely did go viral. And we're in the same situation that we were in last week, where we were saying, "Well, it would be really nice to know which version of events is accurate here." But we just don't have enough insight into what's going on on the platform. Have you seen anything to suggest otherwise, Alex?

Alex Stamos:

Yeah. So, one, I agree. I think this is nutpicking. There's no evidence that this was really trending among young people, until it got picked up by Fox News and such, and then the numbers went off the charts. One of your problems here, like you said, is it's very hard to know what's trending on TikTok. There are some numbers available just from viewing the app. As for programmatic access, there's an API, but I do not know of any legitimate researchers who use it, because its terms are still not acceptable to academic researchers. So, I think TikTok needs to work on that, because this is exactly the kind of thing where you would like credible voices, who care about peer review and care about their reputation, to be able to come out and say, "Yeah, we looked, and actually this is not that accurate." But you can't really do that, because it's very hard to tell what's really trending on TikTok from an API perspective.
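To make the measurement problem concrete, here is a hypothetical sketch of what a researcher would want to do: poll view counts under a topic over time to see whether it is actually accelerating. The endpoint, fields, and auth here are illustrative assumptions, not TikTok's actual Research API schema or terms:

    import time
    import requests  # third-party: pip install requests

    # Illustrative endpoint and token, not a real API.
    API = "https://api.example.com/research/videos/query"
    TOKEN = "YOUR_ACCESS_TOKEN"

    def hashtag_views(tag: str) -> int:
        # Sum reported view counts for recent videos under one hashtag.
        resp = requests.post(
            API,
            headers={"Authorization": f"Bearer {TOKEN}"},
            json={"query": {"hashtag": tag}, "fields": ["view_count"]},
            timeout=30,
        )
        resp.raise_for_status()
        return sum(v["view_count"] for v in resp.json().get("videos", []))

    # Sampling hourly turns a single scary number into a trend line,
    # which is what the "did the media make this viral?" question needs.
    for _ in range(4):
        print(hashtag_views("lettertoamerica"))
        time.sleep(3600)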

The other interesting thing about TikTok is, everybody's experience is so incredibly customized that even if you have an API, knowing how often people are running into this in a normal day, that is something that TikTok can calculate themselves, but it would be very hard to design an API that'll allow you to really figure that out. Yeah. From a content moderation perspective, they have gone and they're taking down all these videos. I think this has been a big wake-up for TikTok, which has just traditionally not been a news outlet, right?

TikTok, one of the things kids liked about it was, you could avoid Biden versus Trump and all the other stuff that dominates Twitter, and Facebook, and where all the olds are. But the teenagers are really getting motivated/radicalized by the Hamas-Israel conflict, and I think this is not TikTok's fault. I think they're just reflecting what young people think, whether you agree with it or not, and I'm not. You and I perhaps have disagreements here. I'm pretty anti some of these positions, but people are allowed to have them, and there's no evidence that TikTok is actually the one driving that. I think they're just reflecting it.

Evelyn Douek:

Yeah. Right. There was a Pew survey released in the last week or so that said 43% of US TikTok users say they regularly get news from the app, up from 22% in 2020, so nearly double, and not at all surprising. We've talked about this in the context of Threads as well. These are apps that aren't really trying to be news platforms, unlike maybe the early versions of Facebook and Twitter that really leaned into being a place where you went to get current affairs. But you can't avoid it. Once you have a critical mass of people there, and they're using it regularly, the news comes, and it's something that they're going to have to deal with, and they're going to have to have policies around it. As you say, their policy was to be aggressively removing this content. Even more remarkably, The Guardian, the news website, removed the Letter to America from its website, so that people couldn't link to it and find it there, which is quite a remarkable thing.

Alex Stamos:

That I thought was stupid, right? This is a historical document, like Mein Kampf, or the Unabomber Manifesto. When you get rid of historical documents, it's ridiculous, and people are pointing out that the Director of National Intelligence links to it, right? I think you can get it from cia.gov or dni.gov, because you can go to these archives of all of Bin Laden's writings. And the DNI does not put it up there because they agree with it. What I would've preferred The Guardian to do is just add a header of like, "Hi, 19-year-old, you don't remember 9/11? Here's a link to our coverage of that day." Right?

Evelyn Douek:

Right.

Alex Stamos:

That would've been the appropriate use of that, instead of driving them to Wikipedia, and all the other places where it's available. So, that was just bizarre. I mean, I understand that that was a weird feeling for The Guardian, but it would've been a cool opportunity for education of young people who just don't have… Their only exposure, apparently, to Bin Laden is reading this letter. I mean, look, I'm not a terrorism expert, but Brian Fishman, our friend, has written a lot about this, and he's a real expert in Al-Qaeda: how Al-Qaeda had both their Islamist beliefs, the everybody has to convert to Islam/we hate Jewish people kind, and other kinds of Islamist beliefs, and then also these really weird beliefs about global warming.

And so it's just very weird to read this letter, and to be like, "Oh, he was right about global warming," and then to ignore the part which is like, "Oh, and everybody should convert to the most extreme version of Islam." It is just a very selective reading of even this letter, even if you think this letter is an accurate representation of his views, which it's not. It's a piece of propaganda. Even then, it was pretty specific, and it would've been nice to see The Guardian actually just highlight that instead of deleting it.

Evelyn Douek:

Yeah. I think our colleague, Renee DiResta, said it best to The Washington Post, where she said, "Don't turn the long public ravings of a terrorist into forbidden knowledge, something people feel excited to go rediscover," which I think is right. It was a bizarre choice by The Guardian. I understand the idea that when it's happening on TikTok, they aren't the ones providing the context. But as you say, there are other options for them to provide that context rather than just removing the content altogether. I want to go back to the API point, because I think this is really important, because TikTok does say, "Look, we have a research API, and so all of these questions could ostensibly be…" Is this one way?

We were just praising Facebook, Meta, for having this API. Is it fair to say that TikTok has the same? I think you mentioned a bunch of the limitations. I mean, one of them is that you have to propose specific research topics, and be an accredited researcher at a university, and then you hear back three to four weeks later about whether your research proposal has been approved. And in this case, I don't know what happens when most of the content is being deleted. I don't know exactly what researchers get access to, but you also mentioned some other terms that are problematic, which I think it's important to be specific about. So, do you want to explain what that is?

Alex Stamos:

Yeah. I mean, so I want to be careful here, because this was an earlier version, and I'm no longer in charge of this part of the Stanford Internet Observatory. I'm just a teacher.

Evelyn Douek:

Right. And a podcast host.

Alex Stamos:

I can give you the names of people. And a podcast host, yeah. But in the original terms that Stanford looked at, there were requirements around review of what was being written in the research, that no academic researcher would possibly agree to. And so, hopefully, they are working to remove that. And they just need to have workable terms; from their perspective, they have legitimate privacy issues, right? Unfortunately, GDPR is not very compatible with academic research, something we've talked about on this podcast before. The DSA is supposed to fix some of that, but it doesn't do so cleanly.

But in the end, it's up to TikTok to create terms that are reasonable for normal researchers. And it's certainly doable, because there's no company that's had more privacy lawsuits, and therefore has a larger privacy legal team, than Meta, right? Meta's privacy legal team is larger than the general counsel team at most tech companies, and they were able to find a way, just as we were talking about, to allow academic researchers to get access, and to have a reasonable level of risk management with the Europeans. So, hopefully TikTok gets there.

Evelyn Douek:

Okay. So now, we have to go to a place we don't actually tend to go to that often anymore, but we have to go to our X Twitter corner. Yeah. Which is a truly appropriate sound. I mean, it always is, but especially for this particular update. So, there's been a ton of stuff here with Musk's totally undeniable, blatant antisemitism, and personal abhorrent tweets and behavior, which is not something that we need to give more airtime or focus here. But one of the things that I think is worth covering is this lawsuit that Musk has announced in the past couple of days against Media Matters. This is almost basically a redux of a story that we talked about before, where X was suing CCDH for publishing research about objectionable content that it found on its platform. In this case, last week, Media Matters released a report about how X has been placing ads for Apple, and IBM, and a bunch of other large advertisers against pro-Nazi content, and I don't say that euphemistically, or sort of generally.

This is like quotes from Adolf Hitler, and other defenses of Nazism, that these companies' ads were appearing against. And then a bunch of these companies pulled their advertising from the platform, including Apple, and a bunch of other extremely large advertisers. In response, Musk has announced that the company is suing Media Matters, a wonderful demonstration of legal brilliance. The lawsuit itself actually confirms all of the reporting from Media Matters. It says, yes, these ads did appear against this content. But the allegation is that Media Matters set up a specific account, and curated their feed specifically, and refreshed the feed a number of times, in order to get those screenshots. But that doesn't deny the fact that X, the platform, is still serving ads against very brand-unsafe material. And these were some examples that Media Matters found, but obviously, they wouldn't be the only examples on the platform. Alex, just want to check in, is that really pro-free speechy, that approach?

Alex Stamos:

Gosh, yeah. So, somebody says something about you, and then you want to use the courts to punish them for saying something critical about you. Is that pro-free speech?

Evelyn Douek:

Which you admit is true.

Alex Stamos:

Which you admit is true. Yeah. No, I would say not. But, I mean, you are an actual lawyer. I mean, you're an actual professor teaching free speech issues, not to humans, as we've determined, but to GPTs, law GPTs. But what do you tell those bots? Would you say that it's free speechy to use the power of the courts, at least in the civil context, to try to punish true criticisms of you?

Evelyn Douek:

Yeah. I mean, so there’s a joke that whenever you ask a lawyer a question, the answer is always going to be, it depends.

Alex Stamos:

Yes.

Evelyn Douek:

It depends. This is one of those rare ones where, no, it's not "it depends." This is just not pro-free speech behavior. This is anathema to how the marketplace of ideas is supposed to work.

Alex Stamos:

Right. Okay. So, we're right back in the weird place where, one, I am somewhat critical of the original research, just like with the CCDH work, which I did not like. CCDH and Media Matters are both advocacy orgs, right? They're advocacy orgs that want specific political outcomes, that are politically motivated, and they use research in support of that. I think that's totally fine. This Media Matters research, there's nothing in it that seems to be inaccurate, but it is not the full context here, and it is very aggressive in its language. If this was submitted as a paper to a journal, to the Journal of Online Trust and Safety, and I was tagged to be a reviewer, I would recommend rejecting it. But it's not an academic paper. It's a blog post that has accurate screenshots, and accurate statements, and they did not explain all their methodology, and that's one of the reasons why it's not legitimate academic research. But it is clearly critical free speech. It is politically salient free speech, and it's not a lie. It is accurate statements.

As to the criticisms of how they did it: yes, they created a curated account where they specifically followed white nationalist content, and then they hit reload until they got the screenshots they wanted. But one, you're going to have to do that, because in any specific case where you're looking at ads, a huge amount of the ads on Twitter are just crap right now, right? They're for weight loss things, and buy gold, and the kind of stuff you see advertised on Fox News at 2:00 AM. And so if you want to get an Amazon ad or something, you're going to have to hit reload. And they're not saying this is the normal experience if you just create an account.

They're saying this stuff will show up next to ads, and they're completely truthful in that. What they're trying to do is simulate what happens across the entire platform, with millions of people utilizing it in real time, and there's no really great way to do that outside of the companies, right? Again, I would not accept this as an academic paper. But if you wanted to do this kind of thing in an academic setting, and be a little less biased in your language, and be much more specific about your methodology, I think there actually is legitimate research there. And certainly it is not defamatory for them to say, "This is what we saw when we did this," and suing over it is completely against any kind of free speech principle. If you believe in free speech, then what you believe is that speech should counter other speech.

Evelyn Douek:

Right.

Alex Stamos:

He did that. He came out and said, "This is why this is not true." Now, the reason it doesn't really matter what their methodology says is that he's also replying to horrible anti-Semitic tweets, saying, "You have said the actual truth," right? And so this would not be as big a deal if Media Matters just came out and said, "Here are these screenshots." He could have come out, or Yaccarino could have come out and said, "Hey, these are the actual statistics," and it would be kind of a research methodology argument. But the fact that it came out at the same time that he is personally amplifying anti-Semitic, white supremacist arguments makes it a huge deal, and makes it the breaking point, and it also demonstrates how big of a deal this is for Twitter. This is it, right?

We've had a bunch of these moments with advertisers. I think this is the point of no return for advertisers, as long as Musk is involved, because if you are Apple, and you're spending $100 million a year with Twitter, it is clearly a negative spend now, and it is too high risk: no matter what Yaccarino tells you, it is possible that Musk is going to come out and blow it all away from his personal account, at 2:00 AM, with his phone.

Evelyn Douek:

Right. And I mean, this is really important. First of all, it's not defamatory. It's all true. But second of all, in terms of damages and causation, there are many other reasons why advertisers might be fleeing the platform at this particular moment. In fact, the Media Matters report was probably not super significant, aside from all of the press stories covering the fact that Musk was replying to literal anti-Semitic content. And this is not like, "Wait, is that anti-Semitic?" That's just-

Alex Stamos:

And not just lightly anti-Semitic. This is a theory that empowered the worst mass shooting of Jewish people on American soil.

Evelyn Douek:

Just to be very clear to the listeners, not a borderline, not a borderline question. Absolutely, just well, well, well beyond the pale.

Alex Stamos:

You and I have been talking about our students being anti-Semitic through some of their pro-Palestinian stuff, and those are arguable. There’s no argument about what he was, I don’t even want to say it. I’m not even going to say it on our podcast-

Evelyn Douek:

Right, exactly.

Alex Stamos:

… of how it’s just so incredibly bad.

Evelyn Douek:

Okay. So, all of that is an incredibly stupid, incredibly ridiculous use of the courts. At that point, this is just Musk being Musk, and we've talked about all of that before. But here's the scary part, which is that the government gets involved. So then Stephen Miller, in response to this, tweets, "Oh, by the way, fraud is both civil, and criminal," and calls on conservative state attorneys general to look into Media Matters and the "ostensible fraud" that they're pulling here in writing this report, and Musk sort of amplifies this. And then we have Missouri's attorney general weighing in, saying that he will look into this. Now, Missouri. Of Missouri v. Biden, which might ring a bell to listeners, because we have talked about it a lot on this podcast, Missouri v. Biden, now, Murthy v., I don't…

No, sorry. Murthy v. Missouri at the Supreme Court. This is the big jawboning case, where a bunch of plaintiffs are alleging that the platforms had way too close a relationship with governments, and were doing their bidding far too much. Here, we have the Attorney General of Missouri saying that they will launch an investigation into Media Matters, simply because Musk was tweeting about how he didn't like what they said. Particularly egregious coming from Missouri. But not to be one-upped, Texas Attorney General Ken Paxton also weighed in, and announced an official investigation into Media Matters for potential fraudulent activity, as well as saying that they are trying to limit freedom by reducing participation in the public square through their work.

This is terrifying stuff to me, honestly. This is what a real threat to free speech looks like: attorneys general launching investigations into a private organization for clearly, indisputably, totally protected speech. Regardless of whether you think the methodology is accurate, or worthwhile, I think this is clearly intended to chill this kind of research, and that's a very sad thing, and a very scary thing.

Alex Stamos:

Yeah, I'm not sure what to… I mean, this is terrifying. This is the use of the power of the state, and taxpayer money, to go after speech criticizing the political allies of those elected officials. It is exactly what the First Amendment was created to protect, right? Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof, or abridging the freedom of speech, or of the press. It doesn't say Media Matters shall not criticize Elon Musk, right? And I know it says Congress, but you could tell our listeners that applies to the states too, right?

Evelyn Douek:

It does.

Alex Stamos:

There’s a couple of court cases about that, right?

Evelyn Douek:

Yes, famously.

Alex Stamos:

Famously.

Evelyn Douek:

Yeah.

Alex Stamos:

I can’t name them. But I haven’t taken your class, so.

Evelyn Douek:

Yeah. I mean, this is just not a close question. There is-

Alex Stamos:

So, how do you think they're going to get protection? If you're being attacked by a state, how do you get protected from this kind of abuse of political power? I mean, it's shocking. You would hope that the voters of Missouri, and Texas, would understand how dangerous it is, because there are blue states too. Having the California AG go after Fox News on the same theory would also be a terrifying misuse of power. And that's exactly where we're going to end up, if we go down this path.

Evelyn Douek:

Yeah. So it depends, I guess, on exactly what form these investigations take, and what exactly the government actors do. I mean, if the government is just speaking, and it's government speech, and they say we don't like these reports, or whatever, the government can do that. But once they start taking actual investigative steps, there are First Amendment retaliation claims that can be brought to say, "The only reason why the government is doing this to me is because of my First Amendment protected speech," in which case, that's also unconstitutional behavior. And so, we'll have to see how this plays out, but the First Amendment does have a role here, if they cross that line. It's going to be tricky to prove, and it's going to be difficult to see, because this is exactly why people worry about giving governments broad-ranging power to enforce certain laws in these situations: because they can be weaponized against political opponents.

But if they can show that it's being done purely because of their political speech, and their protected speech, there can be a claim there. But it's terrifying stuff, and that shouldn't be where we have to get to. Meanwhile, the "CEO of the platform" is tweeting about how protecting freedom of speech could not be more urgent and important than ever at this particular moment. So, I'm glad Linda Yaccarino is really doing good work here in reining in the platform's owner.

Alex Stamos:

Her CTO.

Evelyn Douek:

Yes, exactly.

Alex Stamos:

Doing just normal CTO stuff, like getting the attorneys general of politically aligned states to go after his critics.

Evelyn Douek:

I mean, seems like a great business strategy to me.

Alex Stamos:

Just a normal job description for CTO. Let me just go look through Glassdoor right now and see. Yeah.

Evelyn Douek:

Yeah, okay. Well, I’m amazed she’s still there, honestly. But good for her, I guess. Okay. And now, I’m embarrassed to admit, Alex, after all of the windup in last week’s episode, I actually don’t know what happened with the big game. I was traveling over the weekend, and we talked a lot about the big game, and I have no idea. So, did we win?

Alex Stamos:

Who is we?

Evelyn Douek:

I know. I don’t even know. I mean, obviously, we… I will tell the answer to that once I know who won.

Alex Stamos:

The Matildas did not win.

Evelyn Douek:

Damn it.

Alex Stamos:

Yeah. The University of California Golden Bears defeated the Stanford Cardinal-

Evelyn Douek:

That’s who I was going for.

Alex Stamos:

And it was not a very close game. So, go Bears. As expected, this was not one of those surprising big games: Cal just crushed Stanford, and Cal keeps the Axe, the Stanford Axe, the famous prize, which has a storied 100-year history that starts with it being stolen from a Stanford game, and then being hidden under the skirt of one of the female students to cross the ferry to Oakland, because there was no Bay Bridge yet. Yes, the Axe will stay in Berkeley for at least another year, and I won my bet against Mike McFaul. So, go Bears, keep on winning, and I do feel bad for the Stanford team. They've had a rough season.

Cal now has to defeat UCLA to go bowling, which will be difficult, because UCLA defeated USC, and USC at least was quite good earlier this year. As happy as I am to see USC lose anything, if the gaping maw of the earth swallowed the USC campus, I wouldn't be that upset. It does mean that Cal's got a difficult game coming up. But yes, a sad day, the last big game of the Pacific Coast Conference, the Pac-8, the Pac-10, the Pac-12, and I can't wait for the big game to be fought with the Atlantic Coast Conference symbol on the field next year. Once again, it'll be in Berkeley. You can look out at the eastern reaches of the Atlantic Coast, while you watch Stanford and Cal play in the Atlantic Coast Conference.

Evelyn Douek:

Right. We will have to have an update on that particular saga in a future episode, because there has been more to the story in the last couple of weeks.

Alex Stamos:

Yes, there's been more legal stuff. Yeah, there's been a bunch of lawsuits, and there's a stay. So, we should do a corporate governance episode, where we talk about OpenAI, and the Pac-12 board-

Evelyn Douek:

Right.

Alex Stamos:

… in the same discussion.

Evelyn Douek:

That's hitting all of our listeners' key interests. These are the things that they come to this podcast to learn about. So, with that-

Alex Stamos:

You got to find a niche, and you got to explain it.

Evelyn Douek:

Right.

Alex Stamos:

That’s right.

Evelyn Douek:

So, with that thrilling teaser, listeners, we bid you farewell. Have a wonderful Thanksgiving, and enjoy your holiday, unless you're a tech reporter, in which case, again-

Alex Stamos:

Or a ransomware actor, in which case, I hope you take the… Try turkey. It might be harder to get in St. Petersburg, but it’s delicious.

Evelyn Douek:

Is it though? I've heard mixed reviews. All right. And with that, this has been your Moderated Content weekly update. This show is available in all the usual places, including Apple Podcasts and Spotify. And the show notes are available at law.stanford.edu/moderatedcontent. This episode wouldn't be possible without the research and editorial assistance of John Perrino, policy analyst at the Stanford Internet Observatory, and is produced by the wonderful Brian Pelletier. Special thanks to Justin Fu and Rob Huffman. Happy holidays, and talk to you next week.