MC Weekly Update 12/12: THE PROPAGANDA PLATFORM (?)

Evelyn and Alex wonder if this podcast is the propaganda platform that Elon Musk has said Alex runs. Then they discuss Apple’s huge set of announcements about encryption this week and the balance between privacy and safety; their weekly check-in on the Twitter files and how things took a dark turn; and the Meta Oversight Board’s long-awaited decision on Meta’s cross-check system.

Show Notes

Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:

  • Apple is dropping its plan to scan its iCloud file storage service for child sexual abuse material (CSAM), a plan that was put on pause late last year due to privacy and security concerns. – Lily Hay Newman / Wired
    • More: In August, a New York Times article told the story of a father who learned he was investigated by the San Francisco Police Department and had his Google account deactivated after an automated tool for detecting abusive images of children flagged photos of his son that he had taken for the child’s doctor. – Kashmir Hill / The New York Times
  • Cue the music for the Twitter Files!
    • Alex attempted to have a productive Twitter chat with Elon Musk about transparency efforts that ended with the Chief Twit replying “You operate a propaganda platform.” – @alexstamos
    • The third installment in the series of tweet threads focused on the decision to deplatform former President Donald Trump following the January 6 attack on the U.S. Capitol. – Joseph A. Wulfsohn / Fox News; Charisma Madarang / Rolling Stone
    • An automated Twitter account that tracks Elon Musk’s jet, @elonjet, was restricted to make it harder to find. – Ellie Quinlan Houghtaling/ Daily Beast
    • Orin Kerr, a law professor at another California university, discusses why Twitter and Elon Musk should be careful in how they share information and communications with journalists (hint: it’s to do with the Stored Communications Act). – @OrinKerr
  • Musk tweeted an excerpt of former trust and safety lead Yoel Roth’s doctoral dissertation to falsely insinuate he supports the sexualization of children, opening up harassment and potential violence against the staffer he once praised. – Dana Hull, Kurt Wagner/ Bloomberg News
  • The Oversight Board released a long-awaited policy advisory opinion (PAO) with dozens of recommendations for improving Meta’s murky and controversial cross-check program, which gives VIP accounts on Facebook and Instagram an extra layer of review before enforcement of platform policies, with little transparency. – Jeff Horwitz / The Wall Street Journal; Oversight Board
  • Morocco’s win over perennial power Portugal and Ronaldo made it the first African or Arab country to reach the semifinals of the World Cup. Morocco will next face France, the nation that colonized it. – Issy Ronald / CNN

Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!

Transcript

Alex Stamos:

You could have Croatia versus Morocco, which if you had bet that money, you’re about to make… If you’re a half Moroccan, half Croatian dude, and you put in 10 bucks on betting that they’re going to be in the finals-

Evelyn Douek:

You’re a very lucky idiot, basically.

Alex Stamos:

You’re the next owner of Twitter. That’s it right there.

Evelyn Douek:

That’s right. Welcome to Moderated Content’s weekly news update from the world of trust and safety with myself, Evelyn Douek and Alex Stamos. Alex, my first question to you this week is, are we a propaganda platform? Because I noticed this week that the world’s richest man tweeted that you run a propaganda platform. And I was wondering, are we it?

Alex Stamos:

Yeah. I’m trying to figure out what the propaganda platform is that I run. It’s possibly this podcast, although with our dozens of listeners, I’m not sure. I feel like that’s a little bit of self-aggrandizement. It might be my research group at SIO. It is true, we’ve got five full-time employees, two postdocs and a bunch of students. Is that a platform? Our postdocs are great. I think they put out great stuff. I’m not sure I’d call it a platform.

Evelyn Douek:

It’s definitely a publisher. I don’t know whether it’s a platform. We can debate that at length.

Alex Stamos:

I do run a Mastodon instance, cybervillains.com. It has 90 users right now, so that’s huge, right? That’s my huge platform. Or it’s this podcast, or, I don’t know, maybe I wore a T-shirt he didn’t like. Yes, according to Elon, I run a propaganda platform, and that is an amazing, amazing statement from a man who just spent $44 billion to purchase one of the world’s most important social media networks, one that also happens to be the most targeted by foreign adversaries trying to influence American politics.

Evelyn Douek:

Well, it sounds very cool. I hope it is this podcast because I’m not doing anything nearly as awesome by myself. So coolness by association.

Alex Stamos:

Maybe this is the start of a Christmas romcom where I have a distant uncle who died and left me Parler or Gab. So we’ll figure that out.

Evelyn Douek:

How’s that for a Christmas present? Here’s this exploding toxic platform. Have fun. All right, so let’s get into our propaganda and spread the good word. I think let’s start with an announcement, or a series of announcements, from Apple this week, which might not be where people think we would start given all of the continuing fallout of the Twitter files, which we’ll come to. But you were saying to me before we started recording that if we’re talking five years from now about where the consequential stuff from this week is going to be, it’s in this series of announcements. So walk us through why you think they’re so important.

Alex Stamos:

So Apple made three big announcements that got really buried by the online culture war, but I do think they’re going to have a huge impact on overall safety and security and on how we’re going to deal with trust and safety in the future. The first announcement was new features in iMessage. iMessage is, by some measures, the largest of the end-to-end encrypted messaging platforms. It might be smaller than WhatsApp, but it’s up there with WhatsApp, with over a billion users. And iMessage is now going to have features that will allow you to verify who other people are, something that never really existed. Apple’s been criticized in the past for making it hard for people to really harden up iMessage, and in fact the features they’ve added in the past specifically made it into a white paper by the UK government, where GCHQ, the UK’s version of the NSA, proposed a model in which companies like Apple could insert a silent participant into conversations so that they could wiretap them.

So this is pretty clearly about the UK: Apple announced a number of features, including a certificate transparency log and user interface features that should hopefully notify people if that kind of thing happened. The second thing they announced was that they’re adding end-to-end encryption for most of iCloud. The big backdoor that has existed in privacy for Apple users around the world has been, for the most part, iCloud backups. If you back up your iPhone, that backup is something that can be retrieved if your username and password are stolen and somebody can trick you on two-factor. Which is how it has happened, I believe, for a number of people; celebrities and such often get their private photos posted, and I think that’s mostly from iCloud backup. But it is also available to Apple and therefore to law enforcement under lawful request. And so up to this point people have felt pretty comfortable about the end-to-end encryption on their iPhone.

We have the famous San Bernardino case and such, but if you were backing up your iPhone, then the FBI could go to Apple and say, give me their backup; it has always been available to the FBI under lawful process. Perhaps more importantly than the FBI, the data center that actually holds this data for citizens of the People’s Republic of China belongs to a joint venture that is controlled by the government, and there’s a real open question about what’s available to the Chinese government under what you and I would probably not consider appropriate process. So this is a big deal. It’s not all data: there are a couple of things they can’t end-to-end encrypt, which I think are reasonable exceptions. Looking through the white paper, there’s no conspiracy here, it’s just things that are really hard for them to end-to-end encrypt.

But most importantly, it’s going to cover iCloud backups. It will be something that you have to opt into, because there is a risk that if you make a mistake, if you lose your recovery information and don’t have a recovery contact, you lose all your data. But it’s a great thing and it’s going to be a huge deal. They have announced that they’re rolling it out in the US; it’s available right now to beta users, they’ll roll it out to all US users this year, and they want to roll it out to the entire world next year, which is a huge statement. They did not carve out an exception for China, so we’ll see whether they roll it out in China. And then the third announcement, which I think is tied to that, is that they’re giving up on a proposal they made last year to scan people’s devices, using a very complex cryptographic system, for possible child sexual abuse material.

That was very controversial; I personally wrote a New York Times op-ed about it, and I’m glad they dropped it. I feel like it did not strike the proper privacy-benefit balance. Instead they’re focusing on what they call communication safety, which is their device-side scanning for nude images and abusive material and such. I think that’s great. One thing it looks like they’re working on, for example, is detection of what we’ll call unwanted male genitalia on this podcast, though every woman with an iPhone has a different term for the kind of messages they get. And so that’s something they’re going to be working on. And the cool thing is they’re going to be doing this as an API that’s available to other apps.

So if you’re a much smaller developer that doesn’t have 100 machine learning experts working on this, then you’ll just be able to call into Apple’s code and say, hey, does this image have male genitalia that my user didn’t want to see? I think that’s absolutely the right focus for them. The trading of CSAM between consenting, conspiring adults is not the number one thing I’m concerned about with Apple’s products. I am much more concerned about communication between adults and children: grooming and sextortion and the like. And for that, the communication safety product is a fantastic response, and I really hope they set the stage here and push WhatsApp and other companies to follow them.
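
For readers who want to see the shape of what Alex is describing, here is a minimal sketch of a small app calling into an OS-provided, on-device classifier instead of shipping its own machine learning models. Every name in it (OnDeviceImageClassifier, SensitivityResult, should_blur) is an illustrative placeholder, not Apple’s actual API.

```python
# Illustrative sketch only: placeholder names, not Apple's real framework.
from dataclasses import dataclass


@dataclass
class SensitivityResult:
    is_sensitive: bool   # e.g., the image likely contains nudity
    confidence: float    # model confidence, 0.0 - 1.0


class OnDeviceImageClassifier:
    """Stand-in for an OS-provided framework; a real one would run a local ML model."""

    def analyze(self, image_bytes: bytes) -> SensitivityResult:
        # Placeholder logic so the sketch runs end to end.
        return SensitivityResult(is_sensitive=False, confidence=0.0)


def should_blur(image_bytes: bytes, classifier: OnDeviceImageClassifier) -> bool:
    """A messaging app would blur the image and show a 'tap to view'
    interstitial when this returns True. The analysis happens on device,
    so the image never leaves the phone for classification."""
    result = classifier.analyze(image_bytes)
    return result.is_sensitive and result.confidence >= 0.8


if __name__ == "__main__":
    print(should_blur(b"\x89PNG...", OnDeviceImageClassifier()))
```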

Evelyn Douek:

Yeah. I want to ask you a little bit more about the privacy-safety trade-off and why this proposal was controversial, and what you wrote in the New York Times, because I think that’s really the heart of it here and it’s worth spelling out. The FBI, as you mentioned, released a fairly boilerplate response to this announcement from Apple, saying it continues to be deeply concerned with the threat that end-to-end and user-only-access encryption pose because of the way they might impede law enforcement. And for child sexual abuse prevention advocates, rolling back these measures might not be seen as a win. And Apple is a place where people have lots of concerns about how seriously they take CSAM. In 2020, Facebook reported 20.3 million cases to NCMEC, whereas Apple reported only 265. So there possibly could be more that they’re doing on this front. So why were the previous measures the wrong balance?

Alex Stamos:

There are two classes of bad things that happen to children online. One is where only adults are involved, where adults are conspiring or trading imagery. And the response to that is often scanning using hash sets to look for known CSAM. NCMEC, the National Center for Missing and Exploited Children, maintains a database of all the CSAM that’s been found and reported to them, and then they release hash sets that allow companies to find that content. That is where those 20 million Facebook reports mostly come from: known CSAM. The other class of issues is when you have an adult and a child who are communicating with one another and something bad happens: the adult can convince the child to come in for physical contact, or convince the kid to send them images, which can then be used to extort them.

When you’re one of these companies, for any of these safety things I think you have to really focus on: what are the kinds of abuses our specific products help with? What are the things where we can make bad guys’ lives worse? And the truth is, Apple doesn’t really build any products where it’s really, really easy to share huge amounts of imagery. They do have iPhoto shared albums and such, but it’s honestly not a great product for this kind of thing compared to Google Drive and Dropbox and Amazon S3 and all of those different products. Those are the companies that I think have a much bigger issue when it comes to the trading of large amounts of CSAM. But that is what they were basically targeting with their proposal: people having large amounts of CSAM on the phone itself.

What worries me more is iMessage, because iMessage, unlike Signal and WhatsApp, allows you to have non-phone-number identities, which are in some ways easier to find and also increase discoverability. My daughter has an iPad; she uses iMessage to communicate with her friends. It is not under a phone number, it is under her email address. And up to this point, iMessage has had effectively no controls. If some adult figures out what her iMessage address is, they can reach out to her. And so they already have a limited feature where you can say for a kid’s account that they’re not allowed to send or receive nude images, and they’re going to extend on that.

And I think that is an appropriate place for them to focus. The benefit from a privacy perspective is you can now put the protections on the device, and the focus of the protections can be to have the potential victims, the children, report the conversation themselves, so they are making the decision, or they take it to their parents saying, hey, I got this pop-up. And the parent says, oh my God, and they hit report, and then that goes either straight to one of the child safety centers like NCMEC or via Apple. Traditionally iMessage has not really allowed you to report anything bad except spam, and so this is an extension; Apple is finally kind of taking responsibility for the things that happen on iMessage.
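
For readers curious about the hash-set approach Alex describes above, here is a minimal sketch. It uses exact SHA-256 matching for simplicity; production systems use perceptual hashes such as PhotoDNA so that re-encoded or slightly altered copies still match. The hash values and function names below are made up for illustration.

```python
# Illustrative sketch of hash-set matching against known, previously
# identified content. Not a production detection system.
import hashlib

# Placeholder entries, not real hash values. In practice, hash lists of
# known CSAM are distributed to platforms via organizations like NCMEC.
KNOWN_BAD_HASHES = {
    "0f3a...example...",
    "9c1d...example...",
}


def fingerprint(file_bytes: bytes) -> str:
    """Compute a fingerprint of an uploaded file (exact hash here;
    real systems use perceptual hashing to survive re-encoding)."""
    return hashlib.sha256(file_bytes).hexdigest()


def matches_known_content(file_bytes: bytes) -> bool:
    """True if this exact file matches a known, previously reported item.
    A match would typically trigger a report to NCMEC rather than any
    user-visible action."""
    return fingerprint(file_bytes) in KNOWN_BAD_HASHES
```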

Evelyn Douek:

Yeah, that’s great. And I think in this context it’s always useful to point to a fantastic article that Kashmir Hill wrote in the Times earlier this year about the technology that companies use to flag not, as you said, previously identified CSAM, but newly identified images. And this is a great story about a situation where Google flagged a picture that a dad took of his son’s genitalia to send to the doctor. Clearly a good use case for sending that photo. And there were all of these ramifications for him, including disabled accounts and flagging to a police department. I think it shows this technology isn’t as good at identifying newly created images. And so when you’re thinking about possible error rates and possible costs, that’s an important thing to factor in as well.

Alex Stamos:

Yeah.

Evelyn Douek:

Anything else on the Apple story before we move to…

Alex Stamos:

No. I’m glad Apple’s doing it. I think, like I said, the big question is we now have to watch China. You said the FBI put out a kind of [inaudible 00:11:22] press statement. I didn’t feel like the FBI’s heart was in it. I think the FBI’s just given up on this; they’re working on device-side hacking, and the fact that almost any criminal defendant, if you get them in the box, will unlock their phone for you, is what I’ve always heard from FBI agents. But the Chinese did not put out a statement. I am sure that the head of Apple Beijing got summoned for a meeting. And so whether Apple has the guts to stand up against the People’s Republic is the real test here.

Evelyn Douek:

A common theme of this podcast, Alex: the “but China” caveat on all of the stories, and for good reason. It is important to think about that aspect. So here we are, the regular Elon Musk segment of the show. It took a dark turn this week. We had another couple of “episodes” of the Twitter files.

Alex Stamos:

The Twitter files.

Evelyn Douek:

What is your signal amongst all of this noise that’s going on Alex? What were you focusing on this week?

Alex Stamos:

Okay, so there’s the substantive stuff and then there’s the dark culture war turn. To talk about the substantive stuff, the next edition of the Twitter files was specifically about the decision making around January 6th. I do think it is newsworthy to talk about what kind of internal conversations led to Twitter finally taking Donald Trump down.

Evelyn Douek:

Right. If any major news outlet got these internal communications, they absolutely would be reporting it. I think there’s a lot of questions to be raised about the way in which it’s being done and the framing and whether it’s being made too much of. But this is newsworthy content for sure.

Alex Stamos:

Right. I think it would be newsworthy. I think the problem is that it is being framed as this all being a conspiracy, all being unfair, and there is no evidence of that. Obviously there’s only been one president taken down. But in these other situations, where they talk about what they call shadow banning, which is really just eliminating the use of different recommendation interfaces for certain accounts that are repeat abusers, the assumption is they’re only looking at conservative accounts, so they’re only finding conservative accounts. There’s no scientific look at: here are all the different accounts that have these restrictions placed on them. And so once again, you have real screenshots being used as evidence for assertions that are not backed up by any kind of empiricism. And then specifically around January 6th, it’s what you would expect: a discussion between executives inside the company of how these really exigent circumstances, an attempt to overthrow a valid election that was being encouraged by the current sitting president of the United States, matched up to Twitter’s policies.

Turns out they did not have an insurrection policy. And so there’s a lot of discussion of whether this was incitement to violence, political disinformation, and such. Eventually they make the call, and I think it was the right call. At the time, I said Twitter and Facebook kept Trump’s account up out of respect for the democratic process, the idea that the president of the United States or a major candidate should be on these platforms. But if Trump shows disrespect for the process, then you should no longer feel bound by that, right? If he is going to try to overthrow a democratic process, then your respect for democracy should not be used against you. So it’s newsworthy, but Musk continues to try to frame this up as scandalous. And as you might expect, the standard reaction from people on the Republican side is that this was election interference, this was that. And then there was the reaction from Donald Trump. Do you want to talk as a constitutional scholar on what Trump’s reaction was here?

Evelyn Douek:

Was this the prompt to, what was it? Suspend the constitution?

Alex Stamos:

Terminate.

Evelyn Douek:

Declare… Terminate the Constitution.

Alex Stamos:

Not the whole constitution, just the parts that keep him from being president right now, which I think are, I’m going to say pretty important parts, right? That’s like probably most of article one. Yeah.

Evelyn Douek:

Yeah, exactly. A lot of the constitution says you can’t just, there’s no big red terminate the constitution button. Sadly.

Alex Stamos:

Yes. Yeah. Yeah. That’s not in there. That’s not the secret article seven or whatever is how to terminate.

Evelyn Douek:

Yeah, that’s right.

Alex Stamos:

And so it’s newsworthy, it’s interesting for people to be able to see this discussion. It’s interesting to see the whole “shadow banning” thing; there’s this big argument about what Twitter did, shadow banning or not. And if you want to steelman these arguments, I think it is appropriate to say that Twitter was too loose in their definition, because Twitter does have a blog post where they say, this is how we define shadow banning and we don’t do it, but they were doing other things and they could have been more transparent about it. In the end, though, having intermediate steps where, instead of taking an account down, you allow the account to exist and its followers to see its content, but you’re not going to recommend that account to anybody and you’re not going to put it in people’s algorithmic timelines if they did not ask to see it, that is an appropriate middle step that allows you to have more speech while also reducing your responsibility for it as a company.

And I think Musk himself endorsed that idea when he paraphrased our colleague Renée DiResta in saying freedom of speech, but not freedom of reach. Musk actually said that, and this is exactly the implementation of that. And yet they’re saying that such an implementation, which Musk has said is what he wants more of, is a scandal. And so, as we talked about at the top of the show, I proposed, well, here’s a way you could actually be transparent about this. You could have the API say what actions have been taken on an account. You could also release a database of what kind of decisions have been made, so people can do real empirical work on whether this is something that’s politically biased. And his response to my proposal was that I run a propaganda platform. So I guess that’s a no. Or I wanted to respond with the clip of, so that’s maybe then, right? There’s a chance? You’re saying there’s a chance?
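
As a rough illustration of the transparency Alex is proposing, the sketch below shows what a per-account disclosure of visibility actions might look like, something researchers could aggregate across many accounts to test claims of political bias empirically. All field and action names are invented for this sketch; no such endpoint exists in Twitter’s API.

```python
# Hypothetical shape of a per-account transparency record; illustrative only.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List


@dataclass
class VisibilityAction:
    action: str          # e.g., "exclude_from_search", "exclude_from_trends"
    applied_at: datetime # when the restriction was applied
    policy: str          # the written policy the action was taken under


@dataclass
class AccountTransparencyRecord:
    account_id: str
    actions: List[VisibilityAction] = field(default_factory=list)


# With records like this published for all accounts (or via an API), outside
# researchers could compare restriction rates across topics or ideologies
# instead of arguing from a handful of screenshots.
```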

Evelyn Douek:

Yeah. Hold your breath, Alex. Good plan.

Alex Stamos:

Yeah.

Evelyn Douek:

I think you’re right. The thing is, these are very fascinating exchanges. One thing you absolutely do not see is executives saying, oh yes, finally an opportunity for us to chuck out this Republican president. God, we hate the Republicans. Let’s do it. They’re really trying to think within the realms of their previously stated policies as the facts developed, and to develop a framework for thinking about these things. It was hard. I was critical of some of the reasoning at the time, the way that they talked about their decision and the way that it fitted into previous policies; it didn’t quite match up. But that’s the thing about content moderation: we are asking these platforms to create a systematic, universal, comprehensive set of rules that will cover every possible speech situation that arises.

And it’s just not possible to do that in advance. There’s one thing that Taibbi said in his thread here, which is that before the riots, the company was engaged in an inherently insane, impossible project trying to create an ever expanding, ostensibly rational set of rules to regulate every conceivable speech situation that might arise between humans. And he’s right. That is what these rule books are trying to do. I’ve likened them to the Borges story where people are trying to create a map of the world and it keeps getting bigger and bigger until the map of the world is the size of the world in order to capture all of the detail. It’s not possible to rationalize and regularize and make foreseeable every single speech situation. So there’s always going to be these hard calls that your rules don’t completely cover. But I think what you see in these documents are good faith attempts to, in some ways, tie themselves to the previous rules. And there’s more work to be done.

Alex Stamos:

Yeah. Taibbi is right that trying to build platforms that allow for communication between hundreds of millions or billions of strangers puts you in a very difficult situation. But he doesn’t have a proposal for what else he would do. And if Taibbi had a proposal, it would be completely incompatible with everything Musk has said, which is that Musk wants to go after the really bad guys, he wants to go after incitement to violence, he wants to stop spam. All of these things Musk calls for are incredibly difficult to define and to apply.

And so it’s just fascinating, because I thought things were going to get better once Musk had responsibility here and saw how hard this is. But in some ways it’s getting worse, because he’s creating this world where his culture war arguments are disconnected from what he’s actually doing. And because it’s just a completely political culture war situation, people don’t care that his actual actions don’t match up to what he is saying, as long as he’s owning the libs. Which brings us to how he tried to own the libs really hard this last week, where it got really dark.

Evelyn Douek:

Right. Before we get to the dark part, just one light thing that I am enjoying in this saga: you were making the point that he should be offering comprehensive transparency, not just about the past but also going forward, including about his own communications. Two little instances we’ve seen in the past week: the @ElonJet account, which tracks the movements of Elon’s private jet, seems to be severely limited in its reach and amplification, and apparently the account that posted the video of Elon getting booed when he went on stage at a Dave Chappelle show last night was also disabled relatively quickly. Who knows why either of those things is happening. We don’t know for sure that they aren’t just random mistakes, but if you’re calling for transparency, it would be nice to know how these suspiciously aligned mistakes keep happening on these kinds of accounts.

Alex Stamos:

Right. The other thing that came out is that there was a leak, and not to one of the big Twitter files journalists, showing that the account on Twitter that follows Elon’s jet has had the exact same limits applied to it that were placed on Libs of TikTok.

Evelyn Douek:

Exactly.

Alex Stamos:

If we keep going down this route where trust and safety really just becomes about Elon’s ego, then yes, it’s going to become harder and harder to argue that he is a free speech champion, for sure.

Evelyn Douek:

And one last thing on these Twitter files.

Alex Stamos:

Twitter files.

Evelyn Douek:

Thank you. I was waiting for it. About…

Alex Stamos:

You need to smoke more so you can get down there.

Evelyn Douek:

I’m sorry. Yeah, about limiting the reach on the accounts. Part of the controversy here was there were screenshots showing how this is flagged in internal systems, where it has the account and it has these tags on it. I don’t remember the exact words now, but basically saying do not amplify, or on a trends blacklist, that kind of thing. There was some controversy here because on those screenshots you can see, over on the left-hand side, a little button that says direct messages next to a particular account. And there are a lot of questions about what happens if the journalist who posted this screenshot clicks that direct message button. Now, as it turns out, you can tell from the interface that the picture was actually taken from the account of one of the people still currently at the company.

Alex Stamos:

Right. Yoel Roth’s replacement, the current Head of Trust and Safety.

Evelyn Douek:

Right.

Alex Stamos:

She had been logged in and apparently showed these screens, and then the reporters took the screenshots.

Evelyn Douek:

So they were just screenshots from her. So it doesn’t appear, in this particular case, that access was provided; we don’t know. The statement from her is that she did not provide access to direct messages and purely provided the screenshots. But if she did, or if in the future someone were to provide access to direct messages within Twitter, other people’s private communications, that’s going to raise a whole bunch of legal issues. And so we thought it was something worth covering, because given all of the solicitude that Musk has shown to legal constraints so far, this might be one he should be aware of, just in case. And so we brought in an expert to get us up to speed on the Stored Communications Act. He is Professor Orin Kerr from Berkeley Law.

Orin Kerr:

First, what is this law? Well, the Stored Communications Act is most often known for being a law that limits access to email. You may have heard about the law in the context of the government trying to compel access to a person’s email or a person’s files or even metadata. And the Stored Communications Act imposes a complex set of limits on that. But actually the more important part of the Stored Communications Act is probably the limits on voluntary disclosure. And here’s the problem, an internet provider could decide to just show everyone your email if it wanted to under the Fourth Amendment. It’s a private actor, it’s not regulated by the Fourth Amendment. And so what keeps an internet company, a Twitter or a Facebook or any of these companies from just going into your email and looking for stuff and sharing it with whoever they want to share it with?

Well, the Stored Communications Act imposes these limits on any internet provider that is what the statute calls a remote computing service or a provider of electronic communication service. It basically means any cloud provider or email provider, or any service that’s kind of a combination of those. If they provide services to the public, they’re not allowed to disclose either your content records or your non-content records, subject to a bunch of different exceptions, which are more or less Fourth Amendment-like exceptions. What does this mean? Practically speaking, if you’re Twitter, you can’t disclose the contents of communications to outsiders unless one of a number of different exceptions applies. You can disclose non-content records to a non-government actor; that’s one of the exceptions for non-content records. But for contents, it’s a relatively limited set of exceptions that Twitter would need to comply with in order to allow a disclosure of contents.

Well, what are those sorts of exceptions? Consent of the customer or subscriber, for example. There’s an exception for something that appears to pertain to the commission of a crime that’s inadvertently obtained; that’s basically like the plain view exception of the Fourth Amendment. And so there are a couple of different exceptions, but the basic idea is that an internet provider like a Twitter can’t ordinarily disclose content records belonging to users, because this law says, hey, if you disclose that knowingly, you can be sued under this law, the Stored Communications Act. Now, we don’t actually know if Elon Musk did this, if Twitter did this; there’s some dispute in the record. But ordinarily the key privacy protection for your internet messages, your DMs, your private messages, your emails, your Facebook messages, whatever those content messages are, is that the provider is not allowed to disclose them unless one of these special exceptions applies. And just doing it for fun, or for the political agenda of the company’s owner, is not one of those exceptions.

Alex Stamos:

So Evelyn, I had a tweet thread where I proposed that releasing such DMs would be in violation of 18 U.S.C. 2702, and Orin Kerr, one of the world’s experts on this, said that I was right. Can I get a JD now? Does that count under the California Bar Association’s requirements?

Evelyn Douek:

Right. This is the Stored Communications Act unit in the Elon Musk JD, which I am still cobbling together. We are rapidly getting through all of the various parts of the law. So I hope someone is listening to this, I wouldn’t want to cross Orin on this particular point, and keeps it in mind when they’re thinking, hmm, what should we release next?

Alex Stamos:

We were in discussion with another law professor about whether Musk can violate one rule from every title of the US Code. And some of them get pretty tough. There’s a railroad one, but then somebody said, oh, The Boring Company could probably-

Evelyn Douek:

Right. Obviously.

Alex Stamos:

Yeah. And so write in, if you’ve got ideas of interesting ways that Musk can add to the syllabus.

Evelyn Douek:

I might actually put this together at some point. It’s good fun. Okay, so that’s all fun and games, but this was actually a sad and dark week in many respects. So yeah, tell us about the turn for the worse here.

Alex Stamos:

Yeah, so one of the things that’s been happening with the Twitter files the entire time is that they have not been redacting names. And a number of people who worked at Twitter have now been getting a huge amount of abuse heaped on them because they are getting named as the people behind some of these decisions. Nobody more so than Yoel Roth, who was the Head of Trust and Safety for a while at Twitter. He is not the person who made a lot of the final decisions here, but as you can imagine, in his role he was involved in almost all the conversations. And there was already a bunch of people trying to effectively QAnon him. Yoel is openly gay, has talked about it in the past, lives with his husband, and they have been trying to say that he is a groomer and a pedophile and such.

And this got a huge lift when Elon Musk himself took out of context a part of Yoel’s PhD thesis, which was talking about the fact that there were underage people on Grindr and that there was a difficult safety issue there. This was not what the thesis was about; it’s effectively an aside. But Musk took that and then implied that Yoel is a pedophile to his hundreds of millions of followers. I don’t know what to say about this other than this is one of the most horrible, disgusting things I’ve ever seen a CEO do. Musk is angry at Yoel for what he sees as a betrayal. Yoel tried to stay on under Musk. He defended Musk publicly from a number of claims around hate speech and such, using actual evidence against people’s non-empirical claims. But then he couldn’t take it anymore and he quit.

And he wrote a New York Times op-ed, which was a factual op-ed that we have cited. And he gave an interview to Kara Swisher. I think that was pretty much his entire media engagement. And in his interview with Swisher, he did not slam Musk that hard. He was actually quite even-handed, and effectively said he couldn’t stay because if you’ve got one guy making every decision, there’s no reason for a Trust and Safety head, which is pretty reasonable. And that was enough of a betrayal for Musk to call him a pedophile to a whole set of people who have a history of violence here. We’re just coming out of a horrible shooting in a gay nightclub based upon people talking about grooming and kids and drag shows. We’ve had multiple death threats and bomb threats against doctors and hospitals involved with this. We have a long history of violence coming out of this movement.

And Musk effectively pointed them all at Yoel, which is just morally and ethically completely bereft of any kind of humanity. And I think it’s going to be a turning point. I think this week actually was a turning point for Musk, because he is now going beyond the standard culture war into the depths of the QAnon corner that a lot of people on the Republican side have tried to isolate and back away from. And I think that’s going to be a problem.

I think it’s also going to be a problem for him because there’s a little bit of a Musk reputation bubble right now, where it’s trendy in Silicon Valley to be seen as a supporter of Musk. And I think that bubble’s about to pop really hard. For all of these otherwise legitimate people who have been cheering him on, venture capitalists, entrepreneurs, some people I really used to respect who have joined his team, I think it is going to become very difficult to keep supporting him now that he is doing this kind of thing. And we’re going to start to see people back off in the next couple of weeks.

Evelyn Douek:

Yeah, again, just to echo everything you said, this is truly repulsive and a very dark turn, and I can’t work out whether he’s so far gone that he actually believes this stuff, he’s been so hard red-pilled that he’s just completely lost touch with reality, or whether it’s some sort of fun and games, or maybe, more likely, he wants to chill people. There was reporting this week about a memo he sent to his employees telling them to shut up and stop leaking to journalists. He’s all for transparency except when it’s against him, coming from his employees.

Maybe this is a shot across the bow to everyone that knows anything about Musk who might say anything. And it’s ludicrous because, as you said, Yoel was really… Kara in that interview was inviting him to go hard on Musk and say lots of critical and fun things, and he was really principled about it. He said, no, that’s not consistent with what I saw; there are areas in which he is taking legitimate action against hate speech, the example that you gave. But it does seem that the message is just: shut up or I will sic my people on you.

Alex Stamos:

I think that’s right, and that might work in the short term, but in the long term I think this is really hurting Musk. I’ve not seen anybody destroy their reputation this quickly, this publicly. And I have a thesis of what’s finally going to be the brake on Musk’s behavior. Everybody’s saying, well, he’s a billionaire, he can do whatever he wants, there’s no controlling him. But Tesla’s stock is down 57% now, year to date. That is significantly more than the S&P 500. It’s significantly more than other automakers. That is not a coincidence. If you mark some of the biggest drops, they come after things that he’s done with Twitter and some of his more erratic stuff. And I think there’s a lot farther for it to drop. We’re starting to see the brand damage against Tesla; measurements of brand satisfaction and what people think of the Tesla brand have dropped significantly.

They’ve also dropped in a particular way: it’s gone up a little bit with Republicans and dropped massively with Democrats. Well, who buys Teslas? Who are the people buying $80,000 electric cars, $100,000 solar roofs, $20,000 Powerwalls? It’s college-educated suburbanites and urbanites. So I think this is going to let Tesla stockholders know that he is throwing money at Twitter, he’s going to lose money here, and he’s going to have to sell more Tesla stock, which is going to drop the price. And right now I think he’s got some protection because the board of directors of Tesla is mostly friends of his; one of the directors is his brother. The current board composition seems completely incompatible with what you hear about good public company governance. But the stock may keep dropping, and I think it has a lot further to drop because their P/E ratio is still over 50.

So they’re way out of band for the P/E ratio of any reasonable automaker, or anybody else that makes actual physical products. Tesla is not a software company; they don’t have zero-marginal-cost customers, they have to actually build stuff. If it continues to drop this hard, then you’re going to end up with activist investors getting in, and those activist investors are going to push to get onto the board and then to put pressure on Musk, or even maybe kick Musk out as CEO. Because if you are a Carl Icahn or those kinds of activists, you’re going to move in eventually, maybe when Tesla is under 100 bucks a share, which is definitely foreseeable in the next three months at this current pace.

You’re going to see these guys crowd in, buy a big enough percentage, and then you’ll have the institutional investors, with ISS and the Vanguards and Fidelities and such, backing them, because Musk is destroying this value. And that will be an interesting day, the day that Musk sees that his control of Tesla might be going away; he doesn’t have magical Google-style shares or Zuck shares here. That might finally be the thing that slaps him in the face about whether or not it’s worth it for him to do this.

Evelyn Douek:

I really hope you’re right, not just because it would be nice for this podcast to continue its good track record, but also because it would solve a moral dilemma that I’m having. This is total anecdata, an N of one, but as a new law professor I was quite excited to get a Tesla. I like the environment and they seem like nice cars, and now I feel like I’m really in a pickle because it is inconsistent with my public persona and, obviously, my morals. It’s not just that I don’t want to look bad. And so it would be great if the board could just solve that one for me.

Alex Stamos:

Yeah, the cool thing is there are a lot of people who make nice electric cars now. I was actually already going to sell my Tesla because it’s about 10 years old and it’s falling apart. And it turns out the companies that have made cars for 100 years now know how batteries work. So we’ll turn the after-show into a car review segment.

Evelyn Douek:

That’s right.

Alex Stamos:

We’ll get some sponsors.

Evelyn Douek:

That’s right, so if anyone’s listening and wants to buy in, let us know. On the other hand, Musk has accomplished something that I thought nobody else could do, which is to make Meta and Mark Zuckerberg look good. Because across the other side of Silicon Valley this week, the Meta Oversight Board released its decision on the cross-check system that Facebook, now Meta, has in place. Now this is a decision that we’ve been waiting on for about a year. That’s too slow. But beyond that, there are a lot of similar questions here, questions that were being explored in the Twitter files this week, about how social media companies should treat particularly high-profile accounts with particularly high visibility and the potential to do more damage, and how companies should treat them when they breach the rules. And this was something that came out in the Facebook files. I don’t know if we have a different sound effect that we want for the Facebook files.

Alex Stamos:

We’ve run out of voices that I can do. It’s going to be like the Facebook files in a Yoda voice, but I feel like maybe then we’re going to get in trouble with Disney. So yeah.

Evelyn Douek:

Files, the Facebook. Facebook has a list of particularly high-profile accounts, and if those accounts get strikes, they are not actioned immediately but are sent for further review, because the costs of making a mistake in that context are much higher, one way or the other. Now, this is a 57-page report from the Meta Oversight Board. It goes into a lot of detail about how the systems work, which I think is intrinsically valuable in and of itself; just as the Twitter files have some interesting details about how internal processes work, the internal processes of Meta and how it treats these particularly high-reach, or otherwise important, accounts are really interesting. And then the Oversight Board issues 32 recommendations, and I’m not going to go through them all, or any of them, right now. But the overall ethos of the decision is that Meta erred on the other side of the balance from what Twitter had done, and the Oversight Board took issue with that.

What I mean by that is, when these high-profile accounts breached the rules, or were flagged as potentially breaching the community standards, the content was left up while it was sent for further review. And that meant the content was sometimes left up for months, or in other cases just for days, while it was sent to internal systems and internal teams and higher-ranked employees to take a look at. And that was because Meta decided that false positives were more damaging than false negatives, and it wanted to make sure it didn’t infringe on its value of voice. And the board said that in these particular cases, that’s the wrong balance; you’ve got it wrong. Now, I personally find that hard to square with many of the board’s other decisions, where they have really slapped Meta on the wrist for not prioritizing voice highly enough.

And we will potentially come back to this and do more of a deep dive on it in 90 days, which is when Meta’s response to these recommendations is due, because for my money, that’s where the real rubber hits the road. This is really interesting, and I learned a lot from it, but the proof is in the pudding as to how Meta responds; whether this actually improves the system depends on that. But in the meantime, I thought it was an interesting document, and it at least shows that if you want transparency, and if you want to think about system design and system improvement, here is a completely different model from releasing some internal DMs, selectively screenshotted and sent to a handful of favored journalists, one that at least hopefully is more legitimate and unbiased.

Alex Stamos:

So the report, I think, rightfully recognizes that you need to have something like cross-check when you operate at scale. You cannot run a social media network where the account of the president of the United States can be suspended, or even have its password reset, by every single content moderator, by every single contractor. So you do need to have protections for really high-level accounts. Also, these big accounts often get mass reported; if somebody is controversial, if they’re a political figure, you’ll have mass reporting campaigns. And so you want to have a very limited set of people who are handling these kinds of things. But as the report points out, the rules were not applied evenly. You want cross-check to be about protecting the process, but in the end the decision should be the same, and they had a bunch of examples where the final outcome was not the same. Yes, you should have more adjudication when you’re reporting somebody with 50 million followers.

That’s what you’re going to need for an account like that, but in the end it should be a fair application, and they hit Meta pretty hard on that not being fair. Like you said, the Oversight Board looks pretty good compared to the alternative. This is what happens when you have human rights lawyers and law professors and advocates and such spend months and months looking at a problem in depth. The other option is you just have Bari Weiss do a Twitter thread and drive death threats to people. I think this is a better way to make your policies better, but I guess we’ll see who’s in better shape at the end of next year, Twitter or Facebook.

Evelyn Douek:

Yeah. And I think the other thing that the decision really highlights, which is worth underlining, is that it’s all about system design and not about individual decisions. The issue here was: how was the overall cross-check system designed? How was the list compiled? Who were particular decisions sent to? What were the timelines? How were these teams resourced? Was there adequate separation within the company between the lobbying and government-relations arms, which have conflicting interests in how these decisions come out, and the trust and safety teams charged with making the decisions and looking at whether the content actually breached the rules? So I think it’s underlining that this is not about an individual decision, this is about system design, and those are really hard questions. Katie Harbath, a former Facebook employee, wrote quite a good post, I thought, on how it’s actually quite hard to find a list of politicians in the world.

If you just want to go and say, make a list of politicians whose cases we should probably double-check our decisions on, there is no database of all important public officials for whom you might want to make sure you’re getting the decisions right. And so there are all of these things that I think need to be thought about more carefully. One of the worst things highlighted in the decision was the disparate impacts and disparate resources in different parts of the world. The time to decision in Third World countries was much longer than in places like the US and Canada, which is obviously a problem. But those are the kinds of questions that we should be asking.

Alex Stamos:

A lot of this stuff actually came out of the 2016 investigations and the protections that followed. When I was at Facebook, the first elections after the US 2016 election were in France and Germany, and so a bunch of work went into whether we could identify the accounts of politicians, both for content moderation protections and for security protections. We did things like require two-factor to be enabled for every French politician, every German politician, and all their staffers. You can find the list of people who are running for a parliamentary seat, but finding the people who have control of their accounts, and the actual staffers who are doing important things, turns out to be super hard. And what we saw with the GRU targeting in 2016 is that it is not beyond them to go figure out who actually works in a political party, which 23-year-old you can spearphish.

But you’re right, that work definitely benefited from the fact that there are large teams in Berlin and Paris working for Meta who could go coordinate that kind of effort, and that is not true for Bangladesh or Sri Lanka or anywhere else. So yeah, I think there’s a lot that’s appropriate in this opinion. The cool thing here, too, is that, like you said, this is a design issue; it feels like a much broader scope than what was originally intended for the Oversight Board, and it’s great. One of my problems with the original structure of the Oversight Board was that it was all about individual content decisions, and it’s just ridiculous to have this much brain power on whether a single piece of content should stay up or down.

You want this brain power on big-picture issues like this. And the cool thing, because this stuff is all public, is that there’s now going to be this database of all of these mistakes that were made by Meta and the solutions and recommendations from the Oversight Board. So if you’re starting, or you’re already at, another social media company, this is actually pretty useful: everybody needs to have a cross-check-like program, and this is a great checklist of the things that went wrong and what you can do to prevent making the same mistakes.

Evelyn Douek:

Yeah, don’t get me started. I’ve literally written hundreds of thousands of words on the Oversight Board and the model and its strengths and weaknesses, and I’m sure we’ll do more deep dives on that. We are at time, thankfully, for all of our listeners; otherwise you’d be in for a real treat. You did mention Paris, which is a nice segue to our regular sports segment of the week, which I have been requested to keep in. I don’t know why, it’s not particularly high value content, but I did see that France is through to the semifinals, which is exciting.

Alex Stamos:

Yes.

Evelyn Douek:

What is our sports update this week, Alex?

Alex Stamos:

Well, the big story from the World Cup is that Morocco is now in the semifinals. They are the first Arab country, and the first African country, to make it to the semifinals of the World Cup. Obviously this being played in Qatar has been negative, given all of the horrible human rights impacts of the World Cup in Qatar. The flip side is that the Moroccan team is gaining a huge amount of support from folks from throughout the Arab world who are traveling there, and they are now playing their old colonial overlords. So you effectively have something where at least three quarters of the planet is rooting for the underdog here. This will be fascinating. Also, my colleague David Thiel pointed me to this; I didn’t know about the whole history of the Moroccans, the Moors, conquering the Iberian Peninsula. So there is a historical echo here from the eighth century or something: they’ve beaten Spain, they beat Portugal, and now they’re going up against France. Pretty cool. That’s going to be Wednesday morning Pacific time, Wednesday night in Qatar.

Evelyn Douek:

Excellent. And by the time we record next week, we will have the end of this hopefully fairytale story to update you on.

Alex Stamos:

Well, yeah, you could have Croatia versus Morocco, which if you had bet that money you’re about to make… If you’re like a half Moroccan, half Croatian dude and you put in 10 bucks on betting that they’re going to be in the finals, you…

Evelyn Douek:

You’re a very lucky idiot. Basically.

Alex Stamos:

You’re the next owner of Twitter. That’s it right there.

Evelyn Douek:

And with that, this has been Moderated Content for the week. This show is available in all the usual places, including Apple Podcasts and Spotify. Show notes are available at law.stanford.edu/moderatedcontent. This episode wouldn’t have been possible without the research and editorial assistance of John Perrino, policy analyst at the Stanford Internet Observatory, and it is produced by Brian Pelletier. Special thanks also to Alyssa Ashdown, Justin Fu and Rob Huffman. See you next week.