MC Weekly Update 1/30: No One Expects the Copyright Order

Alex and Evelyn discuss updates to the story about the BBC documentary take-downs in India; the latest Twitter Files on the Hamilton 68 bot dashboard; TikTok’s charm offensive and … Oracle’s (?!?) role; Apple’s expansion of censorship for China to Hong Kong; and the Financial Times’ admirable admission that it wants nothing to do with content moderation.

Show Notes

Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:

India Update

  • At least some of the YouTube, Meta, and Internet Archive takedowns of clips from a BBC documentary that examines Prime Minister Narendra Modi’s political rise were due to copyright claims made by the BBC, rather than requests made by the Indian government. Maybe they could have mentioned that a bit earlier? – Rishi Iyengar/ Foreign Policy, Russell Brandom/ Rest of World, Internet Archive
  • Luckily, Twitter owner Elon Musk chimed in with a tweet reply that he hadn’t heard of the issue, adding “It is not possible for me to fix every aspect of Twitter worldwide overnight, while still running Tesla and SpaceX, among other things.” – @elonmusk
  • Twitter reinstated Indian Hindu nationalist accounts previously suspended for hate speech against Muslims. – Newley Purnell/ The Wall Street Journal 

Twitter Corner

  • A new Twitter Files thread on the German Marshall Fund’s Hamilton 68 project, which tracked Russian influence operations on Twitter, illustrates the dashboard’s flawed methodology. That doesn’t change the fact that there was Russian interference during the 2016 U.S. presidential election. – @mtaibbi
  • Musk made the rounds on Capitol Hill, meeting with House leadership to ensure that Twitter will be “fair to both parties.” We are sure there will be tons of transparency. – Tony Romm, Faiz Siddiqui, Cat Zakrzewski, Adela Suliman/ The Washington Post 
  • Twitter will allow anyone to appeal an account suspension, starting this Wednesday, February 1. – @TwitterSafety 
  • And Twitter is re-suspending some of those accounts. White supremacist and Holocaust denier Nick Fuentes was suspended less than 24 hours after his account was reinstated. – Julia Shapero/ The Hill
  • In completely unrelated news, Twitter is being sued in Germany over failing to remove antisemitic hate speech. – Molly Killeen/ Euractiv, Aggi Cantrill, Karin Matussek/ Bloomberg News

TikTok Offensive

  • TikTok is going on the offensive with public engagements explaining its private negotiations with the U.S. government. Executives are briefing members of Congress, academics, and think tank researchers about Project Texas, the company’s plan to audit content recommendation systems and securely store and process U.S. user data in partnership with Oracle. – Cecilia Kang, Sapna Maheshwari, David McCabe/ The New York Times
  • Researchers briefed on TikTok’s proposal to continue operating in the U.S. said that a new subsidiary, TikTok U.S. Data Security Inc. (USDS), will house all of its U.S. content moderation under the governance of an independent board that will report to the U.S. government (CFIUS) — not to ByteDance. Plans also call for TikTok’s source code and content recommendation systems to be audited by Oracle and a third-party inspector. – David Ingram/ NBC News, Matt Perault, Samm Sacks/ Lawfare (commentary)

Other Stories

  • The messy business of operating in China caught up with Apple again as the company’s Safari web browser seems to have quietly adopted a Chinese government website block list. – Sam Biddle/ The Intercept
  • Google plans to sunset a pilot program that stopped political campaign emails from winding up in the spam folder as it seeks to dismiss a lawsuit from the Republican National Committee claiming that Gmail filters have political bias. – Isaac Stanley-Becker/ The Washington Post, Ashley Gold/ Axios
  • The Financial Times had a miserable experience attempting to run its own Mastodon instance, facing “compliance, security and reputational risks” in addition to cloud hosting costs and creep-factor issues, such as direct messages being visible to server admins by default. – Bryce Elder/ Financial Times 

Sports Corner

  • Did Alex receive a call from the San Francisco 49ers football team during their NFL playoff game this weekend? No, not for that cyber issue last year. Things get “Purdy” desperate when a team’s first four quarterbacks are injured. – Nick Wagoner/ ESPN

Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!

Transcript

Evelyn Douek:

Last week I got a very concerned email from someone at Stanford saying that we were breaching university funding rules by taking unauthorized funding from guttr, guttr.com. So yeah, unfortunately we can no longer have guttr.com taking away our pain.

Alex Stamos:

Can I do an advertising read for Sleepr? That’s S-L-E-E-P-R. Sleepr.

Evelyn Douek:

Please, yeah.

Alex Stamos:

Bringing waterbeds back. You thought waterbeds were stuck in the seventies? Well, now we’re bringing them back. Do you like destroying the home and the condo association below you? Then think about Sleepr. Sleepr, waterbeds for the 21st century. Sleepr.com/moderatedcontent.

Evelyn Douek:

That is the kind of entrepreneurial spirit. What do we do with all this extra water in the valley at the moment? Let’s do a waterbed startup. Genius.

Welcome to Moderated Content’s weekly news update from the world of trust and safety, with myself, Evelyn Douek, and Alex Stamos. Okay, so we have to return to the big story from last week, which was breaking when we recorded and has now happily hit the headlines, which is the story about the BBC documentary that was being taken down in India. The Indian government was ordering takedowns because the documentary was critical of Modi’s handling of some riots in the past. So one of the things that we talked about, and that got some traction, was that you were saying it was really surprising that some of these takedowns were global as opposed to geo-locked within India. And it turns out that part of the reason, at least, was that a number of the takedowns were due to copyright claims by the BBC and not necessarily Indian government orders.

So at least YouTube, Meta, and the Internet Archive removed the video because of requests from the BBC rather than from the Indian government, which, like, the BBC really could have mentioned earlier. We were reading all of these stories, and it said the BBC declined to comment, or the BBC said this was a really well-researched documentary. They really could have clarified. But this was a situation where they were benefiting from the Streisand effect of having this video taken down, and the Modi government was benefiting from looking super strong and having this video taken down. And so it finally [inaudible 00:02:08] took time for the platforms themselves to clarify that they had taken it down due to copyright claims. So, all a bit of a mess. Yeah. Was that a surprise to you, that that was how it was going down?

Alex Stamos:

I mean, I guess we shouldn’t be surprised. So again, it is a real story, because a number of these takedowns were based upon orders from the Modi government, but it is complicated by the fact that some of the source videos… So not the references to them, but the source video uploads to Facebook and YouTube seem to have been taken down via DMCA requests from the BBC. This is a very BBC thing: you have really well-researched, hardworking journalists who are trying to make a political impact in the world’s largest democracy, and then somewhere else in the building, there’s a gray-suited bureaucrat who has sat at that same desk for 35 years, just sending DMCA requests to every single thing that’s got BBC on it.

Evelyn Douek:

And he’s like, I’m really earning my paycheck today. Look at all these requests going out, nailing it.

Alex Stamos:

Which really does encapsulate something familiar to anybody who has ever tried to watch any UK TV, not just BBC but ITV too. For whatever reason, the UK wants the English Channel to be recreated for their video output. They do the most aggressive geo-locking of content. My wife wanted to watch a Harry and Meghan thing, and I swear to God it took all of my skills. I had to bounce off of a satellite into an AWS instance, into Tailscale. It was completely ridiculous. I felt like I was setting up a system to try to hide from the NSA while hacking the [inaudible 00:03:34].

Evelyn Douek:

Did she watch… I mean, our listeners are waiting on tenterhooks. Alex, did she get to watch Harry and Meghan?

Alex Stamos:

Yes.

Evelyn Douek:

Thank God.

Alex Stamos:

She watched it. And then after all that, after I used all my skills, I literally spent 90 minutes setting up this capability to try to get around the geo-block. She’s like, it wasn’t that good.

Evelyn Douek:

I’m so surprised.

Alex Stamos:

Great. Yeah, thank you, honey. But yes, so the BBC, they famously go out and just… Them and ITV and Channel 4 and a bunch of these other UK folks are really aggressive about DMCA requests. So it is unfortunate, because they’re making it easier for the Modi government… The other interesting thing here is that this demonstrates something we see over and over again, which is that there’s a number of governments that massively overstate their power online. I saw this firsthand with WhatsApp in Turkey, where the Turkish government was telling the media, Facebook is censoring on our behalf. And the media, just wanting to run with an anti-Facebook story, totally amplified it. It was completely bogus. But what they wanted to do was strike fear in the hearts of their political enemies, and so to say, we have the American tech companies on our side, we’re able to break encryption.

They’re talking about a backdoor in WhatsApp and breaking encryption and stuff. Again, total BS. But it was effective in getting the media to carry a story that then helped suppress opposition folks who then were afraid to use WhatsApp. And I think we’re going to see the same thing where the Modi government is both preening for the local crowds as well as trying to demonstrate power internally. But unlike Turkey probably needs to balance it decently with how they look as a democracy. And again, unlike the Turkish situation, this was a real situation where they were effective and especially on Twitter of getting content taken down.

Evelyn Douek:

So it is an important clarification, but this is not a nothing burger. This is still a big deal. You can go to the Lumen database and see the orders from the Indian government to Twitter saying, take down, extremely broadly, any references to these kinds of videos. But it was a situation where nobody had the correct incentives to tell the most accurate version of the story, because everyone was sort of inflating it and benefiting from it. And it accentuates as well that when videos get taken down, platforms don’t always explain why they’re doing it. And so we on the outside are at their mercy as to whether they decide to tell us why they have taken down these videos in particular. And so in this situation, because of the controversy, we found out that it was through copyright claims on these platforms, but we wouldn’t necessarily have known otherwise. So there’s a transparency issue there.

Alex Stamos:

I feel like Google’s actually the only one who’s reasonably good at that in search results. If they take down search results because of DMCA claim, they automatically uploaded it to Lumen and link to it, which makes it surprising that YouTube didn’t do that in this case. But anyway, it’s a kind of transparency that I wish we had more of.

Evelyn Douek:

Meanwhile, Twitter was reinstating a bunch of Hindu nationalist accounts in its great amnesty of 2023. And Musk’s comment on this whole thing was to chime in a few days after it all started with a reply to someone else’s tweet asking what’s going on, @elonmusk, saying, “Oh, the first I’ve heard. It’s just not possible for me to fix every aspect of Twitter worldwide overnight, while still running Tesla and SpaceX, among other things.” So he’s like, I’m busy, sorry about this really important repression in the world’s largest democracy.

Alex Stamos:

Right. Because it was all our choice for him to become the world’s biggest moderator of speech while also trying to run a rocket company and a car company. That was what all of us on the outside decided we thought was best. But remember, by popular acclaim, everybody was like, Elon, please, the best thing for Twitter is for you to fire the entire staff that thinks about these things and for you to make every decision by yourself based upon the input of Catturd2.

Evelyn Douek:

That’s right. Yeah.

Alex Stamos:

So I guess that’s on us. That’s on you and me, Evelyn, for forcing him to do that.

Evelyn Douek:

Heavy is the head that wears the crown. We are just grateful for the time that he does manage to put into this enterprise. We should actually move to our Twitter Corner now. So with that: all right, so yes, what to say, Alex? I think probably the big story of the week was another Twitter Files thread from Taibbi, about Hamilton 68. Maybe you could just give us a rundown of what this Twitter Files was about.

Alex Stamos:

So Taibbi has internal emails from inside Twitter criticizing a product called Hamilton 68, which is put out by the Alliance for Securing Democracy. It’s kind of a nonprofit that has been very critical of Russia and Russian disinformation. And what Hamilton 68 purported to be is kind of a dashboard of Russian bots, Russian accounts, and Russian content. And there’s this history of years and years of Hamilton 68 being used to just cite, this is a Russian bot and this is what the Russian bot is saying. Okay. So this is a Twitter Files thread that has a legitimate background. The criticism internal to Twitter effectively came from Yoel Roth. So we’re back to Yoel being a good guy. So just-

Evelyn Douek:

Hard to keep up, but got it. All right.

Alex Stamos:

Just to recap, Yoel tried to do what he could-

Evelyn Douek:

Not a pedophile.

Alex Stamos:

Not a pedophile. Well, we’ll see. Yeah, we’ll have to go check.

Evelyn Douek:

Not today. Right, okay.

Alex Stamos:

What Elon is saying about Yoel right now. Yoel stuck around, tried to do the best he could to teach Elon about content moderation and how difficult these issues are. Yoel left because he thought he couldn’t do it anymore, after which Elon retweeted some horrible things kind of implying that Yoel was a pedophile, based upon some totally reasonable stuff he wrote in a PhD dissertation about online child safety. And Yoel has been attacked and dragged through the mud, and he and his husband eventually had to actually sell their home and move. So this is a real big deal for him, being attacked by the world’s richest man and then having thousands and thousands of his fans sending death threats. But what we’ve seen through the Twitter Files is that if you actually look at Yoel’s internal correspondence, it’s very reasonable. Yoel both believes that the Russians are trying to manipulate the information sphere, just like every other major country.

Although Russia was really kind of at the forefront of a lot of these tactics. He also believes that some of the things that try to call out bots are BS. And in that case, he looked at Hamilton 68 and he had the same criticism that I and a number of other people have had. Which is, one, there’s a bunch of these external bot-o-meter, bot-detector systems, and they have way, way less, maybe 10, 15% of the data that is available to the internal teams at these companies. So they’re working at a huge disadvantage, in that they do not see the cookies people are using, they do not see device identifiers, they do not see IP addresses, they do not see phone numbers, they do not see email addresses, they don’t see SSL fingerprints, they don’t see all of the kinds of stuff…

GPS locations, fine GPS locations. They can’t see all the things companies do to instrument their apps to see if a human being is interacting with them, or if they’re being interacted with from a debug interface or an accessibility interface. All of that data is available to the companies, and that allows the companies to determine whether this looks like a real human or not, and allows them to cluster accounts together. So one of the big steps here is you say, these 500 accounts seem to be working together because they’re coming through one IP address, or they all have email addresses that are related, or something. And then that allows them to do attribution based upon, oh, well, these guys screwed up and we were able to get their GPS location. That’s an actual thing that happened during the Facebook work on this in early 2017, when we found that the Internet Research Agency was full of young people who love Instagram.

And so at the same time that they were doing their trolling, they would also post real stuff. And we were able to get real GPS locations, and we were then able to tie those physical locations to [inaudible 00:10:48], right? So that’s all stuff that’s only available inside the companies. You cannot get it externally. So if you’re trying to build a dashboard of, we’re detecting bots, you’re working off of very limited data: the time it was posted, a little bit of metadata that is sometimes available. You used to be able to see what the Twitter agent was that posted it; that’s not available anymore. There’s that kind of stuff, but it’s especially content, and that’s where these things fall down. If you take even real Russian accounts, and then you take the content output of those real Russian accounts and you train a classifier based on it, all you’re going to do is catch all of the kind of people who have the same kind of political views that the Russians are pretending to have.

And if you take the data set that was released by the companies in 2017, that is mostly going to be focused on pro-conservative topics, because the big focus of the Russians was to hurt Hillary Clinton in 2016. It was to support Donald Trump. That actually happened. So I just want to be totally clear: the Russians really did interfere in the 2016 election. But what happened is that people then took that information and that training set, and they tried to build machine learning classifiers in other systems, based upon activity from a relatively small amount of time focused just on the US election, to find Russian bots. And so what you end up doing with that is you catch anybody who has the same kind of language or talks about the same topics or links to the same URLs. And so the internal debate inside of Twitter was completely correct.

One of the things that was interesting was they were actually able to figure out the panel of bots that Hamilton 68 was using to train their classifiers, because they were able to pull the API key and see what the rest of us couldn’t see, which is what they were requesting via Twitter’s API. And so you have all this internal discussion at Twitter basically saying, this is BS. And that’s correct. We should not have those bot detectors. The Stanford Internet Observatory has never referenced one of those. There’s a number of academic papers out there where people have used bot detection systems to create an N equals 400 accounts. We’re going to have a bit of a replication crisis now in the disinformation space, and that’s a good thing, because this is not the way you should do this kind of science. There’s some really bad work along these lines that has, unfortunately, come out of Carnegie Mellon.

There’s some folks who did this work at IU, all of whom are going to be kind of on the hot seat a little bit on the academic side. So anyway, I think the criticism Hamilton 68 is correct. What’s not correct is to say, oh, well Russia did nothing, which is, but we continue to see from Taibbi is to take these internal documents and then frame it up to say that none of this ever happened. And that’s not true. There’s both an overreaction to disinformation in bots. That does not mean that there isn’t a fundamental problem that we have to reasonably deal with. And that continues to be.. It’s great for this reporting to come out as a very obvious political project. I continue to disagree with the goal with what the obvious goal is, which is to make it that people believe that nothing ever happened in 2016 and that the internet is a place that is free of government interference, which is completely not true.

Evelyn Douek:

Yeah, I think the other thing that they raise in these threads, where there is a greater truth, is how widely this was reported across the media. Hamilton 68 did get a lot of coverage, and I think it goes back to the thing we were talking about in the first story, actually, about incentive structures for people to get the most accurate story. And I think there was an incentive structure around that time for people to report massive findings about Russian bots.

And so this did get a lot of attention, and I think it really does say something going forward about reporters needing to be more careful in how they think about reporting on empirical findings that aren’t necessarily as rigorous or as careful as we would hope. But will this result in a more nuanced conversation? Let’s see. Unlikely. In a similar vein, Musk was on the Hill meeting with House leaders in the last week, and despite all of his calls for more transparency about interactions between government and platforms, I’m not confident we’re going to get lots of transparency about what all of those conversations were about. And meanwhile, the account amnesty continues, as we discussed. We’ve got white supremacists getting reinstated and then banned less than 24 hours later, because, oh, it turns out they do exactly the same thing they did before they were suspended.

Alex Stamos:

Wait, you don’t think these guys, while they’ve been off Twitter, they’ve become not white supremacists?

Evelyn Douek:

Yeah, exactly.

Alex Stamos:

They’ve been reading a letter from a Birmingham jail. They’ve really done a lot of internal self-

Evelyn Douek:

Self-reflection. Yeah, exactly. Turns out it was a reasonable hypothesis. Turns out, no. But in totally unrelated news, Twitter is also being sued in Germany over hate speech. This actually is a good example, another example, of incentive structures in reporting. It was widely reported that Twitter’s getting sued in Germany for hate speech. It’s a good headline right now. But digging into it a little bit, having a look at it, it doesn’t seem like a very strong case. It seems like mostly PR.

This is not based on Germany’s famous NetzDG legislation. This is a contractual claim saying, Twitter says in its terms of service that it’s going to take down accounts for breaching these rules, and it hasn’t done that. And it named six accounts that were flagged and not taken down. I would be very… I mean, I haven’t read Twitter’s terms of service in German, but I would be very surprised if Twitter’s German lawyers left themselves susceptible to a claim that it’s a breach of contract if they don’t take down absolutely every account that violates the rules. So I’m not sure that this is much of a story, other than a hilarious headline in this particular moment: that Twitter’s being sued.

Alex Stamos:

Which is another kind of just media… And not just around tech, but this standard media story, which is: anybody can sue anybody for anything. You got 50 bucks and a pro se filing? Let’s rock and roll. And then the media can report it as, so-and-so sued. And then they only cite from the complaint, because there’s no response from the other side, there’s been no rulings. And you see this all the time, that you don’t have a filtering of what is a legitimate claim or not, what actually has legs, and so on.

Evelyn Douek:

Exactly. A good way to get a headline: file a lawsuit and set up a snazzy website, apparently. And then that’ll get you in the news. Okay, so another big story happening at the moment, breaking news this morning, is that TikTok CEO Shou Chew will be testifying before Congress for the first time on March 23rd. I am devastated that TikTok’s CEO is going to appear before Congress before Susan Wojcicki. My Susan-Wojcicki-to-the-Hill campaign of 2022 has just been a repeated failure. I do not know how she does it, but anyway, gender equity if nothing else, get her up there. But I mean, it does make sense. TikTok is getting a lot of heat and a lot of attention, and in response it’s going on a bit of a charm offensive and trying to explain itself rather than being defensive. And one of the things that it did in the last week was give briefings to various reporters and academics about Project Texas, which is its plan to make its platform more secure. So Alex, can you talk a little bit about what the briefings said?

Alex Stamos:

Yeah, so I did not get the… I was offered this briefing and I turned it down, because I didn’t really like the rules around it. But the output of the briefing has now been published. My ask to TikTok is: if this is a serious thing, then you guys should write a white paper and publish it, and we can all freely discuss that, instead of giving these briefings with notes but not slides. Legitimate commentators are not going to want to have any kind of restrictions on what they say about this. Okay, so what is Project Texas? Project Texas is TikTok trying to create a US subsidiary that ameliorates all of the legitimate concerns that people have had about a Chinese company being dominant in the internet space. That’s both access to data as well as the possible manipulation of the platform: what people see, from an algorithmic perspective, or perhaps from a censorship perspective.

All of those things have turned out to be true. As we’ve discussed on this podcast, TikTok seems to have looked at the worst-case scenarios of what people thought about American tech companies in 2018, 2019, and then thought, that’s a good idea. They’ve straight up been spying on journalists. It’s come out that they have the ability to promote stuff algorithmically, and there has been censorship of stuff that is critical of the Chinese Communist Party. So, legitimate concerns. How do you fix that while allowing TikTok to stay part of ByteDance, this big Chinese company? And what it looks like they’re doing is effectively borrowing a model that already exists in the US government context for defense contractors. So we have this system where, if you are a foreign company and you want to sell something to the US government on the classified side, you would generally have a problem, because you would not be able to get US security clearances for your executives who are doing that work.

And so what happens is that defense companies, generally in US allies, right, the UK or Israel or Germany, that want to sell to the US, create a US subsidiary and sign this thing called a special security agreement, an agreement between the subsidiary and the parent company where the subsidiary is controlled by a board of directors who are American citizens. In the case of a defense contractor, they all have clearances, and there are rules about what can be told to them by their parent company; there are restrictions on that in the agreement. And those people are allowed to say publicly, I’ve been ordered to do something bad. They have a fiduciary responsibility to the parent company, but they also have kind of a security responsibility to the US. So a great example of this is BAE Systems, this huge conglomerate defense contractor in the UK that sells a bunch of stuff to the US government.

They have BAE Systems Inc., a Delaware corporation that is owned such that the money flows back to the UK, but the secrets do not. Now, obviously, that’s a lot easier when it’s the UK or Australia or Germany or Israel; these are US allies and you’re less worried. When you’re talking about China, that’s complicated. The odds of being able to come to an agreement where you’re comfortable with members of the Chinese Communist Party doing the oversight in some ways could be tough, but it is a model that has existed. The idea here is that you’d then have Oracle, of all people, and we’ll talk about that in a second, hosting the servers and doing a bunch of things around the security of data: making sure that data does not flow outside the US, doing code reviews of code that gets pushed down. One of the complications for TikTok is that TikTok is a straight-up branch of Douyin, the Chinese version.

All of the code comes from Beijing. And this has been a challenge for TikTok employees in the US: if they want anything done, if they want anything fixed, they basically have to file a ticket with Beijing engineering teams. So how do you make it secure in that case? In theory, you could have a model where the code flows down from Beijing and there’s a code review and such, but at the speed at which these companies operate, that is going to be an incredibly difficult thing. And part of it is that Oracle is then going to be looking at the recommendation algorithms and such to make sure they’re not biased, which is already a huge challenge. Nobody really knows how recommendation algorithms work; making them explainable to humans is really hard. And TikTok is probably training these algorithms on petabytes and petabytes of video data.

And so how to do that kind of audit would be hard for anybody. The fact that it’s Oracle is one of those things… Like all of this stuff, Oracle’s involvement here started under the Trump administration, and it was clearly just incredibly corrupt insider dealing. Safra Catz, the CEO of Oracle, was on the Trump transition team and is a huge Trump donor and Trumper. And it was clear that the Trump administration was just throwing a ton of money at Oracle. But for whatever reason, the Biden administration has allowed Oracle to stay in the driver’s seat of this entire deal, which is totally insane to me, because if I think of a company that is not at all qualified to do this, it’s Oracle. Oracle has some of the worst information security practices of any large enterprise vendor. Their products are incredibly insecure.

Oracle’s one of the riskiest companies ever. If you look at the importance of their products and how bad their product security is, the idea that they could do this work in a way that [inaudible 00:22:14] should be happy is just completely ridiculous. Of any of the companies, Amazon or Microsoft or Google or any of the major cloud providers would be way better prepared than Oracle. It was clear under the Trump administration, it was just graft, it was just corruption. The fact that the Biden administration’s allowing Oracle to stay in the driver’s seat is crazy. And it once again demonstrates that Oracle is really a company of lawyers and lobbyists that have a couple engineers on the side that maintain their products, but they mostly make their money by suing folks and having really good salespeople. And that seems to be what their strategy is here too.

Evelyn Douek:

Yeah, as you say, the devil’s going to be in the detail on this, and there’s still a lot of detail lacking. I was reading these announcements trying to work out exactly what was going to happen around content moderation, because there are the data privacy problems, but there are also the content moderation concerns. As you say, Oracle’s going to be auditing TikTok’s source code and the recommendation algorithms, but of course that’s not the only place in which bias can be introduced into content moderation. It can also be through takedowns, or heating, as we talked about last week, in terms of pushing out certain videos, or after the fact, [inaudible 00:23:14] anti measures in terms of letting certain things stay up or taking certain things down. And it’s unclear to me from these announcements how Project Texas is going to operate with respect to that.

This could be a real shame, because I think content moderation auditing and third-party inspections are really… This is going to be a growth space. The DSA, Europe’s Digital Services Act, does have all these provisions about third-party audits. This is going to be something that we see a lot more of, where companies are doing these kinds of things as trust-building, quality-assurance mechanisms. I would really like to see that happen in an open, transparent way, with people who actually have the expertise, rather than this being captured by existing dominant players who are well connected, or whatever the case may be. So yeah, I again have no idea how this is going to pan out, but this doesn’t seem like a recipe for success.

Alex Stamos:

No. And one of the reasons I don’t think it’s going to work out is, again, I don’t think this structure is going to work in a situation where the final beneficial owner of TikTok is a Chinese company. Again, these special security arrangements work when you’re talking about US allies, when nobody really thinks that the British government is trying to spy on the US Department of Defense, at least at this level, so that you have effectively honest actors on the other side of the agreement. Nobody is going to believe that in this situation. So I think the continued ownership is going to be a problem.

If they’re going to do a breakaway, they’re probably going to have to spin out TikTok as a subsidiary that has some significant US ownership and then maybe ByteDance still gets some beneficial ownership, but without any kind of control. But as long as they’re over 51%, I think it’s highly unlikely that [inaudible 00:24:56] and especially Congress is going to be happy because the other thing that’s going on right now is members of Congress are going back and forth. Democrats are trying to reach young people on TikTok. So you have Adam Schiff doing a TikTok and then getting massively attacked by Tom Cotton. Whatever happens with Project Texas, those kinds of political motivations are going to continue to exist.

Evelyn Douek:

And the bills are still on the Hill to ban the platform entirely. We’ll see whether politicians say, let’s just push that off until after 2024, because I have some campaigning to do on the platform right before then. All right.

Alex Stamos:

On which you and I agree. I hope it doesn’t get to that point, because I do think it’s a very illiberal thing for us to ban American citizens from freely choosing to use a product, even if it is Chinese.

Evelyn Douek:

I mean, I just can’t see it being consistent with the First Amendment to such an over broad measure to crack down on so much creativity, totally protected speech. It just does not seem to me to be consistent with that. But that does not mean, as we know that Congress might not try. Moving to a headline that I thought, oh, Alex is definitely going to have thoughts about this. This is a story in the Intercept this week. Apple brings mainland Chinese web censorship to Hong Kong. This is a story about expanding its Safari web browser safe mode that it uses in China now to cover Hong Kong. Alex let it rip.

Alex Stamos:

Apple has done more for the Chinese Communist Party than any other major American tech company has done for any other totalitarian state.

Evelyn Douek:

Drink, listeners. There it is. Yeah.

Alex Stamos:

Yeah. Unfortunately. Yeah, I have the Russian national anthem lined up here. The Chinese anthem is not really… At least the Soviet anthem is kind of like a… It’s a banger, as the kids say.

Evelyn Douek:

I’m not going to ask why you have that.

Alex Stamos:

Oh yeah, it’s for a different podcast. But yeah, so I mean, I guess we’re just going to need some kind of other sound effect for this. It’s just a sad one. It’s just Apple has… They’re clearly trying to separate themselves a little bit from their Chinese supply chain and to create alternative supply chains. But for now, the PRC has complete and total leverage over Apple. And one of the sad things has happened over the last couple of years is Hong Kong, the special administrative status that Hong Kong enjoyed since the exit of the British, has been whittled away, whittled away.

It is sad to now see US tech companies help with the integration of Hong Kong into the PRC’s surveillance and censorship regime. So it’s just another sad point from a geopolitical perspective, and also another reason why the Taiwanese are not super happy about the idea of going in the direction of Hong Kong. And so it is interesting to see the PRC continue down this path. There’s been some other news this week with people talking about possible conflict between China and Taiwan, and this is the kind of thing, bit by bit, that makes it more likely that you’re going to have radical separatists and pro-republic folks being elected in Taiwan, possibly in 2024.

Evelyn Douek:

Very sad. And just a small update on a story that we’ve talked about a couple of times on this show, which is the spam filter program Google launched to allow political campaigns to sign up to route directly into your inbox. The RNC, which has been slamming Google for bias in its spam filter, never signed up. And following the FEC ruling last week that Google’s spam filter wasn’t biased, Google said in a filing this week in the US District Court for the Eastern District of California, in response to the RNC’s lawsuit against Google for bias, that it’s now closing that program. I guess Google has said, we’re done playing games. This is clearly a totally legitimate commercial measure, and we’re not going to keep trying to hand it to you here. So that’s good news. And then finally, in the content moderation is hard segment.

This is one of my favorite stories of the week. So the Financial Times’ finance and markets blog had set up a Mastodon server a little while ago, thinking it would be really fun. And they’ve now closed it, credit to them, announcing the move in an article with the headline, “We tried to run a social media site and it was awful.” They write, “A few months ago, Financial Times Alphaville thought it might be fun to host a Mastodon server. Boy, were we wrong.” And they talk through why: basically, content moderation and running social media is a dumpster fire of problems and headaches, and in particular, compliance, security and reputational risks are substantial and ever-growing.

They talk about the changing regulatory landscape, the fact that social media executives may have criminal liability under the Online Safety Bill being pushed through Parliament in the UK at the moment, and how it was basically just not worth the hassle. There’s ambiguity around whether Mastodon server owners [inaudible 00:29:29] responsibility for their users’ defamation, to which the answer should be no, but you never know when you’re leaving it up to judges. So it turns out content moderation is hard, and the Financial Times doesn’t want anything to do with it. Which leads me to ask you, Alex: how’s your Mastodon server going?

Alex Stamos:

It’s going okay, partially because I only have a couple dozen active users, so I run cybervillains.com as a test bed. One for security. There’s a number of security people on there that have been testing it and trying to find bugs and stuff. And I’ve found a couple of little things, but also to get the experience of what it is like to run a distributed server. And we have some students at SIO who are now looking into this and looking into the legal obligations. And FT is right, if you’re any reasonably sized organization, you go to your legal team, you’re like, hey, can we take on all of the content moderation responsibilities and legal responsibilities of Facebook, Twitter, and Google? The moment at which all those responsibilities are possibly exploding, both in Europe and in the US, due to all the Supreme Court cases, they would say You’re completely insane.

And so I think, one, it does demonstrate that it’s hard. Two, it once again demonstrates that a bunch of these laws are probably incompatible with small companies or individuals running social media. This is a side effect of the DSA, it is a side effect of GDPR, and it will be a massive side effect of Gonzalez or any of these cases, which is: if you create all this liability, nobody smaller than a trillion-dollar corporation is going to want to take it on or could self-insure against it. And so it is something that policymakers need to consider when they write these laws: whether they’re going to create a space for smaller, non-professional operators. And I think people like mastodon.social and the rest, where you have hundreds of thousands or a low number of millions of users, are in a really challenging place, because the tools available for them to do content moderation at scale are minimal compared to what exists inside the commercial companies.

Evelyn Douek:

Yeah, I think that’s a really important point to, we often talk about these laws and these measures as sticking it to the big platforms and trying to get at Facebook and YouTube or whatever it is. And actually what you see is massive entrenchment because those platforms have legal teams that are bigger than some small countries and they’re probably going to survive. But anyone that’s trying to set up a platform to challenge them is going to find it much more difficult. So the anti-competitive effects of legislation and regulation, they’re really important to remember, but those voices very often aren’t in the room or those kinds of platforms aren’t really in the minds of policymakers, unfortunately. Okay. I’m told that we have to bring back our sports section for the week, Alex, because John Perrino, my head of the Stanford Internet Observatory Policy Analyst tells me that I need to ask you about a certain football game that happened and how you’re holding up this morning.

Alex Stamos:

Oh, John.

Evelyn Douek:

Yeah.

Alex Stamos:

Yes. Unfortunately, the 49ers lost in a very bad way, with lots of really dumb mistakes. They not only gave up a couple of really bad turnovers, but they kind of lost control and ended up, reasonably, getting flagged by the refs at very critical moments. Yes, very sad. The 49ers have been eliminated. They will not be in the Super Bowl, and so the Super Bowl is dead to me.

Evelyn Douek:

I’m told to ask you whether you were getting a call to serve as the next quarterback on the team. At some point, were you warming up, getting ready at a certain point last night?

Alex Stamos:

Yeah, exactly. I’m like the ninth or 10th. There’s a couple people ahead of me, but yeah, so those of you who have been following American football, the 49ers keep on their quarterbacks have become like the drummer of Spinal Tap, somebody who continuously injured or spontaneous combustion. And yes, they had to put in more and more backups. For a while, backup Brock has been doing extremely well. The last drafted man in the NFL draft has had a breakout year, but then he himself got injured. So yes. And that has been… Volunteering to be the quarterback for the 49ers is just a little less risky, than being the number two for Al Qaeda. It is becoming the same kind of position.

Evelyn Douek:

Well, thank you, John. I really enjoyed seeing Alex squirm; the pain on his face was visible. And with that, that has been your… Yes?

Alex Stamos:

Sorry. Oh, and one other thing I just want to plug: I’m going to be on a different podcast this week, if you’re interested, Dune Pod, with Jason Goldman of the Obama administration. We’ll be talking about Hackers, the 1990s teenage-hacker classic, on Dune Pod. It’s going to be a good listen.

Evelyn Douek:

Actually, that’s a good reminder if we’re log rolling. I’ve been putting out a podcast with the Knight First Amendment Institute at Columbia University called Views on First. Much less exciting than Hackers, but it is about what happens when the First Amendment collides with social media platforms and some of the really big cases that are coming down the pipe.

Alex Stamos:

Well, that’s great because this demonstrates I am a fake professor, and so I could be on a podcast talking about nineties cult movies. And you being a real professor have to do talk about the First Amendment with the Knight Institute. Boring.

Evelyn Douek:

Yeah. Well, I had fun. There’s twists and turns, listeners. Twists and turns. All right. That has been your Moderated Content weekly update. This show is available in all the usual places, including Apple Podcasts and Spotify. Show notes are available at law.stanford.edu/moderatedcontent, and you can find transcripts there as well. This episode would not have been possible, or anywhere near as much fun, without the research and editorial assistance of John Perrino, policy analyst extraordinaire of the Stanford Internet Observatory.

Alex Stamos:

Who’s about to get fired. So goodbye, John. Thank you.

Evelyn Douek:

It’s been fun working with you and it is also produced by the wonderful Brian Pelletier. Special thanks to Alyssa Ashdown, Justin Boo, and Rob Huffman. See you next week.