MC Weekly Update 5/6: Good luck, Linda!

Alex and Evelyn cover a number of stories you’d definitely want to hear on your first day as the CEO of Twitter: Twitter has been failing to remove dozens of known images of child sexual abuse, as revealed by an investigation by the Stanford Internet Observatory; ad revenue is way down; Senators are concerned the company cannot comply with its obligations under the FTC consent decree; this is not helped by the fact it recently lost two high-ranking trust and safety employees… and more! Good luck, Linda! Meanwhile, YouTube is no longer enforcing its 2020 election misinformation policy; Instagram has reinstated RFK Jr’s account; TikTok has been sharing user data on an internal messaging tool accessible by ByteDance employees; and the Surgeon General issued an Advisory on Social Media and Youth Mental Health.

Show Notes

Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:

Twitter Corner

  • Meanwhile, YouTube announced it will stop enforcing its 2020 election misinformation policy. Good thing there are no big events coming up in the next year where the amount and importance of such claims is likely to increase! – Sara Fischer / Axios; YouTube
  • Instagram lifted its account suspension for Robert F. Kennedy Jr. on Sunday, saying it was a mistake not to reinstate him after he launched a presidential campaign in April. – Cristiano Lima / The Washington Post
    • Kennedy’s account was previously suspended for repeatedly sharing debunked claims about vaccines and COVID-19. His nonprofit, the Children’s Health Defense, is still suspended from the platform.  

Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!

Transcript

Evelyn Douek:

I didn’t know that all you had to do was announce that you are running for president in order to get a basically community-standards-immune Instagram account. I think this is a big plus. Big news. Do you even have to be eligible, or can just anyone announce that they’re running?

Alex Stamos:

I think you should test that, Evelyn. I think you should test that. It is a long shot that, as an Australian, you’d be allowed to become President of the United States. But that you yourself are now running for president and therefore should be verified, should have free ad access. I think you should get lots of lift and be recommended. Let’s go get you on the debate stage.

Evelyn Douek:

Sounds good.

Welcome to Moderated Content’s weekly, slightly random and not at all comprehensive news update from the world of trust and safety, with myself, Evelyn Douek, and Alex Stamos. Today is a big day, Alex, because I read this morning that it is the first day for Linda Yaccarino, Twitter’s new CEO. Congratulations, Linda. Welcome aboard.

Alex Stamos:

Linda, this hit is going out to you.

Evelyn Douek:

It’s the sound you want to hear as you come through the door. And what a day to be your first day as CEO. A bunch of stuff here that you would really, really want walking in the door. Where we’re going to start is with a story that you feature prominently in, Alex. This is, for all the right reasons this time, a story about a study that you’ve done with the Stanford Internet Observatory, co-authored with David Thiel and Renee DiResta, about a network of accounts advertising self-generated sexual abuse material, and findings that you made in the course of doing that report about Twitter. Do you want to tell us what you found?

Alex Stamos:

Yeah, this is a big investigation into a phenomenon that is really creating a lot of CSAM. We will be releasing that report on Wednesday. There will be some more media coverage. I don’t want to preview that too much. And just to be clear, we looked at lots of platforms. One of our big areas of focus is the fact that this is a cross-platform issue. And because it’s a cross-platform issue, you have lots of problems that are very difficult for one platform to solve on their own. Although we have already seen a bunch of fixes put in place since we notified companies. Anyway, we can talk about that more next week, and hopefully we can have David and Renee, my fantastic co-authors on this, on the show. But, one of the interesting things that we found while doing this work goes back to something we built a while ago. David is our chief technologist at SIO.

We worked together on a variety of issues, including child safety at Facebook, and he built a scanner for all of the content that SIO is pulling in from a bunch of different places. If an image comes in, we now have a scanner that checks to see whether it is known child sexual abuse material. We did that because, one, we’re doing child safety investigations. But, also just because if you’re looking at a bunch of different platforms, especially these alt platforms, the ones that don’t have trust and safety teams, the ones that aren’t doing anything, even if you’re looking at a political topic, you might run into CSAM. When Stanford fires me, I want it to be for some cool academic freedom issue. I don’t want it to be because some 19-year-old ran into a picture of a child being abused because they were looking at a hashtag and some real jerk polluted that hashtag with real CSAM.

We scan everything as it comes in. If we get a hit on this detection, and this is the exact same scanning that the big platforms do, we use the exact same code, we automatically notify NCMEC, and the metadata gets encrypted and put in a place that only David and I could possibly get to. Students are never exposed. We send it to NCMEC so it can be handled by law enforcement. While we are not, I think, required to do this under 2258A, which is the law that affects electronic communication service providers, we hold ourselves to that standard. While we’re doing this investigation, which includes looking at stuff on Twitter, what we start to see is a bunch of hits in our system where CSAM is being discovered. This ranged from the somewhat milder stuff with teenagers to the absolute worst, what you’d call a one, which is the worst stuff that involves little kids, and it’s really, really bad.
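To make the mechanics concrete, here is a minimal sketch in Python of scanning incoming images against a list of known-bad hashes. It is illustrative only, not SIO’s actual pipeline: real systems use perceptual hashes such as PhotoDNA, which is not publicly available, and report matches to NCMEC through its CyberTipline process, while this sketch uses a plain SHA-256 digest and a hypothetical local hash set.

```python
# Minimal, illustrative hash-list scanning. NOT a real CSAM-detection system:
# production pipelines use perceptual hashing (e.g. PhotoDNA) from vetted hash
# banks and file reports with NCMEC; analysts never view matched content.
import hashlib
from pathlib import Path


def load_hash_list(path: str) -> set[str]:
    """Load a newline-delimited file of known-bad image hashes (hypothetical format)."""
    return {line.strip().lower() for line in Path(path).read_text().splitlines() if line.strip()}


def is_known_bad(image_bytes: bytes, known_hashes: set[str]) -> bool:
    """Return True if the image's digest matches an entry in the hash list."""
    return hashlib.sha256(image_bytes).hexdigest() in known_hashes


def handle_incoming(image_bytes: bytes, source_url: str, known_hashes: set[str]) -> None:
    if is_known_bad(image_bytes, known_hashes):
        # In a real pipeline: encrypt the metadata, restrict access to a tiny
        # set of people, and report to NCMEC so law enforcement can handle it.
        print(f"HIT: {source_url} matches a known hash; escalate and report.")
    # Otherwise the item continues into normal research processing.


if __name__ == "__main__":
    # Demo with an inline hash set; a real deployment would load authorized
    # hash lists (e.g. via load_hash_list) from the feeds it subscribes to.
    demo_hashes = {hashlib.sha256(b"example known-bad bytes").hexdigest()}
    handle_incoming(b"example known-bad bytes", "https://example.com/img.jpg", demo_hashes)
```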

We’re not going to talk about the details of what that means. This is known stuff. Content that has been seen, that has been hashed, that is coming out of the hash banks of NCMEC or the Tech Coalition or one of the different groups whose hashes we subscribe to, that we’re getting hits on. This is just super basic. It demonstrated a real regression at Twitter, that clearly something broke there, because Twitter has been part of the Tech Coalition, which is the group that works on child safety between tech companies, I think since pretty much its beginning. They’ve done a lot of work in this space, but it looks like a ton of those folks have been fired. There’s only a handful left and basic systems are breaking. This was not intentional on Twitter’s part, but when you fire all your engineers and you fire your investigators who do this work, then you’re going to end up having these breakages.

That’s what we found. We tried to notify Twitter. We didn’t hear back. We notified NCMEC, of course. We eventually got a contact at Twitter and we were able to brief the handful of people who still work there on child safety and after a while they were able to fix it. But, I can’t guarantee to you that’s still fixed because unfortunately the other thing that happened last week is that Twitter cut off our access, as they’re going through and cutting off all academic groups who do any cooperation.

We have gone from working side-by-side with them on Chinese interference, China lying about COVID, Russian infiltration into sub-Saharan Africa, on child safety issues, on suicide, on self-harm. We’ve done all this work with Twitter side-by-side, trying to help them make the platform better and to have them help us with our research. And now they have officially cut us off, and lots of other academic institutions. This is the last hurrah of something that we’re not going to be able to fix, at least for the foreseeable future, or until, say, the Digital Services Act brings back this access, and it’s not clear yet whether that’s going to happen.

Evelyn Douek:

Just to underline it here, this is really the bare minimum that you can do around child safety as a tech platform. Musk comes in and says this is his highest priority, that child abuse is the one thing that he really is going to get behind in cleaning up the platform. This is not complicated stuff that you were noticing was breaking. This was the basic first level of protection, right?

Alex Stamos:

This is the bare minimum. This is known stuff for which you’re given a hash, and they were allowing it to be posted publicly. We don’t have any access to private data. This is all public stuff on Twitter. We just had the ability to use certain search terms. Now, we found dozens and dozens of hits, but that was only for specific search terms. There might have been hundreds or thousands or tens of thousands of examples of this being posted across the platform. Yes, this is just the basic stuff and they’re failing the basic stuff. Which once again demonstrates that, I think, a lot of his child safety alignment is not a real alignment. It is part of him signaling to QAnon and the super right-wing crowd who have adopted child safety as a theme even though they’ve done no actual work in that space.

We have yet to hear from the, quote unquote, child safety experts who have been big Musk backers, who have no actual history in this space, who I’ve never seen at the Crimes Against Children Conference, who I’ve never seen at a Tech Coalition conference, who I’ve never seen write a paper on child safety or testify in a trial. All the things that my colleagues and I have done, none of them have commented so far on the fact that Twitter has just basically failed in this. I feel bad. There’s a handful of good people who are still trying to keep the lights on at Twitter. But, it’s clearly becoming harder and harder as they lose their resources to be able to fix this kind of stuff.

Evelyn Douek:

There’s good coverage of this in the Wall Street Journal, which we’ll link to in the show notes. It’s exactly what you’d want on your first day as CEO of a new company: a big headline in the Wall Street Journal showing how there are basic systems failing within your company to protect against the most basic and horrific forms of child sexual abuse.

Alex Stamos:

But Linda, if you’re listening, Linda, go find those people, the handful of people who work in child safety still and go meet with them and ask them what they need to make sure this doesn’t happen again. Maybe ask them, do they think it’s smart to cut off the research groups that they’ve cooperated with in the past to have helped them with these issues? Or would you rather find out when the FBI comes knocking? That’s your other option.

Evelyn Douek:

Right. Another headline that greeted Linda as she walked in the door today is a great headline in the New York Times about how Twitter’s ad sales are plummeting. They’re down 59% from the year before. I’m sure that’s totally unrelated to the other headline that we were just discussing and to how advertisers feel about being on the platform. A nice little tidbit in this New York Times story was that one of the things they’re promoting at Twitter to try and woo advertisers back is a set of tools called adjacency controls. These are basically brand safety tools to keep ads away from tweets containing specific words. The New York Times reported that some advertisers are using these tools to keep their content away from Mr. Musk’s tweets in particular. That’s exactly what you want: the face of your company being what advertisers are trying to avoid on your platform.
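To make the adjacency-control idea concrete, here is a minimal sketch in Python of keyword-based brand safety filtering. It is purely illustrative, not Twitter’s actual implementation; the function name, blocklist terms, and matching logic are all assumptions for the example.

```python
# Illustrative keyword-based "adjacency control": an advertiser supplies a
# blocklist of terms, and the ad server skips placements next to tweets that
# contain any of them. Real systems are far more sophisticated than substring
# matching, but the basic idea is the same.
from typing import Iterable


def is_safe_adjacency(tweet_text: str, blocked_terms: Iterable[str]) -> bool:
    """Return False if the tweet contains any advertiser-blocked term."""
    text = tweet_text.lower()
    return not any(term.lower() in text for term in blocked_terms)


# Example: a hypothetical advertiser avoiding placement next to certain topics.
blocked = ["layoffs", "crypto scam"]
print(is_safe_adjacency("Big news about layoffs today", blocked))   # False
print(is_safe_adjacency("Check out our new sneakers", blocked))     # True
```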

Another thing that would’ve greeted Linda as she walked through the door today was a letter from four senators asking if Twitter still had enough people to run the company in legally compliant ways. Senators Warren, Wyden, Markey, and Hirono wrote to ask if Twitter was still able to comply with the terms of its FTC consent decree and the privacy and data security obligations that it has under that. They were citing two recent high-profile trust and safety resignations as the reason why they’re concerned, which, fair enough, because in the last week Twitter has lost its head of trust and safety, Ella Irwin, and its head of brand safety and advertising quality, AJ Brown. Two significant departures. And when you’ve lost Ella Irwin, Alex, I don’t even know what to say. I don’t know what to make of this story. What do you think?

Alex Stamos:

It looks like a lot of this comes out of a really bizarre thing that happened last week, which is a blowup about this film called What Is A Woman? As any of our listeners will know, a lot of the recent culture wars have been about trans issues, and a right-wing provocateur has made a movie, called What Is A Woman?, that he thinks is a lot better made and funnier than it clearly is. I’ve watched it. It’s insulting. It’s out of place. It’s stupid. It’s got some not so great stuff in it, but it’s also what you expect in a big pluralistic democracy of people arguing back and forth, of people having stupid movies and stuff. But, there’s this weird back and forth where Twitter apparently promised them a high billing, a special deal to distribute this movie on Twitter, and then they revoked that and then lowered its distribution.

This became a huge cause célèbre on the right wing. Elon initially backed the decision to not super-promote this film; it was still up, but it was harder to find and certainly didn’t get top billing. Then he reversed himself. I think whatever you think of the substance of the decisions around this film, it’s pretty clear that Irwin ran into the exact same problem that Yoel Roth did, which is that whatever decision Musk makes, he does not feel bound to it if he gets enough criticism, at least from the right. He never does anything because of something coming from the left. But, he’s not going to let anybody be to the right of him, and if he gets enough criticism, he will reverse himself. It’s just impossible to run a trust and safety team if whatever decision the CEO makes, they’re willing to reverse because they got enough tweets at them, from Catturd or whomever.

I’m not totally sure exactly why she quit, but it does seem that either he asked her to leave or she decided to leave because of a disagreement over this. They’ve had a bunch of anti-trans stuff. She has personally approved that. I don’t think she was standing up on principle here, but maybe she was kind of like, “Hey, you can’t change your mind between 9:00 AM and 10:00 AM, blame me for enforcing your rules, and then get angry that I was enforcing the rules that you signed off on.” Good luck, Linda. He’s not the CEO anymore, but he is going to be the chairman and I think he said he’s the CTO. Clearly, he’s going to be involved in all of these content moderation decisions. Whatever you think the appropriate decision is, sometimes you just have to make one and stick with it.

If you’re going to have a situation where the CEO decides they’re going to whiffle-waffle back and forth, that’s not going to work. I think the overall controversy ended up with the movie being seen way more than it otherwise would have been. This has turned into a Hunter Biden situation where we will hear howls of complaints about the several hours during which this thing was downranked, and yet it probably got 10 times the viewership because of the Streisand effect, because a lot of people were being pushed to it. Which again, people can have their own opinion on the movie.

Whatever you think of it though, I do think it’s a kind of thing that we should expect to exist and we should allow to exist, but it’s also appropriate for Twitter to say, “We’re not going to give this thing top billing. We’re not going to allow this thing to be massively promoted because we disagree with its contents even if we don’t completely take it down.” That kind of disruption of reach while allowing the underlying speech, as Elon has said, ripping off our colleague Renee, I think is an appropriate decision. It’s too bad that Musk doesn’t have the ability to stick to anything that he decides.

Evelyn Douek:

This controversy was totally predictable. The policy that the video was supposedly violating, or was said to be violating, was that it misgendered people, I think, twice in the movie. Twitter used to have an explicit protection against misgendering in its hateful conduct policy. It got rid of it about a month ago. We talked about it, and Ella Irwin, in fact, came on the record and said, “No, no, no, we just got rid of the wording. But the policy still applies.”

Obviously, that was probably an untenable position because the wording went missing for some reason. The reason became apparent in the last week. It does seem a strange hill to die on given all of the things that have happened over the last six months that Ella Irwin has been in charge of trust and safety. But still, I guess it’s an untenable position to be in when you’re the face of making all of these decisions and the person standing behind you is constantly changing their mind, and you are the one having to try and make that look rational and like a policy-based decision. Yes, good luck, Linda. That’s your job now, I guess.

Alex Stamos:

Good luck. Good luck, Linda. Brand safety. Brand safety. On the FTC side, like you mentioned the FTC thing, they’ve lost Ella. They lost their head of brand safety this week. This follows their chief security officer, chief compliance officer, and chief privacy officer all quitting, none of whom I believe have been replaced. I’m not exactly sure who’s there anymore making any of these decisions. Certainly nobody is going to be taking a head-of role where they could be pushed forward by Musk as the front person for these decisions. I think the people who are there are all keeping their heads down and allowing him to be the face of whatever Twitter decides to do.

Evelyn Douek:

The one thing that everyone always talks about with these decisions, with these difficult decisions, is transparency. And it’s the one thing, again, that Musk has said he’s really behind and really thinks is important. We don’t know who’s there. We don’t know who’s working on these decisions. We don’t know who’s responsible for these decisions except, of course, Musk. We now don’t really know what the policy is. It seems to be changing on a whim. And you mentioned them cutting off API access. Alex, it’s even worse than you suggested. It’s not only that academic researchers, such as yourself, can’t track what’s going on on the platform on an ongoing basis, and so, for example, you won’t be able to research these systems breaking around child sexual abuse going forward.

In quite an extraordinary demand, and I don’t think I’ve ever seen anything like this, Twitter is sending letters to researchers that have acquired data through that API over the course of their past research and demanding that they delete all of that data within 30 days unless they pay the new rate of access to the API, which is a ridiculous $42,000 a month. Completely beyond the capacity of academic researchers. All of these academic researchers are in this place where they’re being told to delete the data upon which years of research is based. The only rationale I can think of is some fear of transparency. I can’t even really understand what’s going on here.

Alex Stamos:

Talking about the Streisand effect. Nothing says “we have confidence in our decisions” like threatening to sue academics because they have data. They collected public data. You’ve got to remember, this is not private data. This is nothing secret. This is not a Cambridge Analytica situation where anybody abused APIs to get access to data that was not meant to be public. This is public data that people have, with Twitter’s permission, effectively used APIs to scrape: a huge amount of the public discussion. If we’re going to have a public discussion, there are people who are going to study it for a variety of reasons. People who study how people interact with each other. People who study how teenagers talk about things. There are people who study suicide and self-harm: when people have mental health issues, how do they interact online?

All of this research requires being able to see all this public discussion and being able to analyze it. There are a variety of different ethical frameworks around how you protect the privacy of individuals. And I think any good academic researcher will be very careful to use the data in aggregate without pointing out individual people and the speech, even public speech, of individuals. But, there’s not a legitimate privacy argument here. There’s not a legitimate transparency argument. This is just straight up: we want money, and if we don’t get the money, then you’re not going to be allowed to criticize us. Which is exactly what you and I talked about. That this was, I think, the likely as well as one of the worse scenarios for Twitter: them continuing to be the centerpiece of… Twitter used to be a centerpiece of all political discussion in the United States, and now lots of people on the left and more of the Democratic and moderate side have left.

And now it’s become more of a hard-right platform. But, it is becoming the most important hard-right platform. It is definitely passing up Truth Social and Gettr and Rumble and other platforms among people in that space as the place where they gather. The idea that you can have these big public discussions that have real effects on our politics, that perhaps are being manipulated by outside forces, and nobody’s allowed to talk about that and look at it and say, “This is what I see going on,” is just completely ridiculous and not compatible with any of the free speech ideals that Musk himself has talked about.

It is becoming a dark day. None of these things are going to stop the people who are scraping for ad purposes or to violate people’s privacy, or the defense contractors who sell stuff to the DOD. They don’t care. It’s only going to be legitimate researchers, who have to go through IRBs, who have to get approval on their privacy practices and such, who are going to care about legal threats. You talk about the $42,000; realistically, the tier that you have to be at if you’re doing any of this research is the $200,000-a-month tier, which is obviously just completely out of bounds for any academic research group.

Evelyn Douek:

You mentioned the Digital Services Act as a potential tool that might force Twitter to keep its data more open to researchers. It’s yet to be seen what’s going to come of that. There is certainly going to be a showdown between Twitter and DSA compliance. In the past week or so, it pulled out of the European Union Code of Practice on Disinformation, which isn’t a surprise given that it had laid off a bunch of its Brussels trust and safety workers and also hadn’t been filing its compulsory transparency reports under that code. The code was a weird thing. I have a bunch of hesitations about the code anyway. It was a notionally voluntary set of commitments by the industry, commitments that have serious free speech implications given that they’re about disinformation.

I don’t really need to necessarily spell that out. But on August 25, a bunch of legal obligations come into force under the Digital Services Act, and compliance isn’t going to be voluntary under that. The fact that Twitter is pulling out of these obligations suggests that compliance is going to be messy. Thierry Breton, the EU commissioner responsible, or, as he’s calling himself, the chief enforcer of this law, said in response to Musk, “You can run but you can’t hide.” They’re going to be doing stress tests of Twitter’s ability to comply in the next few months as well. Again, if you’re walking through the door as Twitter’s new CEO, you’ve got these stress tests coming up in a month and you’ve just lost your two top trust and safety officials. That is not necessarily the position that I would want to be in. As the title of this episode will now be: good luck, Linda. That’s where we’re at.

Alex Stamos:

Vaya con dios.

Evelyn Douek:

Okay, so to give Linda a break, let’s look at some mayhem happening on some of the other platforms at the moment. YouTube this week announced that it will stop enforcing its 2020 election misinformation policy and will stop removing content that advances false claims that widespread fraud, errors, or glitches occurred in the 2020 and other past US elections. Good thing there are no big events coming up in the next year or so where the amount and importance of such claims is likely to dramatically increase.

If you’re at all nervous, though, don’t worry. YouTube said that it carefully deliberated this change, although it didn’t provide any examples of what factors it considered in weighing this decision. I don’t really know what to say. I think there are important conversations to have about policies once they get put in place: continually reviewing them and making sure that they’re still current, and also making sure that resources are being put towards actually policing the most damaging and important claims and problems on services. But, this is just a weird change to make at a particularly weird time. I don’t know what would’ve been driving YouTube’s decision here.

Alex Stamos:

There’s no justification for it. It’s not relevant anymore. As of 2023, there are no elections on the time horizon where it’s almost guaranteed that people will say it was stolen and then try to spread that via YouTube. Unfortunate call. I don’t know what to say other than this is not great timing and it is going to once again make YouTube the real centerpiece of this kind of abusive action. They’re basically saying, “We want to be the number one video platform here.” As much as you wanted to get Susan up on the Hill, it does seem with their leadership change that perhaps they’re going to be backing off. It also, I think, demonstrates that the attempt to frame any policies around election disinformation, meaning factually untrue claims about the election, as partisan has been effective.

A number of platforms had those policies because they saw those policies in 2020 as being nonpartisan. That it was completely appropriate for you to say you can’t lie about who you are. You can’t lie about the actual functioning of the election, where the voting is, when the voting is. All that kind of stuff. And you can’t make claims that are factually untrue about the outcome of the election. That that was not seen as a partisan thing. And in the last several years, what’s happened is that has been turned into a partisan thing because you still have a significant portion of, to be frank, the Republican Party, that believes the election was stolen. This is going to become a bipartisan thing. When we were doing our election work in 2022, one of our expectations was that Republicans were going to do much better and you would end up with probably house members who lost, all of a sudden coming up with excuses that the loss was driven by some kind of manipulation of the vote.

There continues to be a subgroup of leftists and Democrats who believe that in the 2020 election Joe Biden legitimately won, but that a bunch of Republican victories were actually fake and were due to really specific rigging of the election. They believe this with all their heart. For them to do this because they feel like it’s taking a partisan side is, I think, obviously a mistake on many levels. But, it’s also a mistake to say that this is only going to be a one-sided thing. If we normalize as a country that the losers get to fight for years saying that the election was stolen from them, then that is going to be a really bad outcome, and every loser’s going to do it, because there’s not enough pushback on the Democratic side to stop a loser in the Democratic Party from trying to maintain that.

One, you have a lot of ego, but also people build these supporter networks, so they don’t want to give up. And one of the ways to keep that network of supporters behind you is to make them feel aggrieved, like things were stolen from you. We had a little bit of that in the 2020 primaries with Bernie Sanders people saying the whole thing was rigged by the DNC and stuff. We have some history of Russia actually leaning into that, of the Russians supporting Bernie’s claims. It’s quite possible that in the 2024 cycle this will become more of a bipartisan thing. And then YouTube’s going to look back and say, “Oh, great, now we’re hosting all this content from a bunch of different people, from a bunch of different political parties, saying that the election’s been stolen.”

Evelyn Douek:

This election is just looking to be an absolute mess across basically all of the platforms. Over to Instagram, which on Sunday lifted its suspension against the account of Robert F. Kennedy Jr. “An outspoken anti-vaccine activist” is how The Washington Post, rather complimentarily, nicely described him.

Alex Stamos:

Outspoken.

Evelyn Douek:

And so outspoken, in fact, that Kennedy, RFK, lost his account in 2021 for repeatedly sharing debunked claims about COVID and vaccines. But, he announced he’s running for president as a long-shot Democratic challenger to Biden, and Instagram has reinstated his account and all 700,000-plus of his followers. I didn’t know that all you had to do was announce that you are running for president in order to get a basically community-standards-immune Instagram account. I think this is a big plus. Big news. Do you even have to be eligible? Or can just anyone announce that they’re running?

Alex Stamos:

I think you should test that, Evelyn. I think you should test that. It is a long shot that, as an Australian, you’d be allowed to become President of the United States. But, that you yourself are now running for president and therefore should be verified, should have free ad access. I think you should get lots of lift and be recommended. Let’s go get you on the debate stage.

Evelyn Douek:

Sounds good. Let’s see what outrageous things I can post before they finally kick me off. It’s an amazing decision. We’ve always known that.

Alex Stamos:

I would say the odds of the Constitution being amended to allow you to be President are about the same as the odds of RFK Jr becoming president. You’re not talking about being that far off, that many decimal points of a percentage.

Evelyn Douek:

I don’t know if you saw, though: he got the much-sought-after and very valuable Jack Dorsey endorsement over the weekend. That’s something I don’t have in my back pocket. He does have that on me.

Alex Stamos:

And effectively an Elon endorsement through this Twitter space.

Evelyn Douek:

Right, exactly. Because, not to be outdone on outrageousness, yes, Instagram reinstated RFK’s account, but Musk, by the time you listen to this Moderated Content episode, will have hosted a Twitter Spaces with RFK Jr. Because he just wanted to make sure. He thought Linda’s first day on the job was too boring, with not enough going on. And so why not also host this Twitter Spaces, where I’m sure nothing at all problematic will be said.

Alex Stamos:

On this policy, you and I talked about this before. There’s a legitimate problem here for platforms, which is that you don’t want to get involved in a political race. In a democracy, you do not want to be the ones deciding what speech is appropriate from people who are legitimately running. But, the problem is that we’re so early in the cycle, there’s no such thing as an official measurement here. Even the parties themselves and the different committees that put together the debates and stuff have to come up with these weird artificial standards.

You have X donors. You have Y people who support you in different states and such. But, we’re too early for even that. Effectively, anybody who has enough Twitter followers and says they’re running for President gets to call themselves that. I don’t understand how Instagram is making this determination. I think it’s nuts. I think it’s nuts because, effectively, they’re saying they’re going to highly motivate grifters. There’s already plenty of grifters who run for President knowing that they’re not going to win. But now there’s a whole other motivation, which is that you get out of Instagram jail as long as you just declare. You get to both raise money for your super PAC, become a kingmaker, get free TV, and get Instagram to un-ban you. It’s a pretty good deal.

Evelyn Douek:

It also just highlights how little we got out of the three-year-long process with the Meta oversight board around Donald Trump’s account. Where Meta suspended it. It went to the oversight board. They gave back the decision with barely any guidelines on how Meta should be making these decisions. And then Meta reinstated Trump’s account, which we talked about on this podcast, with barely any guidelines on how they had made their decision to reinstate the account and how they were going to be making these decisions going forward. We went through all of that process and here we are, still with these completely opaque decisions made without any explanation or understanding of where the line is, or how they decide when a candidate goes over the line. When they reinstated Trump’s account, they made all of these comments about how they were instituting guardrails on the account to contain the damage that might happen from harmful content that a known repeat offender might post.

There’s been no comment on whether RFK Jr is going to have any of those restrictions applied to his account either. We went through all of that process, all of the money and attention spent on setting up this oversight board process, and here we are, still none the wiser about how this decision-making took place. Over to TikTok. In the last couple of weeks, there have been a few stories, Alex, that you’ve been quoted in, around more evidence of poor data security practices and sharing user data on internal messaging tools and things like that. Tell us what’s been going on and what it means.

Alex Stamos:

There have been stories that the data of individual users keeps on getting copied and pasted into the internal discussion forums that are used inside of TikTok. This is a common problem for tech companies. We had this problem all the time at Facebook. There are all kinds of rules that personal information cannot be shared on these discussion forums, can’t be put into documents, can’t be put in chat and such. Part of the problem this creates for companies is that, for GDPR and a bunch of different privacy regimes, you need to effectively know where your PII is, so you can track it, you can secure it, and, if somebody requests for it to be deleted, you can delete it. If people are just copying and pasting it into, effectively, Word docs or Google Docs or chat forums all over the place, you lose track of it. It’s not shocking that this problem exists. It is a common problem.
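To illustrate the kind of tooling involved, here is a minimal sketch in Python of regex-based PII detection of the sort data-loss-prevention systems bolt onto internal chat tools. It is purely illustrative, not TikTok’s or ByteDance’s actual tooling; the pattern set and function names are assumptions, and real DLP systems use far richer detectors.

```python
# Illustrative PII scan for internal chat messages. Real data-loss-prevention
# tools combine many detectors (names, account IDs, document classifiers);
# this sketch only shows the basic pattern-matching idea.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_phone": re.compile(r"\b\d{3}[-. ]\d{3}[-. ]\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def find_pii(message: str) -> dict[str, list[str]]:
    """Return any PII-looking substrings found in a chat message, keyed by type."""
    hits: dict[str, list[str]] = {}
    for name, pattern in PII_PATTERNS.items():
        found = pattern.findall(message)
        if found:
            hits[name] = found
    return hits


msg = "User complaint: reach her at jane.doe@example.com or 415-555-0199"
hits = find_pii(msg)
if hits:
    # In practice the message would be blocked or redacted and the event
    # logged, so the company still knows where its PII lives, which is what
    # regimes like GDPR effectively require.
    print("PII detected:", hits)
```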

The more shocking part is that this discussion forum is hosted by ByteDance in China. There’s a little issue here about the actual data, but the overall thrust, and the lesson we can learn from this, is that TikTok is a thin layer on top of ByteDance. Most of TikTok’s operations, their engineering, their technical operations, a ton of their work, is done by ByteDance; TikTok is a brand with its own employees, but a ton of this stuff is ByteDance on the backend. And no matter how much TikTok keeps on repeating Project Texas, Project Texas, that has not changed yet. We’re ending up in this situation where the gap between the promises TikTok is making around the security of PII against employees in China and the actual implementation is a huge risk to them, because they have overpromised and they have underdelivered so far. They do say, “Hey, we’re still working on this.” But, how long are you going to say we’re still working on this after you’ve had your CEO go and promise Congress that it’s going to get fixed?

He was extremely careful in his language. But, certainly the overall thrust of his testimony, and all the other things TikTok says, is not, “Hey, most of our stuff is still ByteDance.” What they’re trying to make people believe is that there are tiny little things that are still connected to China. And the truth is that a huge chunk of the company is still running on Chinese infrastructure, and therefore if you’re worried about the People’s Liberation Army and the Ministry of State Security, as we should be, then the security of that data from those kinds of actors is not guaranteed at all.

Evelyn Douek:

The last story that we should talk about, because it happened in the past few weeks and it was one of the biggest stories around social media, was the release of the Surgeon General’s advisory on social media and youth mental health. Now, our audience will certainly have heard all about this by now, so we don’t need to recap the report or its findings. They would’ve seen all of the headlines in the media that the Surgeon General warned of a profound risk of harm to mental health. I’m curious to get your thoughts on this one, Alex, particularly as someone who has worked inside these social media companies, but also as a father of teenage kids. How do you think about this?

Alex Stamos:

I agree that I think social media has been really bad for teenage mental health. There are upsides. But, I think the downsides probably outweigh the upsides from what I have personally seen. Now, this is much less of my area of expertise. We just held, a couple of months ago, an event on child safety at Stanford in which we tried to bring together people from the different areas. And it turns out there’s two totally different universes of people working on child health and child safety online. There’s the pediatricians and psychologists and such who study what is the effect of this on kids’ brains and how kids interact with each other. A lot of this has to do with the bad stuff that you and I dealt with as teenagers. It’s just amplified if you’re doing it on chat all night until 1:00 AM. People have been bullied and girls have had body image issues for a long period of time, but all this stuff gets really amplified by the internet.

And then there’s the adversarial part in which adults are actually doing bad things. I’ve worked much more on the adversarial part. That’s our Twitter report and our work this week is all about the adversarial side. And it’s fascinating to get these folks together because they know very little about each other’s worlds. And I learned all these things from, we had these pediatricians from Stanford Hospital, from Lucile Packard Children’s Hospital.

We had folks in the med school from the psychology department. It was just fascinating to hear about the kinds of things the Surgeon General was talking about here, which is that there is real, good, peer-reviewed evidence that social media overall is bad for teenagers. And I certainly see that with my kids. Again, it’s the social things. It’s not that social media is zapping their brains directly. But, the way kids interact with each other and the heightening of drama, the fact that something could go effectively viral very, very quickly, much faster than it could when you and I were taking the phone down the hallway. Did you have the huge cord so that you could take the family phone into your room?

Evelyn Douek:

We were up to wireless phones by the time I was using them.

Alex Stamos:

You’re younger.

Evelyn Douek:

Yes.

Alex Stamos:

For me, it was, I literally went to Radio Shack and I bought a 40-foot RJ11 cord so I could get it all the way to my room. Back in those days, the speed at which kids could be crap to each other or could push each other or bully each other or talk about these kinds of things was much reduced compared to what’s going on right now. Obviously, kids are being exposed to this huge universe of all of this content, some of which, again, is good and some of which is not so good. A lot of it is not so good. I totally agree with the overall thrust here. The problem is the practical implications are really tough. And what we keep on seeing is proposals for child safety bills that either, one, completely destroy the privacy of everybody, not just children, because you have things like everybody has to show a government ID to use the internet.

That’s how the People’s Republic of China solves this problem. You just have to show ID and you’re able to get a SIM card and to get a login. But, that’s not really compatible with how Western democracies try to do things. Or, two, you have laws that are like FOSTA/SESTA, where you create all these new liabilities. And there’s a really good piece in Techdirt that I think we should link to, where he talks about how, for one of these proposed laws, you already have the Heritage Foundation and some other right-wing groups basically saying, “If these laws pass, we’ll use them to wipe out any kind of pro-LGBT content online.” You can agree with what the Surgeon General is saying as a general issue, but the proposals we hear both out of the Surgeon General’s office and especially out of Congress, I think, are always shortsighted and never consider the second- and third-order effects of how those things will be abused by bad actors.

Evelyn Douek:

Yeah, I really appreciate that, because I’ve been super frustrated by this conversation, which, like everything, seems to have devolved into two camps. One says social media is the worst thing in the world, that it’s going straight into our kids’ brains and corrupting them, that it’s a dangerous, toxic substance and we need to ban it entirely. The other says, no, it’s really good, it’s connecting LGBT youth all around the country, which is also good, but that camp is really ignoring the real risks that do exist. I think you’re right. We need to find a way to be conscious and attentive to these risks without jumping to these pretty Draconian responses. I’m worried about this, and I’m certainly not envious of parents who have to wrestle with these issues. But, a lot of the policy responses that come out, including the Surgeon General’s report, are pretty worrying.

The Surgeon General’s report points to a strong history of taking a safety-first approach to things like toys, cars, and medication. And there’s a big difference between toys, cars, and medication and a social media platform. You don’t need to be a First Amendment professor to know that there’s no constitutional right to play with toys, and there is a constitutional right to speak and to listen. And that’s not merely an academic point. This is not just a point about what some old dudes wrote down hundreds of years ago.

That’s because we protect freedom of speech because of its value, but also because we really worry about what happens when we call on the government to adjudicate what is, quote unquote, harmful speech. If you are asking the government, a judge, what is harmful to children, yes, absolutely, you’re going to see governments that find pro-LGBT content, content about birth control, all sorts of things, harmful. You don’t need to use your imagination very hard to work out where that leads. It’s a pretty dark and scary place. We need to find a way to respond to these risks that doesn’t result in that kind of outcome.

Alex Stamos:

Yeah, I think the other issue here is we’re not talking to teenagers. If you talk to teenagers about this, they will say, “Yeah, sometimes using Instagram I feel really sad, but also I can connect up with my friends, and these are the positive things.” I don’t think we listen enough, especially to the older teenagers, who have, I think, the ability to have some self-awareness here about which parts of social media and online environments are negative versus positive, and which situations are positive. The real problem here is we’ve never really had a good push, from a research perspective and a design perspective, around designing surfaces for teenagers that are specifically good for facilitating positive interactions between young people. It’s just never been a real design goal of any of these products. I don’t know how you encourage that without creating this crazy, nebulous legal liability that then could be abused by all these bad actors for anything they see that they don’t think is, quote unquote, kid safe, and therefore try to repress it not just from kids, but from any adult who uses the internet.

Evelyn Douek:

Any parting words for Linda this week, Alex, as we wrap up?

Alex Stamos:

I’m going to be honest. Linda, if you want to chat, we would love to talk. It’s my last name @stanford.edu. Happy to talk about the fantastic work Twitter did around the early days of child safety and building things like the Tech Coalition. We’d love to talk to you about how Twitter was a leader in trying to get rid of fake accounts and bots and networks of folks who are trying to manipulate the platform, and how perhaps there’s been a little bit of backsliding in those areas. If you want a little bit of the history of how this stuff can actually work out, I’d be happy to give that to you, because the faster Twitter understands that these are areas that are pretty critical, the better, rather than having a platform that’s completely overrun by the 50 Cent Army of the People’s Republic of China.

Just flooding it with pro-Chinese messages every time you say something that’s critical of China. We just had the anniversary of Tiananmen Square this weekend. People should go look at any discussion involving the anniversary of Tiananmen Square. It’s flooded with pro-Chinese messages in a way that was not true on any other June 4th in the last 10 years. That is a direct result of Twitter getting rid of the teams that go and find and shut down this kind of organized manipulation. My email is open, Linda. DMs are open. Drop me a line and we’ll talk about brand safety.

Evelyn Douek:

I’d be a little surprised, to be honest, if on her first day she found 40-something minutes to listen to this podcast. Maybe only 15, given that you can listen to it at two to three times speed. But, it wouldn’t actually be the worst thing that she could do today to learn a bit about the platform that she is now at the helm of. And so with that, this has been your Moderated Content weekly update. This show is available in all the usual places, including Apple Podcasts and Spotify. Show notes are available at law.stanford.edu/moderatedcontent. This episode wouldn’t be possible without the research and editorial assistance of John Perrino, policy analyst at the Stanford Internet Observatory, and it is produced by the wonderful Brian Pelletier. Special thanks also to Justin Fu and Rob Hop.