MC Weekly Update 10/23: The Enemies of Progress

Alex and Evelyn discuss the “Techno-Optimist Manifesto” posted by Marc Andreessen this week and whether one can love technology and also think about risk management at the same time. Tricky! They then discuss the ongoing challenges of moderating during war, the Supreme Court’s cert grant in the jawboning case out of the Fifth Circuit, and Threads’ position on news.

Show Notes

Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:

  • Marc Andreessen, co-founder of the venture capital firm Andreessen Horowitz and of the Netscape web browser, wrote a lengthy blog post with an ode to technology. He also manages to declare trust and safety “the enemy” in a rambling screed of more than 5,000 words. – Dan Primack/ Axios, Marc Andreessen/ Andreessen Horowitz 
    • Have you “properly glorified” technology today?

Moderating the War

Legal Corner

  • Threads is still working out what it wants to be and says its suppression of search terms on controversial news topics is temporary. – Sarah Perez/ TechCrunch 

Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance. Like what you heard? Don’t forget to subscribe and share the podcast with friends!

Transcript

Evelyn Douek:

To John Galt, techno-optimist sage, you led us into the modern age. With boundless dreams and reason’s might, you lit the path to futures bright. Innovation’s patron we revere, your legacy of progress clear. A beacon in the darkest night, you championed reason’s guiding light. And welcome to Moderated Content’s weekly, slightly random and not at all comprehensive news update from the world of trust and safety with myself, Evelyn Douek, and Alex Stamos.

Today, obviously, we opened with our obligatory ode to a patron saint of techno-optimism, written thankfully by ChatGPT. John Galt is, of course, the hero of Ayn Rand’s Atlas Shrugged, and this week he was declared a patron saint by Marc Andreessen, the venture capitalist co-founder of Andreessen Horowitz and of Netscape, who wrote a 5,000-word blog post, screed.

 

Alex Stamos:

Screed.

 

Evelyn Douek:

A last-minute piece of homework. I don’t know quite what this was, an ode ostensibly to technology, but really, I think, to unbridled capitalism. So I thought it might be useful for us to talk about it because, I mean, there’s been plenty of discussion about it, but there is a portion of this blog post where I think we might be, or at least are affiliated with, what Andreessen declares is the enemy. The enemy. We need your voice on this one, Alex.

 

Alex Stamos:

The enemy. The enemies of progress. That’s what we’ll have to rename the podcast. Welcome to The Enemies of Progress.

 

Evelyn Douek:

That’s so much more catchy. That’s us, hating on progress right here. Yeah, so Andreessen says we have enemies. Our enemies are not bad people, but rather bad ideas. Our present society has been subjected to a mass demoralization campaign for six decades, against technology, against life, under varying names like existential risk, ESG, social responsibility, and, this is where we come in, trust and safety.

 

Alex Stamos:

The enemies of progress, trust and safety.

 

Evelyn Douek:

Trust and safety edition. That’s right. All right, I mean, the post overall is not the most optimistic post, frankly. It reads like….

 

Alex Stamos:

Of all the blank optimist posts, it’s definitely more pessimistic than you would expect. Yes.

 

Evelyn Douek:

Lot of downers in this post about how very scary the future is. But I guess, I mean, I’ve got a really hard question for you, Alex. Take your time. Can you believe that technology is good while also believing in trust and safety?

 

Alex Stamos:

Gosh, that’s an interesting question. Okay, so I’m not here to praise Marc Andreessen, but to bury him. Okay, look, there are things I actually like in this, right? And one, he is a venture capitalist, so they’re supposed to be optimists, and the entire venture capital model is you invest money in a bunch of stuff that’s probably going to fail. And then every once in a while something gets huge, and that pays off your LPs, and that pays for the house in Wyoming and the private jet and all of the accoutrements that Marc Andreessen takes advantage of.

So you have to be a techno-optimist to a certain extent to be a venture capitalist. And there are some things in here that I agree with. I do think he is somewhat responding to the mass media discussion of AI, for example. He has a lot of stuff in here about AI, and it’s true, there are a bunch of complaints and worries about AI that are disconnected from empirical reality.

There are a bunch of people in the tech world who really focus on long-term risk to humans from AI, stuff that’s really just science fiction, and they ignore the kinds of things you might actually want to deal with these days. And I think he is right to effectively make a moral case for growth. He makes the point, and he’s not the person who invented this, a number of people talk about it, that energy use per human is the measure of progress.

And I see part of this as a response to some of the environmental folks who believe the solution to global warming and every other problem is basically for human beings to get poorer on net, and for rich people to get a lot poorer, but overall for us to stop or plateau. He’s basically pointing out that that is unfair to all the people who have not benefited, the billions of people on the planet who have not benefited from the economic growth of the last 50, 60, 100 years, or basically everything that’s happened since the Industrial Revolution.

It is completely reasonable for peasants in India to want to be able to use as many kilowatt-hours per day as I do. And you can be optimistic that there are futures in which that can happen without the entire planet melting. It’s interesting because global warming is a backdrop to a bunch of his complaints there, but he doesn’t specifically say global warming’s fake or whatever.

He just says, “We have fission right now and we’re working on fusion. We’re investing in fusion, and there are options here where we can have a great amount of energy use while not degrowthing the planet.” Okay, so I agree with all that. The part that you read, about trust and safety being the enemies of progress, is where he starts to go completely off the rails. I mean, he has some other things. He cites some people here who are pretty problematic people, so I don’t want anybody listening to this and saying I’m endorsing the whole thing.

I just think that there are some fundamental arguments we can have about techno-optimism, but his problem with trust and safety is effectively that he ignores the fact that every technological advance will either have side effects that are negative or will be abused by bad guys. We live in an adversarial world. We live in a world where there are people whose entire job it is to rip other people off, or who are politically motivated or personally motivated to cause harm to other folks.

And so if you have any technological advance, they’re going to utilize it. And so for any technological advance, a key part of deploying it is thinking adversarially and trying to fix those problems, which is exactly what’s happening with AI right now. And he’s basically flipping out because, unlike other technological revolutions, the people who are at the forefront of AI, especially OpenAI, Google, Meta, all of those companies have teams that are red-teaming the AI, who are thinking through downsides.

OpenAI wrote this whole paper on the misuse of OpenAI for certain purposes, and that is a good thing. It’s a good thing for society. It’s a good thing for capitalism. Because that’s the other thing this screed really highlights: Marc Andreessen has not had a real job for decades. Marc Andreessen has not had to be out there actually getting shit done in the tech industry for a long period of time.

Because if he was actually out there talking to CIOs of humongous companies that are deploying AI, he’d realize there are a bunch of benefits to AI, but they also have to know how this thing is going to be abused to hurt them. If I am going to put AI out there to go talk to my customers and get rid of a bunch of customer service reps, and that saves me a bunch of money as the CIO and makes me look great to the CEO, I also have to make sure that that system does not get manipulated to steal data from us, or let people ship themselves product because they convinced the AI there’s some good reason for it.

Most AI trust and safety is not about ESG. It is about making these tools… These are effectively non-deterministic systems from the standpoint of a large corporation, and you’re trying to make them things that large corporations can utilize. It just shows how disconnected he is from that reality. He’s got to get off Sand Hill Road and out of the private jet and go talk to people who are actually deploying AI in large enterprises, because this kind of risk management is something that companies do.

Just saying risk management is the enemy of progress, that’s just amazing given the history of accounting and CFOs and security folks. It’s just nuts, because this man has been on the board of Facebook, and he knows. He was on the audit committee. I would brief him personally, and I would be like, “In the last quarter we’ve dealt with the Iranians, we’ve dealt with the Ministry of State Security.

We’ve dealt with every single Russian agency, GRU, SVR, FSB, a couple other acronyms you’ve never heard of. All those people are trying to break in, all of them are trying to misuse our platform. This is what we’re doing to fix it.” And at no time did Marc say, “Alex, by trying to oppose the Ministry of State Security’s attacks, you are the enemy of progress.” Right? That’s just ridiculous.

 

Evelyn Douek:

He was falling down on the job then, clearly.

 

Alex Stamos:

He was on the audit committee. His job was risk management on behalf of the shareholders. It’s almost just a political position to get him fans. He can’t actually believe this, based upon the man’s background. I don’t know. If he does, then he’s got a real problem. He has turned the corner away from consensus reality in a way that’s really disturbing for a guy who controls so much deal flow on Sand Hill Road.

 

Evelyn Douek:

Yeah. I mean, this last point that you’re making is, I think, super important in how the argument fails on its own terms, right? The lesson that Elon Musk is learning the hard way is that the number one driver of trust and safety, the reason why it was created in the first place, was capitalism. It was brand management. It was a business proposition for the companies to protect their product and make it usable for users and a nice place for people to visit.

And in many ways, this is why people have been really concerned about it and raising free expression concerns: because the capitalist drives of trust and safety aren’t always aligned with the social good. This is something that the free expression scholars talk a lot about, the ways in which responding to public pressures and responding to business incentives doesn’t adequately protect the public interest and free speech. And so I think it’s just totally missing the actual driver here of what’s going on.

 

Alex Stamos:

It’s the opposite of the Bill Gates Trustworthy Computing memo, which was a big deal for Gates about 21 or 22 years ago.

 

Evelyn Douek:

This one’s going over my head, but I’m going to nod knowledgeably on this one.

 

Alex Stamos:

Bill Gates wrote this famous memo towards the end of his time running Microsoft, basically after a bunch of these security scandals where Microsoft was getting slaughtered for Nimda and Code Red and a bunch of security bugs that caused these big worms. And their enterprise customers were basically like, “Hey, we can’t buy your products if they’re not reliable.” And so he wrote this Trustworthy Computing memo and really pivoted Microsoft, and that had a huge effect on the rest of the industry.

But part of his memo is basically like, “Why are people going to buy our stuff if they can’t trust it?” Because he recognized we are living in an adversarial world. Microsoft could not, at the time, ship SQL Server without assuming how people were going to attack that system to try to make money. And this is the opposite, where he just pretends that you can ship technology and not think adversarially, not have people whose entire job it is to protect it from bad guys.

The people who protect systems from abuse are the enemy themselves. It’s completely, totally wacko. And again, on the whole risk management thing, the hilarious thing here is you scroll to the bottom of the blog as it’s posted on a16z, and there are three paragraphs in italics of disclaimers. The views expressed here are those of the individual AH Capital Management personnel quoted and are not the views of a16z or its affiliates. Content is provided for informational purposes only. It should not be relied upon for legal, business, investment, or tax advice. This is risk management. This is the lawyer.

 

Evelyn Douek:

Well, that guy hates progress.

 

Alex Stamos:

If he really believes this, he should remove all of these legal things. They should fire their general counsel. They should not have anybody. Andreessen Horowitz moves billions of dollars around. They have a huge team auditing that to do risk management, to make sure that they’re handling funds… They should fire all those people. That’s all inappropriate. Those people are all enemies of progress, unnecessary friction.

 

Evelyn Douek:

Unnecessary friction.

 

Alex Stamos:

Every employee should just be allowed to write checks from Marc Andreessen’s personal bank account, because that is the way we’ll get the most progress. He clearly doesn’t believe this. It’s just like when you hear some of these guys railing against the elites from their chateau on top of a mountain in Aspen: “Oh my God, the elites are destroying the world.” Everything he has done is completely orthogonal to what he’s saying here. And so it’s just interesting to look at it as an artifact, because it can’t actually represent anything he truly believes.

 

Evelyn Douek:

Right. Certainly a pretty clear statement about the culture and the culture wars that are going on.

 

Alex Stamos:

The other thing you have to read this against is that they got their butts kicked on Web3. Andreessen Horowitz went all in on blockchain as the fundamental new technology of the future. And a number of people, including myself, pointed out, hey, not every problem society is facing is the Byzantine generals’ problem. And things like, we’re going to build smart contract systems that assume human beings can write perfect code to represent those contracts, that was a dumb idea.

When I pointed that out to his partner Chris Dixon, I was blocked on Twitter, and Andreessen blocked me, for just saying this is a bad idea. Well, it turns out billions of dollars have been lost in that space since I wrote those tweets, and they have never, ever admitted any kind of fault. And so that’s the other way you have to read this: people told them they were wrong, that they had to be more careful about these things, that they were investing in grifters, or investing in technologies that would enable grifters and harmful things.

That turned out to be true, and they’ve probably lost hundreds of millions of dollars in LP money. We won’t know until they have to mark to market, but that’s not for years, the way these funds work. But they’ve lost all this money and they’re just moving on. You also have to read this against the backdrop of them getting the last set of decisions wrong, and he’s apparently in such an echo chamber that he can’t see that. Hopefully, Ben Horowitz… I’ve always found… I worked for both these guys at LoudCloud.

I’ve always found Horowitz to be a much more thoughtful person. I can’t imagine he actually believes this kind of stuff. My hope is that Andreessen’s just out there on a dream quest, taking peyote and reading Ayn Rand and then having GPT-4 write stuff like this for him, while Ben Horowitz is actually making sure that their money’s deployed in a smart way with reasonable risk management.

 

Evelyn Douek:

This is not GPT-4. This is like GPT-3, maximum. Editors are apparently also enemies of progress, by the looks of this.

 

Alex Stamos:

Editors who reduce our words.

 

Evelyn Douek:

Yeah, exactly. Unnecessary friction. Okay, so speaking of trust and safety as risk management and not necessarily well-aligned with the public good or free expression, we should…

 

Alex Stamos:

Wow, look at that pivot. Amazing. Pivoting like Magic Johnson right there.

 

Evelyn Douek:

That’s it. We have to, of course, turn to the ongoing story of the tragedy and war continuing to unfold in Israel and Gaza, which is, as we say every week, also a content moderation story, and that is the aspect we are talking about here. But before we dive in, I would just like to note that several top Republicans have said they support revoking the visas of people they’ve found to say outrageous things, like criticizing the killing of civilians in Gaza.

And so, as a person here on a visa, I’d just like to say that I absolutely love this country’s strong commitment to First Amendment freedoms. What does it say here? Oh yes, #IStandWithIsrael is what my notes say I’m supposed to be saying. Honestly, the free expression contours of this crisis are just profound and really surprising to me. And part of that, of course, is playing out online.

So last week we talked about the challenges of moderating terrorist content for platforms, and a thing that we mentioned in passing, and I want to spend a little bit more time on today, is the error costs of aggressive moderation in moments like this, and who bears the costs of those errors. We should start with the proposition that content moderation is, at the best of times, impossible, and these are not the best of times. There’s a flood of content, and a lot more of it is violating content, which stretches platforms to absolute capacity and makes errors especially likely.

And so we have seen over the past few weeks stories cropping up about how these inevitable errors are unfolding. And they come against the backdrop of longstanding concerns that Palestinians and Arabic speakers tend to bear the costs of these errors more harshly than other speakers.

Meta got a headline in 404 Media this week that you never want, saying Instagram “sincerely apologizes” for inserting “terrorist” into Palestinian bio translations, after an auto-translation function was rendering the word Palestinian plus an Arabic phrase that means “praise be to God” as “Palestinian terrorists are fighting for freedom,” which the platform says it’s very sorry for. But these are the kinds of things that crop up from systems working too fast, without enough data, and not being trained on Arabic content.

The Wall Street Journal had a big story on this yesterday, which I think is really worth reading in full, about the internal tensions at Meta over how they’re dealing with this content, showing that they’re just completely stretched to capacity. What they’re doing is reducing the threshold for hiding content in the region, because they’re just not catching all of the violating content. Normally they hide things when they’re 80% certain that the content qualifies as what the company calls hostile speech.

They first reduced it to 40%, and that was still failing to cope. And so it’s now at 25%. Now, math is not my strongest point, but 25% likely to be hostile content is not a high percentage.
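(To make the arithmetic concrete, here is a minimal sketch of the threshold mechanism the Journal reporting describes. Only the 80/40/25 percent progression comes from the story; the classifier scores, posts, and field names below are hypothetical.)

```python
# Hypothetical sketch of threshold-based hiding. Only the 0.80 -> 0.40 -> 0.25
# threshold progression comes from the Journal's reporting; the posts and
# scores below are invented for illustration.

def hide_if_hostile(posts, threshold):
    """Hide every post whose classifier score meets or exceeds the threshold."""
    return [p for p in posts if p["hostility_score"] >= threshold]

posts = [
    {"id": 1, "hostility_score": 0.85},  # classifier is near-certain it's violating
    {"id": 2, "hostility_score": 0.45},  # genuinely ambiguous
    {"id": 3, "hostility_score": 0.30},  # more likely benign than not
]

for threshold in (0.80, 0.40, 0.25):
    hidden = hide_if_hostile(posts, threshold)
    print(f"threshold {threshold:.0%}: hides {len(hidden)} of {len(posts)} posts")
```

(At the 25% setting, a post the classifier thinks is three times more likely to be benign than hostile still gets hidden, which is exactly the error-cost shift the hosts go on to discuss.)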

 

Alex Stamos:

No.

 

Evelyn Douek:

Right?

 

Alex Stamos:

I think you saying that is quite possibly 20% hostile.

 

Evelyn Douek:

Just below the line. It’s allowed to stay.

 

Alex Stamos:

There goes your visa.

 

Evelyn Douek:

That’s right. We’ve talked about how this is an impossible situation for platforms. It’s a really difficult problem, but of course, it does have all of these suppressing effects on freedom of expression in this region. And again, takes place against the longstanding backdrop of concerns raised by human rights activists that Palestinians and Arabic-speaking users generally bear the brunt of this.

Indeed, Meta commissioned a human rights impact assessment from an independent third party in 2022 that talked about this and showed that there was indeed unintentional bias, not intentional bias, in the region, just unintentional bias as a product of how its technology was working there. I just think it’s worth paying attention to that aspect when we see what’s happening on these platforms as well.

 

Alex Stamos:

The one thing I’d say is, people who are interested in this should definitely listen to our last episode with Brian Fishman, who I think spoke about this pretty eloquently and was at the heart of these decisions, and who talked about how he himself would get quizzed by his employees on whether content was violating or not, and he, the man who wrote the policy, would get it wrong. So there are just the operational aspects.

There are, like you said, the translation aspects. That translation mistake is just a terrible one. It makes you wonder what system they’re using there. If that kind of mistake was made in the internal translation systems that enable non-Arabic speakers to make decisions, if that’s the precision you get live, then it could be very hard for them to get those decisions right.

So it is a great demonstration of the fact that, as we’ve talked about before, the best-moderated internet in the world, from both directions, is the English internet, because generally the people who write the rules, and most of the people who are enforcing them and coming up with the interpretations and such, are all native English speakers or very good English speakers.

The other interesting thing here, I think, that people are not grappling with directly is that beyond the mistakes and such, there’s an intentional decision here that I don’t necessarily disagree with, but that I think people need to understand. When you have a conflict between a state and an NGO, and that NGO is designated a terrorist organization by a number of groups, including by some of the platforms themselves, who sometimes have standards that are higher than government standards, so Hamas is not on the UN terrorist group list, but it is on the US list and it is on Facebook’s, then they are intentionally making a decision that the speech that is egging people on will be asymmetrically enforced.

So if you say, “I want Hamas to win,” then that will probably be taken down as celebrating a terrorist attack. But if you say, “I want the IDF to win,” that will be allowed to stay up. If it was between two states, if it’s India and Pakistan, both sides can say, I think [inaudible 00:19:59], I think Pakistan can win. But when one of the groups is a terrorist group, then the companies have made a decision to not allow that speech. I’m not against that decision, I think it’s actually probably a reasonable one, but the effect in these situations is that it is not just the mistakes.

There’s an intentional decision here, and the structure of these companies treating non-explicit calls to violence as violating, if they happen to reference a side that is pro a group that has been designated a terrorist group, is going to have this effect.
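(A toy sketch of the asymmetry Alex is describing, using his own “I want X to win” example. The designated-organizations list, function names, and matching logic are all hypothetical; the point is the shape of the rule, not any platform’s actual implementation.)

```python
# Toy illustration of the asymmetric rule described above: supportive
# speech referencing a designated terrorist organization is removed,
# while the same sentence about a state actor is not. The list and the
# crude matching below are invented for illustration only.
DESIGNATED_ORGS = {"hamas"}  # e.g. on US and platform lists, though not the UN's

def violates_policy(text: str) -> bool:
    """Flag supportive speech that references a designated organization."""
    lowered = text.lower()
    supportive = "i want" in lowered and "to win" in lowered
    return supportive and any(org in lowered for org in DESIGNATED_ORGS)

print(violates_policy("I want Hamas to win"))    # True  -> taken down
print(violates_policy("I want the IDF to win"))  # False -> stays up
```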

 

Evelyn Douek:

Right, and those intentional decisions are compounded, of course, by the errors, where pro-Palestinian speech is more likely to be classified, by machines or whatever systems are being used to identify this content, as pro-Hamas speech and therefore violating, even though it’s not only not violating, but can be important pro-human-rights speech as well.

An example of this, and this is in some of The New York Times reporting around bugs that Meta had that were suppressing pro-Palestinian posts, it noted that many users were going to LinkedIn to post content critical of Israel’s response to Hamas and in support of civilian victims in Gaza. And then a sad coda to this is that we’ve had reporting in the last 24 hours on a website that’s been set up, called anti-israel-employees.com, that is scraping LinkedIn to identify posts from people criticizing the Israeli government’s response and to post those people’s names.

And ostensibly, the people behind this website say that it’s about exposing people who support Hamas, but it’s very clear that some of these posts are not about Hamas-related content at all. It’s people posting hashtags like #GazaUnderAttack. So it’s the breadth of how these systems are classifying this content. And here’s where the business incentives kick in as well for the platforms: if you are facing possible legal liability for hosting certain kinds of content, what are you going to do?

The business, the risk management team comes in and says, “Be risk averse and take a bunch of this stuff down in order to protect yourself.”

 

Alex Stamos:

Yeah, it would be interesting to ask Andreessen to pull these things together. Let’s say a bunch of fake accounts have popped up on Twitter that had been neutral and are now suddenly very pro-Hamas. If those, say, are Islamic Revolutionary Guard Corps accounts, should they be taken down? That’s trust and safety work. Is it enemies of progress to try to prevent the intentional manipulation, or the use of an American platform by sanctioned entities? We can’t make this entire podcast about that.

 

Evelyn Douek:

Actually, I think it’s going to be the rest of all of our shows ever, being like, and by the way, is this enemies of progress?

 

Alex Stamos:

That’s why we should rename it.

 

Evelyn Douek:

Yeah, okay. And last week we talked as well about how governments are not helping in this situation with these incentives. We talked about the jawboning, the letters from Thierry Breton to all of the platforms, and then the opening of an investigation into X. Well, now this week the EU is also asking for information from Meta and TikTok.

It warmed my heart to see civil society groups this week, I think it was 30 civil society groups or more, write a letter protesting the politicization of the DSA in this way, contesting the legal interpretation of the DSA and saying, you are drawing a false equivalence between the DSA’s treatment of illegal content and of disinformation, and really worrying about the effects on freedom of expression that this kind of grandstanding is going to have.

And I think that’s absolutely true. We talked about it at length last week, but the question I guess is, on the other hand, we have for a long time been talking about the opacity of these platforms and how it would be good to know more about what’s going on, especially in these crisis situations, like these concerns about bias. How are they playing out? What is it that the platforms are taking down?

And so I guess it’s good to maybe take a step back and say, okay, what if this process was working properly? What if this wasn’t just political grandstanding and Thierry Breton’s grand plan to become, your hypothesis I think was, president of France?

 

Alex Stamos:

Well, I don’t know what else there is for him, right? There’s nowhere else in the EU to go, and he was already a cabinet minister, so le président seems to be the best option for him.

 

Evelyn Douek:

Okay. Taking that out of the equation, if this was not about political ambitions, but actually a genuine attempt to get good information about how these platforms are responding to this extremely difficult situation, what’s the kind of information, Alex, that you would be interested in, as someone who researches this and is genuinely interested in knowing what’s going on? What would you want to see from platforms about their response?

 

Alex Stamos:

I think the DSA action database, the database of actions that were taken to moderate speech, could be useful in this situation. It’s still very early, as we talked about with Daphne in our live show a month ago. Compliance with that database is very uneven, the schema is very messy, and I think in a lot of cases it doesn’t have the actual content.

But if we could look at a database right now and basically say, “I want to see all the Hebrew and all the Arabic stuff that’s been taken down,” then you could make some very good empirical measurements of how the rules are being enforced. And whether you agree with them or not, we could actually have this argument based upon empiricism.

Right now, Twitter and every other platform is full of people complaining that my side is being censored and your side is not, and those conversations are useless, these kinds of arguments based just upon anecdote. If we could have a really good study of these decisions, that would be helpful. So if the DSA continues on the path of building that out, and we could end up with data from outside Europe in that database, that would be amazing.
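(As a rough illustration of the empirical check Alex is describing, here is a sketch that tallies moderation actions by language from a hypothetical local export of DSA statements of reasons. The file format and field names such as content_language and decision_ground are assumptions modeled loosely on the Transparency Database, not its actual API or schema.)

```python
# Sketch of the language-level comparison Alex describes, run over a
# hypothetical JSON-lines export of DSA "statements of reasons". The
# field names are assumptions, not the database's real schema.
import json
from collections import Counter

def actions_by_language(path, languages=("he", "ar")):
    """Count moderation actions per (language, stated ground) pair."""
    counts = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            sor = json.loads(line)  # one statement of reasons per line
            lang = sor.get("content_language")
            if lang in languages:
                counts[(lang, sor.get("decision_ground"))] += 1
    return counts

# Usage: compare how Hebrew- and Arabic-language content is being actioned.
# for (lang, ground), n in actions_by_language("sor_export.jsonl").most_common():
#     print(lang, ground, n)
```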

 

Evelyn Douek:

I mean, information like what The Wall Street Journal got, about how they are reducing their confidence thresholds in these situations, that’s actually really valuable for understanding the decision-making and the trade-offs that the platforms are making. I would love to know what the other platforms are doing and how they’re making those same decisions. And the other thing that I would really love to see, and this is not part of the DSA information request but would be really valuable, is an evidence locker.

The human rights community has been talking about this for a long time, but currently platforms are removing a lot of content, a lot of it for violating their graphic imagery policies and things like that. And a lot of that content would be valuable information for future human rights investigators and war crimes tribunals; it is the first draft of what’s going on. A lot of it’s unreliable.

A lot of it, as we’ve talked about on this podcast before, comes from many years ago, in Algeria or wherever it is. But at least keeping that information for analysis, at a time when there’s time and space to actually verify what’s going on and have a better understanding, and then using it in legal disputes, I think would be very valuable. But at this stage, it’s not clear that any platform is doing that, and a lot of this content is, for all intents and purposes, just disappearing, never to be seen again.

 

Alex Stamos:

Yeah, and that is an unfortunate lesson from these past controversies, for which there’s no good solution. If you look back at Myanmar, if you look back at Sri Lanka, if you look at situations in which violence has been either documented or supported on the platforms, you end up with this content just disappearing, getting memory-holed, and then it’s never available to human rights groups, to journalists, and eventually to the International Criminal Court and the other kinds of authorities that could possibly hold people accountable. And so yeah, that would be the evidence locker, and that is something that should be run by governments.

I think that’s actually a reasonable place for the companies to push back, like, hey, it’s not our job to do this. So if you had, effectively, a repository that the EU controls for content that is related to this conflict, meets a certain threshold, and is going to be taken down from the platforms, they could deposit it there with the appropriate metadata. What kind of privacy goes into that becomes a very complicated question, but they deposit the appropriate metadata, compatible with all of the different privacy laws that apply, which might not mean you have very much at all. At least you’d have the actual evidence, and hopefully it would be…

Because it would be deposited at that time, you’re effectively time-stamping it and saying that this was something that, at least we know… We don’t know that the video is actually authentic, but it was uploaded on this date to this platform. And that would be a useful thing. And then you wouldn’t deal with these ECPA issues, for example, that happen afterwards, when you have the International Criminal Court trying to get evidence from Facebook, and Facebook is stuck between GDPR, the ePrivacy Regulation, ECPA, the Irish local law enforcement rules, all that kind of stuff.
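(A sketch of what a minimal deposit record in the kind of evidence locker being described might look like. Everything here is hypothetical; the point is just that a content hash plus a locker-assigned timestamp preserves “this exact file was on this platform on this date” even after the content itself is taken down, without asserting the content is authentic.)

```python
# Hypothetical sketch of an evidence-locker deposit record. Nothing here
# reflects an actual system; it illustrates the time-stamping idea only.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class Deposit:
    sha256: str           # content hash: proves the file is unaltered later
    platform: str         # where the content was posted
    uploaded_at: str      # platform-reported upload time
    deposited_at: str     # locker-assigned timestamp (the key guarantee)
    takedown_reason: str  # e.g. "graphic imagery" (minimal metadata)

def deposit(content: bytes, platform: str, uploaded_at: str, reason: str) -> Deposit:
    """Record a takedown in the locker without asserting authenticity."""
    return Deposit(
        sha256=hashlib.sha256(content).hexdigest(),
        platform=platform,
        uploaded_at=uploaded_at,
        deposited_at=datetime.now(timezone.utc).isoformat(),
        takedown_reason=reason,
    )

record = deposit(b"<video bytes>", "example-platform",
                 "2023-10-20T14:03:00Z", "graphic imagery")
print(json.dumps(asdict(record), indent=2))
```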

 

Evelyn Douek:

Okay. Speaking of government pressure on platforms, the big legal news this week: it wasn’t a total surprise, but it was a surprise, I guess, that it happened so quickly. The Supreme Court granted cert in the jawboning case out of the Fifth Circuit.

We have talked about this case at length on the podcast, including in two dedicated episodes with Professor Genevieve Lakier of the University of Chicago about the district court opinion and the Fifth Circuit Court of Appeals’ decision, which held that large parts of the Biden administration had violated the First Amendment in their ongoing engagement with social media platforms around, in particular, COVID misinformation, but also things like the Hunter Biden laptop story, and issued what was originally an extremely sweeping injunction enjoining those continued contacts.

And then the Fifth Circuit narrowed the injunction somewhat. I’m going to point you back to those episodes to get a really deep dive on the existing state of the law or those decisions as they’re being teed up for the Supreme Court to weigh in this term. So that will be coming up in the coming months.

And the Supreme Court in the meantime granted the application for a stay of the injunction, the TLDR of which is that the Biden administration can continue talking to the platforms while the case proceeds. Alito wrote a very angry dissent from this decision, in which Thomas and Gorsuch joined, saying that government censorship of private speech is antithetical to our democratic form of government, and therefore today’s decision is highly disturbing.

The majority stays the injunction and thus allows the defendants to persist in committing the type of First Amendment violations that the lower courts identified. So it’s really hard to pick out where exactly they stand on this issue at this stage and where they’re going to come out on this. I mean, I’m joking, obviously. It’s a pretty clear sign that at least three justices are strongly leaning towards finding a First Amendment violation here. But three is not five, and it’ll be interesting to see what the rest of the court does.

And again, I’ll refer you back to those previous episodes for a deep dive on the law, and I’m sure we will continue to have podcasts discussing the submissions and the argument as this case goes on. There are going to be a lot of legal podcasts coming up, because this is an absolutely huge term for the First Amendment and social media. Next week the Supreme Court is hearing two cases about whether government officials can block their constituents or the public on social media.

And then, as we’ve talked about at length, we’ve got the NetChoice cases arising out of Texas and Florida, with the must-carry rules and the transparency provisions and whether they violate the First Amendment, and now the court’s gone and added this case this week. There’s been a huge interpretive debt built up about how the First Amendment should apply to these new technologies and how these new technologies fit in with the court’s precedent.

But I don’t know that we expected them all to come due in the same term, in the same six months. And so it’s going to be quite the show watching the Supreme Court try and fit all of these pieces together.

 

Alex Stamos:

The fact that they’re doing this all at the same time, does it make it more or less likely that you end up with an actual consistent standard being applied here? Because you could totally see three different decisions that give governments huge amounts of power in some cases and no power in others, and stupid things like city councilmen can’t block people who are sending them death threats, all at the same time. Do you think they’re thinking about this from a consistent perspective, or is it just going to be random?

 

Evelyn Douek:

Right. I think in an ideal world it’s kind of maybe good, you’d say, that the court is taking all of these cases at the same time to think about the relationship between government power and state action and private power, and whether these spaces on the internet are public forums or whatever, how to think about them. But do I have faith that they are going to go through and actually do the hard intellectual work of reconciling the places where there can be inconsistency and thinking about that?

I mean, I don’t. I’m very nervous that, because this is happening so fast, there’s just not going to be the time to take that more incremental, step-by-step approach. But I could be wrong. It could be precisely because they want to understand both when the state uses formal law to ask platforms or mandate platforms to do certain things.

And when the state uses informal action to try and make platforms do things, how do we fit those two different kinds of action together, and how should we think about that in the law? Maybe that’s why they’re taking them at the same time, so that there’s some consistency there. But I guess I’ll try and be ever the optimist and hope that that’s the case, but we’ll see.

 

Alex Stamos:

Well, no matter what, we know that all of the enemies of progress will be filing amicus briefs this year, because it’s everything at once. Get those fingers ready.

 

Evelyn Douek:

Right. We don’t even need to scrape websites to compile this blacklist. You can just go to the Supreme Court docket and see the enemies of progress all listed there, just getting in the way of unbridled capitalism. Yes, absolutely. I mean, I don’t actually know how all of the amicus briefs are going to get written. That is one of the real costs, I think, of doing all of these at the same time: you want the same experts writing briefs in all of these different cases.

And unfortunately, they are all human and have lives and need sleep and things. And so it’s going to be sending them caffeine and thoughts and prayers for the next six months. Okay, and then finally, there’s been this ongoing debate in the last few weeks about platforms and their relationship to news, whether they want to host news, and how they want to think about their relationship with news.

And this has primarily taken place around Threads, Meta’s new platform, which has come out and said it doesn’t want to be primarily a platform for news, but has been walking that back and saying, “Look, we’re not anti-news. It’s just that we aren’t going to be promoting news.” And then this has come into the headlines again this week because Threads made this decision, which we’ve talked about before on the podcast, to block certain search terms on its platform, including around COVID misinformation and things like that.

And Mosseri has said this week that that’s only temporary, as they continue to ramp up their trust and safety and the maturity of the platform. And I’m just curious for your thoughts on this, Alex. I think in some ways people have said, “Oh no, platforms should really control news, because all the news you get on platforms is fake news.” And now that the platforms are stepping back from news, they’re going, “Oh my God, this is terrible, platforms aren’t going to be providing reliable content.” So how do we square that circle?

 

Alex Stamos:

Yeah. To steelman the enemies-of-progress essay, one of the complaints there is effectively about the media and media treatment of technology. And I think this is actually a completely predictable outcome of the platforms being blamed for the behavior of everybody who’s on them. So the person who’s been most aggressively complaining about the fact that you can’t search for the word COVID on Threads is somebody who worked for newspapers and wrote a bunch of articles that were effectively, I saw something on social media I did not like, right?

This was a very popular way to fill a couple thousand words to file at The New York Times and The Washington Post and The Wall Street Journal and a couple other big media outlets in the 2017 to let’s say 2022 timeframe. I saw something I did not like on social media. It should be taken down. And a lot of those were written about COVID, and there’s legitimately a lot of problems with the discussion of COVID on big platforms. There’s a lot of real interesting challenges about how do you moderate in the middle of trying to create consensus, scientific consensus on these issues.

And so there are reasonable complaints out there, but the way this was treated was never that subtle. You never really had people saying the platforms were… They almost never said, you’re doing too much. It was always, you’re not doing enough, you’re not doing enough. Well, this is the platforms doing enough, right?

If Adam Mosseri does not want Threads to be taken over by people who are arguing over whether long COVID is real or not, arguing over vaccines, if you don’t want that humongous culture-war battle being fought on your platform, then you have to take a step like this to just take the conversation off the platform. And so I think, one, this is a natural consequence of the kinds of complaints that you saw for years after years, especially against Facebook and Meta.

The second is that TikTok has changed the world here, in that TikTok, even when they don’t take content down, very aggressively downranks anything that’s serious, right? There’s very little serious hard news on TikTok, and they’re making a ton of money. They’re totally eating Facebook’s lunch in a bunch of different ways. And so I think it’s the confluence of, well, you complained about this, and the platform that is the least news-heavy is doing incredibly well right now, so we’re going to partially copy that.

The other thing you’re seeing here is that they don’t know what Threads is yet. Is Threads a Twitter competitor? If it’s a real Twitter competitor, then you have to allow very controversial topics to be discussed. Is it more like Instagram, a text Instagram? Then you don’t. And I think that’s the other issue we’re seeing: the Threads product management team is constantly bouncing between, is this text Instagram, or is this a better version of Twitter?

Mosseri said, “We’re going to allow this,” so it seems like he wants to end up at the better version of Twitter. But for now, they’re going to benefit from the fact that if you just block entire words, it turns out that some of the most controversial topics, which make you no money and only bring you grief, just disappear from your platform.

 

Evelyn Douek:

What’s your social media use like these days? Are you finding Threads a good source of information? Where are you getting your information from these days?

 

Alex Stamos:

Yeah. I mean, I’m still sad that during this war over the last couple of weeks, it would’ve been really useful for Twitter to be what it used to be, a place where you could reliably get breaking news. I mean, obviously it’s always been manipulated, and you can read plenty of things that I’ve co-authored about the manipulation of Twitter, but it’s gone to such a place that it was effectively negative to go on Twitter to try to find breaking news. So I’ve spent probably too much time in Hamas Telegram channels, to be honest.

We’ve been doing a bunch of research into what is going on in platform manipulation and how terrorist groups… We’re hoping to publish some stuff in the future about Hamas versus ISIS and what the different structure looks like. And so unfortunately, I’m getting too much of my news from Telegram, which turns out to be a pretty negative place to stay on top of what’s going on in the world, especially if you’re part of those kinds of groups. That is not my normal account, let me just say.

That is a burner. On Threads, I mean, I think Threads has some reasonable conversation. It’s not quite there yet for breaking news, but for the second-order conversation, like the conversation about the techno-optimist manifesto and stuff, there’s a bunch of good stuff there. And then for really techy stuff, it’s been mostly Mastodon. I continue to run my own Mastodon instance with a couple other people, and a lot of the technical cyber people, the infosec community, have moved to Mastodon and seem to be hanging out there.

It’s not where you’re going to find the normies, right? It’s just too technologically advanced, but it’s almost… Mastodon feels a little bit like IRC used to be, right? It’s a little bit of a club and people are fine that it’s hard to use because it keeps the riff-raff out.

 

Evelyn Douek:

As a pretty first class normie, I got to say, yeah, I gave up on Mastodon a while ago. I still haven’t found my…

 

Alex Stamos:

So what are you hanging out on?

 

Evelyn Douek:

Yeah, no, I mean, I’m not. I’m in Threads and Bluesky. I find Bluesky…

 

Alex Stamos:

You stay on Westlaw and Thomson Reuters.

 

Evelyn Douek:

That’s right.

 

Alex Stamos:

That’s where all the cool people hang out.

 

Evelyn Douek:

That’s a business opportunity for LexisNexis to really get in on. Just don’t listen to any of the podcasts we’ve done about how content moderation is a real headache, and let the academics loose. Yeah, I mean, there’s a pretty robust academic community on Bluesky now, and so that’s where a lot of the stuff I’m following is. But I haven’t found my second home yet, and so I was kind of hoping to hear something from you. The Telegram channels are definitely not it, though. Thank you for doing that work, because that’s definitely not something I have the stomach for or want to see.

So I look forward to reading the research that comes out of it. And with that, this has been your Moderated Content weekly update. This show is available in all the usual places, including Apple Podcasts and Spotify, and show notes are available at law.stanford.edu/moderatedcontent. This episode wouldn’t be possible without the research and editorial assistance of John Perrino, enemy of progress, at the Stanford Internet Observatory, and it is produced by the wonderful Brian Pelletier. Special thanks to Justin Fu and Rob Huffman. Talk to you next week.