MC 10/16: Facebook’s Ex-Counterterrorism Lead on Moderating Terrorism

Alex and Evelyn talk to Brian Fishman, the former Policy Director for counterterrorism and dangerous organizations at Facebook/Meta, about the history of terrorism online, the challenges for platforms moderating terrorism, and the bad incentives created by misguided political pressure (looking at you, EU).

Show Notes

Transcript

Brian Fishman:

Terrorist organizations are able to adapt very, very quickly to new circumstances. You take down their accounts, they create new ones. You take down a network, they have safety accounts that their supporters know that they can go connect to, and then they can rebuild their network. They operate across platforms.

 

Evelyn Douek:

Welcome to Moderated Content's weekly, slightly random, and not at all comprehensive news update from the world of trust and safety with myself, Evelyn Douek, and Alex Stamos. We are recording at around 10:00 AM Pacific Time on Monday morning. A ground invasion of Gaza seems imminent, the humanitarian crisis in the region is escalating, and tragedy is compounding tragedy. But as we said last week, we are not Middle East or foreign policy experts, and that is not what you've come to listen to us talk about. But for us, as for many of you no doubt, our understanding of this crisis is mediated by the social media platforms that you do come to listen to us talk about. So that's what we're going to talk about today.

And there is perhaps no one who has spent as long thinking about how social media platforms should respond to terrorism as our guest, who has kindly joined us today, Brian Fishman. Brian is the co-founder of Cinder, a software platform for trust and safety, but relevantly, he previously worked as policy director for counterterrorism and dangerous organizations at Facebook, now Meta. And in a previous life he was an academic researcher working on counterterrorism studies as well. So, thank you very much, Brian, for coming to talk to us about this.

 

Brian Fishman:

Thanks, Evelyn. Thanks, Alex.

 

Alex Stamos:

Glad to have you, Brian.

 

Evelyn Douek:

I want to take a step back first. When we say content moderation these days, what it gets associated with a lot of the time is things like the lab leak theory or Hunter Biden's laptop or something along those lines. But actually the origins of content moderation, beyond a lot of porn and adult content, were in this story of platforms combating terrorism, and a lot of the original early controversies around content moderation were about how platforms were dealing with terrorism. So this is not a new story, and I was hoping you could take us a step back and tell us something about the history of how platforms have dealt with terrorism on their services.

 

Brian Fishman:

I like to actually go back before social media. Sometimes people think that these phenomena are a function only of social media, but they're not. The earliest white supremacist presence online goes back to the early 1980s, hosted on Apple IIes and Commodore 64s. If you go to the mid-1990s, there was a slew of terrorist organizations, including Hamas, including Al-Qaeda, hosting websites on GeoCities. And as you move forward, what you see is that this evolution of terrorist and hate group utilization of the internet really tracks the development of technology overall. You see them going from these one-way sites communicating outward, to two-way communication with the introduction of email addresses and web forums, and then ultimately the adoption of social media.

And I think the thing to understand is that oftentimes when you go to Washington, DC, you hear policymakers say, “Well, these terrorists, they’re always innovating, they’re adopting new technologies so much faster than we do.” And I think that’s sort of true, in the sense that many terrorist organizations adopt technology faster than members of Congress and the Department of Defense, but they do not adopt technology as fast as Stanford undergrads. So, they operate somewhere in the middle there, and we have to think about that.

Now the platforms themselves, though there are really interesting instances going back to the mid-1990s of hosting platforms taking down white supremacist material even then, the platforms that we think about today, social media platforms, didn't really get serious about confronting terrorism on the internet until the explosion of ISIS content in the mid-2010s, about a decade ago.

Really, that was not just a function of ISIS's success on the ground in Iraq and Syria; it was their success using the internet to recruit Westerners in Western Europe, in the United States, in Canada. That created really intense political pressure on the large platforms and inspired them to get serious about these problems. And one of them at least hired me to help them deal with that as a function of that decision. So I think that's the process you see the platforms go through: they were pretty late, but once they got serious, they got much, much better at dealing with this kind of material.

 

Alex Stamos:

I want to give a little color on the decision to hire Brian, something I've never forgiven myself for. No, I'm just kidding. Hiring Brian was one of the smartest things Meta ever did when dealing with this. So when I got to Facebook, the biggest content moderation safety issue was the misuse of the platform by terrorist groups, mostly ISIS, as Brian can speak to. Brian famously wrote a very good book on the formation of ISIS. ISIS was the first millennial, internet-first terrorist organization, and so it really accelerated these problems on social media. The pressure on the companies was a fascinating thing, in that it was one of those problems where defining what the goals of the companies were was extremely complicated.

I'll give a preview of something that I'll have in a book one day: I'm in a meeting, and one of our executives had just come back from the UK, where that executive had been yelled at by David Cameron, because horrible things had happened there that the UK government was blaming on Facebook. In fact, there were a number of people at fault there, including the UK security services, who were supposed to be watching these people who were on a watch list and ended up murdering a UK service member in a terrible, terrible way.

This executive comes back and calls a meeting, and this is before Brian was there. There's a bunch of people in there, and it was about what's our plan around terrorism. My boss at the time, the general counsel, Colin Stretch, said, "Well, what's our goal here? Let's define what the success criteria are for us." The executive who had just gotten yelled at, reasonably based upon their experience, says, "To defeat terrorism." Colin, to his credit, says, "Oh, let's pump the brakes there. We are a private social media company. Is that an appropriate goal for a private company, to defeat … are we going to do drone strikes? What does that mean?"

So coming out of that, we started to sharpen it up a little bit: the goal is that you should not allow terrorists to benefit from what you build, which is a much more reasonable thing for a private company to say. But that discussion also led to hiring Brian and some other folks like him, because it also demonstrated there was a real gap in understanding of why terrorists are on these platforms, what benefit they get, what is just the reflection of these things on the platform, and what are the things we are doing that are specifically making their lives better.

I think one thing that's been lost in a lot of the political discussion here is that, just like every other societal effect, any part of society will be reflected online. But there are also the things companies do that specifically make life easier for terrorist groups, and defining and figuring out what those are is actually a really key thing that I think gets lost in the discussion. But it was a weird time for figuring out what our responsibility to the world was, for sure.

 

Evelyn Douek:

So given this long history now of platforms dealing with this, and all of this enormous political pressure, it might be one of the few areas where there's bipartisan political pressure on platforms to do something. I guess, Brian, why isn't this a solved problem? What makes this a difficult problem? Why is this hard?

 

Brian Fishman:

Well, I think it's not a solved problem because terrorists are often brutal, but they are not stupid. And they innovate very quickly, often more quickly than platforms, especially the big platforms. We think about them as these nimble tech companies, at least that's the narrative I hear talking with friends in DC oftentimes, but these are big organizations with tens of thousands of people, and they don't always pivot really quickly.

Terrorist organizations are able to adapt very, very quickly to new circumstances. You take down their accounts, they create new ones. You take down a network, they have safety accounts that their supporters know that they can go connect to, and then they can rebuild their network. They operate across platforms, and this is really, really important, because we still oftentimes measure and think about these kinds of organizations as if this is what Hamas does on Facebook, but that’s not really how anybody uses the internet, especially adversarial groups.

They’ll advertise something on Facebook, but they’ll host the content somewhere else. They’ll point to it from Telegram because they have a relative safe haven there. So you get this sort of really dynamic web of activity, where a single platform can take a chunk out of it, but not all of it. Because they can’t take all of it, it’s easy for these networks to rebuild and then reintroduce themselves to a platform that is trying very hard to do something about it.

Now, I do think that platforms do have a lot of tools in their toolkit. They've got hash sharing through the GIFCT, which maybe we can talk about. They've got some basic trust and safety tools, keyword searches and image matching and those kinds of things. They've got AI and ML that can identify some of this kind of stuff. They need to think about intelligence-driven processes. This is something that we built at Facebook or Meta that was really useful where you-

 

Alex Stamos:

It was built at Facebook, Brian, let’s just be clear.

 

Brian Fishman:

Built at Facebook. Honestly, we were debating whether we should refer to this historical entity as Facebook or update it at this point.

 

Alex Stamos:

I’m not going to allow the 1984 newspeak, yes. I was introduced as being the former chief security officer of Meta, and I almost vomited right there on stage.

 

Brian Fishman:

You lost it, yeah.

 

Alex Stamos:

It was bad.

 

Brian Fishman:

No, so one of the things that was useful for me as somebody who had studied these groups, and that I think Facebook slash Meta didn't understand when I got there, was the degree to which these organizations have very structured processes for releasing propaganda. Oftentimes it starts on Telegram, and that gives you opportunities. If you can collect that information on Telegram, then you can search for that image or those videos or whatever that material is on your own platform very directly. You don't have to use fancy AI to do that. You can do hash matching.

So there are a lot of different kinds of tools that I think are available, but fundamentally those are tactics. The fundamental reality of this situation is that an organized, dedicated group with time on their hands is able to take a punch when you remove them from a platform and come back. So it is just not something that's going to be solved. It's going to be a problem that is mitigated and dealt with on a continuous basis until the end of time.
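
To make the hash-matching idea above concrete, here is a minimal illustrative sketch in Python, not anything Facebook, Meta, or GIFCT actually runs. It assumes you have already collected known propaganda files from wherever they were first released (the "collected_propaganda" directory and file names are hypothetical) and checks new uploads against their hashes. Production systems typically use perceptual hashes such as PDQ for images or TMK+PDQF for video so that re-encoded or lightly edited copies still match; plain SHA-256 here only catches byte-identical files and is used just to show the flow.

```python
import hashlib
from pathlib import Path


def sha256_of_file(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def build_hash_bank(known_material_dir: Path) -> set[str]:
    """Hash previously collected propaganda files into a lookup set.

    In practice this bank would come from an intelligence workflow or a
    shared industry hash list; a local directory stands in for that here.
    """
    return {sha256_of_file(p) for p in known_material_dir.iterdir() if p.is_file()}


def check_upload(upload_path: Path, hash_bank: set[str]) -> bool:
    """Return True if the upload is byte-identical to known material."""
    return sha256_of_file(upload_path) in hash_bank


if __name__ == "__main__":
    # Both paths are hypothetical placeholders for this sketch.
    bank = build_hash_bank(Path("collected_propaganda"))
    if check_upload(Path("incoming_upload.mp4"), bank):
        print("Match against known material: route to human review")
    else:
        print("No exact match found")
```

The design point is the one Brian describes: once the material is collected upstream, matching it on your own platform is a lookup problem, not a hard AI problem.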

 

Alex Stamos:

To point out one of the reasons they can come back: the big platforms that people are thinking about when we have this discussion, the Meta platforms, Facebook, Instagram, WhatsApp, plus YouTube and TikTok, the VLOPs (the EU's "very large online platforms"), are not where they're actually organizing. Those places are secondary; you see the outcomes of the things that they're doing, but if you kicked every Hamas member off Instagram, it's not like they're going to lose their communication mechanism with each other, right?

 

Brian Fishman:

Right. I do think that is a change though. I think the VLOPs have made life more difficult. These groups have less reach, they’ve been pushed closer to the corners of the internet, but they still have access. They still have access to the center.

 

Alex Stamos:

Right. And the most important platform here is Telegram, in that they're effectively openly operating on Telegram. There are … I'm looking at a bunch of channels that I'm in, and a bunch of them have the word Hamas right in them. It's not subtle, right?

 

Brian Fishman:

Yeah, right.

 

Alex Stamos:

Effectively, "Hamas Support Group" is what some of these things say in Arabic.

 

Brian Fishman:

That's not just Hamas. That's where ISIS was operating, and Al-Qaeda uses Telegram too. White supremacist groups all over the world have flocked to Telegram for the same reasons. And I do think that there's so much media and journalism focused on the name-brand platforms in the United States, and there's not enough, around this specific issue, focused on Telegram.

 

Alex Stamos:

Right. It's the number one place for trading CSAM. If you find somebody on Instagram, you pivot to Telegram. I think more and more of these content moderation discussions are going to be dominated by the fact that you have this cluster of companies that are trying. Maybe they're failing in some ways, but they're trying. You've got Twitter, who's just given up, and then you've got Telegram, which is actively trying to encourage people to use the platform in these ways.

 

Evelyn Douek:

I think it was The New York Times that had reporting this week with a Hamas official saying that they're intentionally trying to exploit the more lax content moderation policies and practices of platforms like Telegram and now X, given that that's just an opening and an opportunity for them. So bringing us to this moment, what do you think of how platforms are responding? There are lots of headlines about how there's lots of disinformation and graphic footage on these platforms. Do we have a real sense of how platforms are doing in their response? Do we have a sense of whether it's better or worse this time? What are you seeing and what do you think?

 

Brian Fishman:

I have some opinions, Evelyn, but I don't want to mischaracterize them as driven by a bunch of data; I think they're really anecdotal at this point. I do think that the large platforms seem to be doing a decent job of managing official Hamas propaganda, which I would expect. I think there is a bigger challenge around what to do with graphic imagery of victims. There are lots of operational challenges with that kind of material, but there are also real fundamental questions there for platforms about what their purpose is. On the one hand, they want to create a safe environment for people to operate online, and they want to make sure that their platforms aren't used to advance real-world violence, but they also need to not whitewash the real world, which is often brutal and terrible.

And we have a responsibility as a global society to bear witness for the victims of Hamas' attack on 10/7. We have a responsibility as a global society to bear witness as a military operation takes place in Gaza that, even if it is conducted to the highest possible standards, is going to have a terrible impact on a lot of civilians. And so I think there is a real challenge there for social media organizations, and for old-school media organizations, in how they tell that story. What do they allow and not allow? Because you want to create a safe environment for users, but you really don't want to whitewash the present. That's a very, very challenging line to walk.

 

Alex Stamos:

I think that brings up another interesting challenge in this specific case, which is that there are a lot of arguments around the Arab-Israeli, Palestinian-Israeli conflict. There are a lot of statements people are making that I disagree with pretty strongly, including from universities. Lots of things are being said at universities that I disagree with strongly, but I also believe those are statements that people should have the right to make, because they are arguments around legitimately complicated issues between Israel and the Palestinians: criticizing Israeli policies and coming up with discussions of what a possible, peaceful future looks like. That didn't exist in the ISIS conflict. Nobody was making a human rights-based, Western liberalism-based argument for why ISIS was right.

And so that does seem to be one of the challenges here: you have Hamas' actions, which are completely abhorrent and easily banned under the terrorism policies, but then you have a bunch of other discussions that are innately tied up in Hamas' actions that you probably don't want to ban and want to allow. And that seems much harder than in some of these previous situations, in which nobody really thought Al-Qaeda or ISIS had any … There were very few legitimate arguments tied to their actions, if that makes sense.

 

Brian Fishman:

I think that's right. I remember many instances when I was in my old job, talking to human rights activists and other activists who would say, "No, Hamas is not a terrorist organization. They are the government of Gaza."

 

Alex Stamos:

So that one’s fallen pretty hard, right?

 

Brian Fishman:

That’s a hard one to follow.

 

Alex Stamos:

That’s a hard argument to make now.

 

Brian Fishman:

No, right. That argument was wrong then. It's wrong now. But I think it illustrates your point, Alex, that there are a lot of people who recognize and want to stand up for Palestinian rights and Palestinian statehood, and certainly the human rights of Palestinians who live in very difficult circumstances. And one of the things we have to understand about Hamas, something it has done very successfully over the years, and far more successfully than a group like ISIS, is build that sense of, I don't want to use the word legitimacy, because I don't want to convey any legitimacy to this group even by mentioning it, but they were able to understand their constituencies in much more granular ways than ISIS, which thought in terms of: you support us, or you are an apostate or an infidel.

And Hamas tries to build a sense of legitimacy with a variety of other groups around the world to build and bolster its political position. And as a result, it takes different tactics. Prior to this event, it has tried to present itself as a legitimate resistance organization. It builds lots of proxy organizations that it pushes its messages out through, so sometimes you see those things in more of a CIB, coordinated inauthentic behavior, context than as just overt support for terrorism. And Hamas is just more sophisticated in a lot of those ways than a group like the Islamic State was.

 

Evelyn Douek:

I think this is a really important point, because there are these sometimes difficult normative questions about what to allow in terms of graphic footage and bearing witness. But obviously it's not just that a lot of this content … A lot of it is valuable political expression that should be protected. And the concern, I guess, is that companies don't necessarily have the right incentives to protect that kind of political expression. This is something we interacted about a lot in your prior role: where is the companies' transparency around protecting this kind of expression? There are worries that they might have a disparate impact on Arabic-language content, where their classifiers maybe just aren't as good. And they're getting all of this pressure, and we can talk about this in a second, political pressure from governments to remove terrorist content, remove terrorist content.

It’s easier for the platform in that moment to just be like, okay, fine, let’s just grab all of this stuff, not check it carefully and take it all down. And the fear is that you are presenting a skewed view of what’s going on in those circumstances. And so I guess I’m curious to hear you talk a little bit about how to think about protecting freedom of expression, the technical challenges of that, but also the incentive challenges of making sure the platforms are … They’re businesses. At the end of the day, they don’t want to get in trouble with governments. They don’t want to have problematic content on their platforms. What incentive do they have to make sure that they’re protecting this speech adequately?

 

Brian Fishman:

Evelyn, I think these are all really profound questions, and I'm not going to pretend I have perfect answers to any of them. I do think that platforms, at least the platforms that I understand best, took really seriously the responsibility to try to maintain political speech and controversial political positions so long as they did not support violence or call for violence. Now, I think a thing that's challenging is that I see Hamas as a terrorist organization. The US government sees Hamas as a terrorist organization. The European Union sees Hamas as a terrorist organization. The United Nations does not sanction Hamas. So it is not a universally held view, and there are people in the world who disagree with that characterization and don't think of it that way. In the context in which they operate, they may understand that Facebook bans terrorism, but they may, from where they sit, not think of a group in that way.

And so they may introduce rhetoric around groups like that into their political rhetoric in a way that is anathema to me, but may feel much more normal in that context. So I do think that there are real tensions and conflicts here, and there's a dynamic where there are actually circumstances in which terrorist content is allowed even on Facebook. It's allowed for newsworthy discussion. It's allowed for counter-speech in some circumstances. And so you've got all these really difficult contextual decisions to make when you're talking about terrorism that are not part of the CSAM discussion.

They're also not part of the discussion when you're talking about gore and violence. But the thing that's really interesting about those contextual decisions: everyone talks about how that's hard for AI, but it's hard for people, too, a lot of the time. And people often don't get it right. I had data scientists on my old team, and they would sometimes run me through tests of a bunch of content, DOI, dangerous organizations and individuals, content. I wrote the rules. I was the guy in charge. And they would run me through that content, and then they would test me, and I didn't get 100%. It's hard. It is hard. And I think everyone's got to understand that challenge and just recognize that these are very nuanced calls that companies are making at an extraordinarily high scale, and that human beings are going to make mistakes too, even very well-trained ones.

 

Alex Stamos:

And it's great that the people who really understand the challenge are the European Commission.

 

Evelyn Douek:

That's exactly where I wanted to pivot to, to talk about political pressure and understanding the incentives that we're creating. There's nothing I hate more than politicians using tragedy as an opportunity to promote themselves, and I don't think there's any way to understand Thierry Breton's actions this week other than that. We've talked about him before on the podcast. He's the European Commissioner for the Internal Market and the chief enforcer of the new Digital Services Act, which came into force in the EU in August for the largest online platforms. And this week, over a series of consecutive days, he rolled out letters that he was sending to each of the different platforms, warning them about the need to comply with the DSA and about worries that there was illegal content and disinformation on their services. He started with X on Tuesday, Meta came Wednesday, TikTok on Thursday, and last but happily not forgotten, YouTube got a letter on Friday.

I can see no reason for rolling these letters out over consecutive days other than that if you do it over four days, you get four headlines, as opposed to doing it on one day, where you get one headline. There is just no reason why he only realized YouTube might have a problem on Friday as opposed to X on Tuesday. And these letters are what I think we've been talking about endlessly in the US context as jawboning. These are pressures from a politician on platforms to take down more content. It doesn't say that in the letter. It doesn't say you need to take down more content, but it says there are worries about illegal content circulating on your services and risks to civic discourse from disinformation. I guess, as two people who have been within a platform and on the receiving end of a letter like this, how does a platform understand that letter? What is the natural result of a letter like that?

 

Alex Stamos:

What the Europeans would argue is that these letters have no power on their own, that the European Commission's DSA process is regimented and specific, and that they were followed up by a bunch of private correspondence that we cannot see, between the people who actually enforce the DSA and the platforms, right now just asking for more information. I think the big platforms probably look at this letter and realize that he's running to be president of France. I think that's what's left for him; he was the minister of something already. He's trying to slingshot himself back into French politics, and he's a politician. But I do worry, in that there are a lot of problems with these letters. One is there's no performance standard, so there's no way to look at it and decide, this is what they want me to do.

And so I think all three of us have shared concerns in the past about any political pressure from governments that is just "do better." These governments should understand that there are equities on either side, because they complain both when content is taken down and when content is left up. So they must understand that there are equities involved in turning up the knob. And for them just to say, "Turn up the knob. Turn up the knob. If you haven't turned up the knob, we might fine you 6% of your global turnover," is not actionable. And just turning up that knob will have negative speech effects. I think the timing here is terrible, in that there's a big discussion right now, we're in this crazy world, where Musk has thrown away everything that Twitter has done in this space.

And so there is a real need for a legal baseline of at least you try. Like we talked about last week, you have organized actors just pushing false narratives. Like I said last time, I would not be surprised if it was Iran. Certainly I am now watching in these Telegram channels a bunch of pro-Hamas groups who are effectively working together to coordinate CIB on Telegram that they then execute on Twitter. So there's a lot of reasons for governments to say to companies, "You should not let your platform just be straight up manipulated by either state governments or by terrorist supporters." But the way they're doing this risks a global, pan-political backlash, especially from Americans, over what it looks like when the European Commission is secretly pushing the companies to restrict the speech of people globally.

They would argue that they only have domestic jurisdiction, but realistically, any changes to these standards are most likely going to have a global impact. And they're doing so at a very sensitive time, and they're making Musk look right. And I think that's the disaster here: the one mechanism that might create a baseline that platforms have to live up to, when Twitter is now diving below it and Telegram is intentionally a submarine miles below what the standard should be, and they are throwing away whatever political capital they might have. Because with these letters it looks like they just straight up want to censor content that they don't like, even though they don't say what the content is they don't like. And no letter went to Telegram, which is also the hilarious thing here: the number one platform for Hamas' actions is Telegram, by far. Nothing else is even close. Instead it's just the big American companies. So it just looks like, oh, this is about punishing big American companies; it has nothing to do with actual risk.

 

Brian Fishman:

No, I was just going to make the Telegram point. That is the key one to me: platforms are going to see this, and they're going to understand it as fundamentally political. And what you don't want is platforms learning the lesson that I think Facebook/Meta certainly learned over time, rightly or wrongly, and I think there's a lot to suggest that they shouldn't have learned this lesson, but I do think that they learned it: no matter what we do, we're going to get yelled at. There's no way to get positive feedback.

And from a regulatory perspective, sure, I think it's good that the Commission has some sticks, but they also need to recognize that carrots are necessary, and they have to understand that they're going to have to lay out a standard at some point. If they're positioning themselves just to force platforms to do something so that they can yell at them about it no matter what they do, and they don't set some standard for success, it's just a recipe for antagonism over time.

And I think that's a real missed opportunity, because the DSA, for all of its faults, and I do think it has some, is still the most thoughtful piece of regulation out there, in my opinion, and I want to see it succeed. But I worry that steps like this are going to undermine its success over the long run.

 

Evelyn Douek:

The reason for the exclusion of Telegram might be that it is not officially a VLOP, or very large online platform. I believe it reported user numbers lower than the threshold in the region, and I don't know what the process is for verifying that or checking what's going to happen there. But-

 

Alex Stamos:

So I think I would report that too, if I had any possible way, if I was a private company and didn't have reporting requirements. The big American companies have to report to the street, so what they say is audited, and there's an SEC violation if they lie about how many European users they have. But the idea that Telegram does not have 45 million European users, I find that extremely unlikely.

 

Evelyn Douek:

I just want to cosign everything you say. The officials at the European Union have been selling the DSA as the mature version of social media regulation, which is not about increased censorship; it's the alternative to increased censorship. It's all about processes and having risk assessments and things like that. And then we are in this moment where it seems like, actually no, it's very much giving the lie to that, and it's just officials coming out and trying to get headlines and trying to force platforms to take down more content without really specifying exactly what failures they're seeing. Which is not to say that there aren't failures, there probably are. But it's very amorphous, and it only incentivizes platforms to take down more stuff without adequate safeguards.

As for X or Twitter, the EU has opened an official investigation into X, so we're going to watch what comes of that in the coming months. I believe the platform has a couple of days to respond to information requests, and we'll see how this pans out, whether this has just been political rhetoric in the heat of the moment, or whether, when the enforcement comes around, it actually is more serious, or what happens. Any other thoughts on what to watch for in the coming days, Brian, or in platforms' responses?

 

Brian Fishman:

I think there are a couple of things. Obviously, what looks like an impending Israeli operation in Gaza is going to produce a lot of content that will be very difficult for platforms to manage. And I think you will see that the large platforms, the Metas, the Googles, have crisis management teams staffed by people who used to work at the State Department and DOD and the intelligence community, who know how to manage this kind of stuff. Smaller platforms don't have anything like that. And the thing that I want to make sure especially smaller platforms are attuned to is the fact that there is going to be an adversarial reaction when groups want to get their message out. If it's hard to get it out on Facebook and YouTube, they're going to start looking in other places. And so even if those small platforms don't see this kind of stuff immediately, that doesn't mean they're out of the woods. And I think they need to make sure that they're attuned to it.

The other thing that I look at with my old CT guy hat on is the possible entry of Hezbollah into the war if there is an incursion into Gaza. Hezbollah, and this is usually attributed to Dick Armitage, but I heard Wayne Downing say it much earlier, is the A-team of terrorism. That's true both in terms of their tactical capacity on the ground and their propaganda capacity. They're much more closely aligned with Iran. And so if Hezbollah enters the war, I think there's a whole new set of potential challenges, both in terms of propaganda but also cyber capabilities in that particular case.

And then the other thing I'm looking for is how other Islamist terrorist groups react, because that includes the Islamic State and Al-Qaeda. Both groups have historically been very wary of Hamas, and the Islamic State especially hostile to it, but this is a dynamic where they're probably not going to want to be left out in the cold. And so I think there's going to be some interesting reaction and effort by those kinds of groups to operate in the digital space to get some of the, frankly, attention and momentum that Hamas has been able to achieve.

 

Evelyn Douek:

Well, thanks very much, Brian, for joining us. And I’m sure that there’ll be many further opportunities to talk as this unfolds, because this is going to be an ongoing crisis for a while. And so I hope we can get you back sometime. Thank you.

 

Brian Fishman:

Thanks for having me.

 

Evelyn Douek:

Thanks, Brian. And with that, this has been your moderated content weekly update. This show is available in all the usual places, including Apple Podcasts and Spotify, and show notes are available at law.stanford.edu/moderatedcontent. This episode wouldn’t be possible without the research and editorial assistance of John Perrino, Policy Analyst at the Stanford Internet Observatory, and it is produced by the wonderful Brian Pelletier. Special thanks to Justin Fu and Rob Huffman. Take care.