Social Media and the First Amendment
Transcript:
All right, folks. Welcome to this final event of the Constitutional Law Center’s annual programming. We’re going to talk about social media and the First Amendment. I’m Jud Campbell. I’m a professor here at Stanford Law School, and I have the pleasure of moderating this panel. I’ll start by introducing the speakers, then talk a little bit about the format and briefly introduce the NetChoice cases that we’ll be talking about.
We’re gonna run down the line from your left, starting with Brendan Carr, who’s a commissioner on the Federal Communications Commission, where he has served since 2017. Previously, he was the FCC’s general counsel, and he has also worked in private practice on tech and telecom issues.
So he has a lot of expertise in this area. He’s followed by Ramya Krishnan, who’s a senior staff attorney with the Knight First Amendment Institute at Columbia, where she litigates cases involving social media issues, among other First Amendment topics. Prior to joining the Knight Institute, she worked for the attorney general of Australia, which is where she also got her initial law degree.
And then finally, Eugene Volokh, a professor here. He told me this is his first event as a Stanford professor. He’s the Thomas M. Siebel Senior Fellow at the Hoover Institution, and he began just a few weeks ago, after being a faculty member at UCLA School of Law for 30 years.
He writes extensively on First Amendment law, in particular free speech, and is a founder and editor of the Journal of Free Speech Law. So I’m going to briefly introduce the NetChoice cases, which are the focus of the panel today, and then we’ll try to have a bit of a conversation among the panelists, followed by some audience Q&A.
So I’d like for you all to be thinking of questions as we proceed, and then we’ll end with a reception afterwards. This is a really big term for the Court on social media cases; it has a bunch of different social-media-related cases. The Court has two cases, NRA versus Vullo and Murthy versus Missouri, that deal with jawboning: efforts of government officials to pressure social media companies and other actors into doing things that might violate the First Amendment if done by the government.
And those raised a bunch of interesting state action questions that we talked about last month with Philadelphia. There are a couple of other social media and state action cases that the Court already addressed, although in a fairly cursory opinion that didn’t actually resolve the most difficult aspects of the cases. Those were cases where the Court was trying to figure out what a public official’s use of a private Facebook account meant for purposes of the limits of the First Amendment. That’s the kind of issue that was teed up initially in a case involving President Trump a couple of years ago, which was mooted after he left office.
But most important this year are the NetChoice cases. These are the most closely watched cases. They involve efforts of Texas and Florida to impose restrictions on social media companies, both relating to how they’re able to engage in content moderation and also requiring them to do certain things procedurally, such as providing for appeals and disclosing how they engage in content moderation.
Just to get a sense of the audience: how many folks have taken a First Amendment class in law school? All right, so we’ve got a few, but quite a few who haven’t, and I hope we’ll keep that in mind as we proceed. In terms of the two statutes, the main issues are similar between the Texas case and the Florida case.
But the statutes are subtly different, maybe in ways that’ll impact the court’s analysis. And so I’ll be curious to hear what the panelists have to say about that. The Florida statute applies to large internet platforms and it requires consistency in the way that those internet platforms engage in any content moderation.
It also bars platforms from deplatforming certain categories of speakers, in particular political candidates and journalistic enterprises, whether by kicking them off or by lowering their exposure. The Texas statute applies to social media platforms and requires those platforms to maintain viewpoint neutrality in their content moderation policies, although it does clarify that they can eliminate harassing speech, violent speech, and so on. Texas ended up winning its case in the Fifth Circuit. Florida lost in the Eleventh Circuit, and so we have a circuit split going up to the Supreme Court.
The Court heard oral argument in late February but hasn’t issued opinions; they’ll probably come out in a few weeks. And so we’ve got some really important, lively First Amendment issues to talk through, and without further ado, I’m going to turn it over to Commissioner Carr to get us started.
Yeah, thank you so much for the introduction and for setting the stage. It’s great to be here at Stanford. This is actually my first time being here, so I’m really looking forward to it. I had a nice little stroll through the very pretty campus beforehand. I’ll start similarly, by setting the stage a little bit.
I want to talk about what I think is one of the big political, policy, and really cultural issues of the day: some of the challenges we face right now in this country with the rise of social media, the debate about free speech, the debate over censorship, misinformation, and disinformation. I’ll talk about some of the solutions that government officials have been offering and how the NetChoice cases really are an example of all of those issues coming together. Being that we’re out here at Stanford, I’m reminded that back in 2011, then-President Barack Obama gave a speech at Facebook’s Palo Alto headquarters, and part of that speech was about what he described as the free flow of information over the internet.
And the punchline there was that he talked about the free flow of information being key to what he described as a healthy democracy. This is part of what I would think of as the almost 1.0, what I’d call libertarian, view of free speech on the internet. Now flash forward a short ten years later and a couple of miles down the road.
President Obama came back in 2022 and gave a speech at Stanford, on the main campus, and talked about the free flow of information on the internet being a threat to democracy. So how do we go from 2011, when unfettered speech on the internet was key to a healthy democracy, to 2022, when it’s described as a threat to democracy? At least around the edges; all speech wasn’t healthy and all speech isn’t a threat, but the emphasis was vastly different in two speeches separated by a small geographical distance and a relatively short time period.
I think part of it is that we entered this second phase of social media, a phase of really increased censorship that in my view only accelerated with COVID-19. I think whenever you have the assertion of strong, centralized government authority, you necessarily see, as a piece of that, a rise in censorship.
We saw this movement towards, again, misinformation and disinformation; COVID was an accelerant of it, when you had medical misinformation out there. There was a broader acceptance, I think, in the country, as a cultural matter, of censorship. I actually think that we’re on the back end of that.
We’re entering this third phase, where people are trying to find a more balanced approach. One of the prime examples of this second phase, this surge in censorship, in my view was the Hunter Biden laptop story. I don’t bring it up to be partisan or controversial, but it’s an example of where people thought that was Russian misinformation that should be censored, whereas in hindsight a lot of mainstream media organizations thought it was a fair-game story that should have been able to be shared. And so against this backdrop of this surge in censorship, you saw a concomitant response on the policy side, and these two cases that we’re talking about, Florida and Texas, are examples of lawmakers coming together and believing, rightly or wrongly, that social media companies were engaged in too much censorship.
They wanted more speech and, in their view, more diversity of opinions. Now, how does this clash with the First Amendment? The starting point for me is always that people who own private property have the right to eject, or not to host, people or things, and then by extension speech, on their own private property.
It’s fundamentally a concept of private property and the right to eject. And as you enter these cases, what you found is that people divided very quickly into two camps with respect to the First Amendment. On the one hand, you had social media companies pushing back on these laws, saying that they intruded unlawfully on their First Amendment right to engage in content moderation, and they would analogize to a newspaper. The lead case in this space is Miami Herald versus Tornillo, where Florida passed a law requiring newspapers to run op-eds or editorials for a candidate if the newspaper ran one for a different candidate. And the Supreme Court broadly said that there is a First Amendment right, a bit contested in the lower courts, but effectively a right in those editorial decisions about what op-ed to run.
So on the one hand, you’ve got social media companies that are bear-hugging these newspaper lines of cases, Tornillo and its progeny, saying: when we decide to take a post down or leave a post up, or elevate one post over another, we’re doing nothing different from a First Amendment perspective than a newspaper does when it’s deciding what editorials it wants to run.
And that under Tornillo, the government cannot interfere with our right to do that. And then you have another extreme set of cases that cut the other way. One of the lead examples there is a case called Pruneyard, which involved a shopping center. A group wanted to go onto the shopping center, private property, and disseminate pamphlets, and the Supreme Court said that in that context the private property owner did not have the right to eject that group from disseminating their pamphlets; the Court would allow this intrusion on the traditional right to exclude.
And so you have these cases where it’s two ships passing in the night. Governments like Texas and Florida on the one hand say this is just Pruneyard. There’s another lead case in this area called Turner, which is familiar to me at the FCC because it involves what are called our must-carry rules.
So cable companies have an obligation under the Communications Act; they do not have carte blanche, freewheeling authority to exclude certain channels from carriage on their private property, based on vertical integration and other characteristics. And the Supreme Court upheld those must-carry obligations against a challenge by the cable companies, meaning the cable companies do not have a right under federal law to complete discretion over what channels to place on their own private property.
So you have folks dividing very quickly into these two extreme camps. Is this like a newspaper? Or is this like a cable company or a shopping mall? And what I think is that if you look at the broad scope of all the Supreme Court cases in this space, and I’ll quickly walk through the other ones, you can actually attempt to reconcile all of them.
You do that by looking at some common factors and themes you can pull out of these cases, and rather than thinking about it as a binary matter, I think you can put them on a continuum. One factor I see, for instance, is: how central is that means of communication to participating in public discourse today?
That’s one factor. Another factor is: how much market power does the means of communication, the thing you’re trying to regulate, possess? Another factor, if you look at these cases, is: how much of the speech carried on that platform is really imbued with some sort of artistic or other First Amendment value, some expressive content of the platform itself?
There are other factors, like: will people be confused about whether you’re carrying someone else’s speech? Is that speech attributable to you or not? So again, really quickly, just to walk through some of these cases and how this plays out: on the far extreme you have the newspaper case, Tornillo, where a newspaper is making very individualized decisions about what editorial to run and what editorial not to run, and is not operating, in the main, as a conduit for other people’s content, other people’s speech, which is how I view the other extreme set of cases.
Similarly, you can look at this in terms of expressive meaning. Again, a newspaper imbues every editorial it writes with its own viewpoint, its own meaning. There’s another case called Hurley, which had to do with parades and whether the government could regulate a private entity’s decisions about what floats to allow in its parade.
The Supreme Court said no, and again, in my view at least, that’s an example where the parade organizers are imbuing the parade with their own expressive content through the actual floats that they allow. And so you look at all of these cases, whether it’s Turner, the must-carry cases, or Pruneyard, the shopping mall case, or another set of cases called Zauderer, which has to do with requiring people to provide certain disclosures in commercial advertising.
And again, in my view, you can pull these themes out of all of those cases. And when you apply those themes to social media today, it tells me that the government can, in the appropriate circumstance, intrude on that traditional right to exclude. Social media today, I would argue, is central to public and political discourse.
You basically have to be on there to participate in a lot of political debates, at least in D.C., where I spend most of my time. In terms of the speech being attributed to Facebook or X, I think attribution is relatively low, because people are used to other people’s speech being carried on those platforms. When you talk about imbuing each post with the platform’s own expressive viewpoint, I think you can make arguments both ways with social media, but one of the trump cards there is Section 230, a law that applies to content moderation, which says that if you’re a social media company and you are contributing to the message that’s being sent across your platform, you lose Section 230 protections.
So I think that tells us that social media companies can’t really bear-hug the idea that content moderation is imbued with expressive activity, or else they walk themselves out of Section 230 protections. So with that, I’ll wrap up my opening filibuster. But effectively, I think social media is in the position today that cable was in in the eighties and nineties, where the Supreme Court said: consistent with the First Amendment, we will tolerate, in the right circumstance, some intrusion on the traditional right of exclusion.
All right. Thanks so much. Ramya Krishnan. Thanks. This is my first time at Stanford as well, so I also enjoyed a stroll; it’s a really lovely campus. I share many of Brendan’s concerns about the status quo: the concentration of platform power over public discourse. That is, the ability of a few private corporations to dictate who can speak and who will be heard in some of the most important spaces for communication today isn’t particularly good for free speech or democracy.
But, and I think here is the rub, neither is a solution that essentially involves the government taking this power for itself. An interpretation of the First Amendment that allows this kind of power grab also makes me nervous. And yet, throughout these cases, and I think we start to hear this from Brendan, the parties have essentially presented the courts with an all-or-nothing choice.
The states argue, on the one hand, that the platforms’ content moderation decisions don’t implicate the First Amendment at all. They argue that the platforms are like the telephone companies: they’re essentially open to all content, all comers, and so any regulation of their content moderation decisions is best thought of as a regulation of conduct rather than speech.
This theory, I think, if adopted, would give the government sweeping authority over the digital public sphere and make it really hard for the platforms to address what are real online harms. On the other hand, you have the platforms arguing that any regulation of their content moderation decisions attracts the most stringent First Amendment scrutiny and may even be per se unconstitutional. They base that argument on the idea that what they’re exercising when they engage in content moderation is editorial judgment.
That is, they engage in the selection, the editing, the arrangement of information, and so they are best thought of as akin to newspapers. This theory, I think, if adopted, would make it impossible for governments to pass even carefully drawn transparency and privacy laws, all the kinds of laws that actually serve First Amendment values.
I think that both of these approaches are unsatisfying. The centralization of private power over public discourse is undesirable, but so is giving the government unbridled control over the platforms’ content moderation decisions and practices. Both of these approaches, I think, are dangerous for democracy.
There’s also this weird feature, or bug, depending on how you look at it: the Texas and Florida laws, Florida’s in particular, sweep much more broadly than the sorts of content moderation decisions I think the lawmakers who passed these laws mainly had in mind. The paradigmatic examples would be the deplatforming of President Trump and the shadow-banning of certain speakers, in particular conservative speakers. But these laws, and Florida’s, as I said, in particular, don’t just cover things like Facebook’s newsfeed.
Florida’s law seems to sweep much more broadly than the traditional social media platforms; it encompasses platforms like Uber and Etsy. And yet the parties chose to litigate these cases as if they did just cover the Facebooks of the world. NetChoice brought what’s called a facial challenge, which essentially is an argument that the law is unconstitutional in most of its applications,
and so is, at bottom, just invalid. And the states defended these laws on the basis that the platforms’ content moderation was entirely conduct rather than speech. Perhaps the thing I was most pleased to see at oral argument before the Supreme Court was how many of the justices seemed to find both of these arguments less than satisfying.
Many of the justices seemed to recognize the need to differentiate between different platforms: that is, to distinguish between platforms that are like Facebook and platforms, such as Uber, that really aren’t thought of by anyone as, at least at their core, social media platforms. They also recognized the need to unbundle the functions even of platforms like Facebook, recognizing that these platforms perform multiple roles. Facebook, of course, has the newsfeed, but it also allows users to send direct messages to one another.
That might be thought of as more analogous, for example, to an email service or the kind of activity that telephone companies are engaged in. Facebook also has a marketplace, which might be analogized to a supermarket. So, not exactly what one thinks of when one thinks of social media’s content moderation decisions.
I think implicit in this was the rejection of the parties’ superficial analogies to the newspapers and the telephone companies or other common carriers. The justices seemed to recognize that you can’t analogize the platforms wholesale to really anything. But even as to the more traditional content moderation functions, like account suspensions and the newsfeed, many of them at least seemed to see meaningful differences from newspapers, UPS, and telephone companies.
I think that was part of their real frustration with the fact that these cases were litigated as facial challenges rather than as-applied challenges, and the fact that they were presented with such an underdeveloped factual record. What I heard less of from the justices, though, was any theory that could help us get beyond these superficial analogies.
And I guess I’ll make one final point, which is that trading labels like censorship and editorial judgment, as I think the parties have done in these cases, isn’t going to get us very far in coming up with a theory. Both of these labels, in my view, tend to obscure rather than illuminate.
And I think what we need is a framework that is appropriately attentive to the free speech and democratic interests that are at stake. So instead of trading labels, whether it’s censorship and editorial judgment or conduct versus speech, we should ask ourselves whether content moderation by the social media platforms furthers First Amendment values.
In my view, it does, but in a way that is different in kind from the editorial decisions of newspapers. Unlike newspapers, social media platforms mainly serve as vehicles for other people’s speech. And what I think that means is that these platforms function as gatekeepers in a way that even the most influential newspapers are not.
They also exercise looser curatorial control over the speech they do host, which means that users don’t usually attribute the content they see on platforms to the companies that operate them in the same way they would attribute articles published in newspapers to the newspapers.
At the same time, the platforms’ content moderation decisions can and often do play an important role in creating distinctive speech communities that are responsive to users’ interests; they help users navigate and make sense of a seemingly endless stream of information available online.
In other words, they help structure public conversation in a way that many users find useful. This isn’t to say that any regulation of the platforms’ content moderation decisions is unconstitutional. But if a law is passed in the name of promoting users’ free speech interests and does very little to advance that goal, indeed actively serves to undermine it in important ways, that should matter.
And that seems to me to be the case with the Texas and Florida must-carry provisions. I’d put the transparency and disclosure provisions in a different category, and I’m happy to say more about that in the Q&A. But just to give an example of why I think this is the case, and it was one that came up again and again in argument before the Court:
Paul Clement, who argued the cases for NetChoice, made the point that under Texas’s law, the core provision at the heart of the must-carry obligation is a requirement not to discriminate on the basis of viewpoint. He said: if we can’t engage in viewpoint discrimination, we’re going to have to use the far blunter tool of content discrimination, i.e., we’re going to have to take down whole subject matters. He gave the example of suicide prevention versus promotion, and he said that at the end of the day, if we’re going to have to take down speech on both sides of all controversial issues, we may just be left with puppy videos.
And the Texas solicitor general had an exchange where Justice Kavanaugh asked him a question about terrorist speech: what does Texas’s law mean for speech that glorifies terrorism? And the solicitor general said they can take down the anti-Al Qaeda speech, but they have to take down the pro-Al Qaeda speech as well.
And I think a world in which the social media platforms are disincentivized from carrying any controversial speech seems like a net loss for free speech and, in other words, does not seem to accomplish the goal that these states set out with. So it seems to me that the appropriate basis on which to dispose of these cases is that these must-carry provisions, and again, I think the transparency and disclosure provisions are in a different category, do not in fact further the states’ asserted interests. But it is important to preserve the space for more carefully drawn laws that do serve free speech and democratic values. Wonderful.
Thanks. Next up, Eugene. Thanks. So I agree with a great deal of what my fellow panelists say. In fact, some of what we say is going to be similar, perhaps because we’re all right; there are only so many ways of saying things that are right. But I want to start from a slightly different place.
Let’s go back, way back, to 2010. Seems like a lifetime ago now. And Citizens United. So recall, Citizens United was a bitterly contested 5-4 decision, and it had to do with the question whether federal law could limit corporations’ ability to speak out in favor of or in opposition to candidates. The majority said corporations have the free speech right to speak for or against candidates.
The dissent said no, that it is permissible for the government to restrict that, in large measure because of a concern that corporations could leverage their economic power into political power, and that is something the legal system is entitled to, and perhaps ought to, do something about: it ought to limit the ability of large corporations to leverage their economic power into political power. So let’s walk through a few of the points in the dissent. Now, to be sure, it was just a dissent, but as I’ll mention later on, it may have something to tell us about the current situation. Here’s part of Justice Stevens’s argument.
A legislature might conclude that unregulated general treasury expenditures, again, just spending by corporations, will give corporations unfair influence in the electoral process and distort public debate. That’s one of the concerns that people have about Facebook’s ability, and Twitter’s, and Google’s through YouTube, to constrain what is available on social media.
Because of the speech of corporations, the opinions of real people may be marginalized. To be sure, some real people might like it very much, because they agree with the corporations’ decisions. But others, who don’t share the corporations’ views, to be sure never an entirely homogeneous set of views, but still ones that skew in a particular direction, may be marginalized.
Corporate expenditure restrictions are meant to ensure that competition among actors in the political arena is truly competition among ideas. As opposed to what? As opposed to competition over who manages to persuade corporations to put their economic power behind certain ideas. Corporate domination of electioneering can also generate the impression that corporations dominate our democracy.
And politicians who fear that a certain corporation can make or break their re-election chances may be cowed into silence about that corporation. Now, I actually agree with the majority in Citizens United, but I always thought that this argument of Stevens’s was a very powerful, important argument.
And note that he was arguing it as to corporate speech. The worry there was simply that corporations could spend some of their money to buy ads, not to block others from buying ads, but to themselves buy ads, and that this might unduly influence the political process. Wouldn’t it apply even more to corporate restrictions on individual speech?
Back then, the idea was that if General Motors, it was big once upon a time, could buy ads, that would skew the debate. But here, the concern isn’t just that social media platforms would buy ads or engage in their own speech; it’s that they would actually stop speech. To be sure, speech on their platforms, but still, these are very important venues that they provide.
So that, I think, is the real heart of the debate behind NetChoice: what limits does the First Amendment impose on the ability of legislatures to say, we are worried about these very powerful corporations, ones that are much wealthier than many of the corporations we were talking about back in Citizens United, and certainly than Citizens United itself or the corporations that funded it,
and that, on top of that, not only have money but have direct control over very important channels of communication. What limits does the First Amendment impose on the government’s power to try to restrain their use of that power to affect the political process? With an eye towards that, I wanted to capture on a slide some of the things that my fellow panelists said.
It’ll be familiar, but maybe laying them out will be helpful, because I think it’s absolutely right that this is not a question of here are two options, two buckets you could put things in. What we have is a platform spectrum. Let’s define platform to mean any place where people can speak through others’ property. So at one end, newspapers and magazines: they provide a platform for others to speak. A lot of what they carry is their own speech; a lot of it is op-ed submissions, advertisements, letters to the editor, and such. They exercise control over what’s published in them.
Partly, they have First Amendment rights to do that; we heard about the Miami Herald case. Partly, they have to do that, right? Their function is to deal with problems of information overload. If a newspaper somehow could provide all the stories in the world to you, of all qualities, all viewpoints, on all subjects, as in fact it probably could in the electronic age, it would be useless to you, right?
Because what you want, in large part, is their ability to winnow through all this. Another example is the parades case, a less important medium, but still: a parade is usually a combination of speech by a bunch of different speakers; the parade organizers themselves put on only a few of the floats, by and large. And they have the right to pick and choose because, among other things, that’s what makes something a St. Patrick’s Day parade or a July 4th parade or a World Socialist Congress parade or whatever else. Then we might start looking separately at different functions of different platforms. One is what I call the recommendation function: the newsfeed, here are the top stories, or here are stories you might like.
That, I think, sounds a lot like the front page of a newspaper, or an ideological parade, curated for you by the platform. You might also think separately of the conversation management function: Facebook, YouTube, Twitter managing, for example, comments by outsiders on users’ pages or tweets. There, too, they have to deal with some sort of information overload, although the overloading stuff is mostly spam.
You can’t have a social media platform that doesn’t do something about spam; that would be useless. But also, you might say, one of the things that our users want is to block things, to have our help in blocking things that are destructive of the conversation, and various other things. Then you might say there’s the hosting function:
Facebook, YouTube, Twitter providing hosting for users to reach willing viewers. Now all of a sudden, we’re not that interested in information overload, right? There are a lot of accounts on Twitter that I have absolutely no knowledge are even there. The mere fact that Twitter hosts them doesn’t affect my experience of it.
Now maybe if I subscribe to it, or people forward them to me or whatever else, it might. But whereas with the recommendation function, by definition it has to recommend only a small portion of things, when it comes to the hosting function, it could, maybe it shouldn’t have to, but it could, host anything.
Just to give an example, by the way, people say, what about porn? Twitter has lots of porn on there, but encountering it is very rare. Recently there’s been a little bit of porn spam, but setting aside porn spam in comments, I don’t recall once having inadvertently come across porn on Twitter, even though there are a huge number of porn accounts.
What about Facebook or Twitter or Gmail providing email or messaging? That also is hosting provided by the platform. But there, it’s even less a matter of information overload. And again, it doesn’t matter to me if there are Nazis out there who are using Gmail. It doesn’t affect my experience of Gmail.
Likewise, if the question is, what about people who want to send pro-suicide messages or pro-terrorism messages? Gmail in its terms of service might, you know, retain the right to block that kind of stuff. I don’t think it tries to. I don’t think we’d want it to try to. I don’t think we’d want it to say, oh, sorry, we refuse to host pro-suicide messages; we see an email that, our AI says, suggests that you are approving of suicide, and we’re going to block it. It’s private property. Maybe they should, as a libertarian matter, have the power to do that. Not clear that’s something the First Amendment ought to guarantee. Then we get into some of the other cases that we hear about.
So there’s a cable systems case, which the Parade case characterized as saying, look, programming offered on various channels by a cable network consists of individual unrelated segments that happen to be transmitted together for individual selection by members of the audience. And that’s one of the reasons why cable systems can be required to host material that they’d rather not host.
That’s the must-carry law. Another example is large shopping centers. Large shopping centers, under California law, have to host everybody on a content-neutral basis. They have to host pro-suicide messages. If somebody wants to say, we are the Final Exit campaigners, and we want to tell you suicide is a good solution to, I don’t know, to climate problems,
it’s a good solution to your depression, or whatever else, probably the shopping centers would very much dislike that, because that’s a total buzzkill on the shopping experience, right? People who are thinking about suicide are probably going to be spending less money. So even from the most purely financial perspective, they might want to block that, as well as for other reasons. Under California law, they cannot do so. And yet the Supreme Court has upheld that California law, and Justice Breyer later reinterpreted it as saying, look, requiring someone to host another person’s speech is often a perfectly legitimate thing for the government to do, citing that very example.
And the last of the cases that is most on point here is Rumsfeld v. Fair. That is a case which involved universities refusing to host military recruiters because at the time, military recruiters discriminated based on sexual orientation. And Congress passed a law that said, if you do this, you will lose federal funds.
The Supreme Court unanimously upheld that law, saying, we don’t care about the funding condition. This law is constitutional because it would have been constitutional even if it were just a categorical mandate. So the funding condition, which was without doubt a part of the statute, was not a part of the court’s rationale.
So here you have universities, which are fundamentally speakers. Most of what happens in universities is the university’s own speech, or speech that’s in some measure endorsed by the university. Nonetheless, even there the court said it’s permissible, essentially, to require universities to host another person’s speech.
And by the way, speaking of universities: not California, but two other states, Pennsylvania and New Jersey, have a rule like the California rule for shopping malls, but for open spaces at universities. So if you want to go out there and speak in favor of suicide, or in favor of terrorism, or whatever else on the Penn campus, you’re entitled to do that, even if you’re not a Penn student.
And then we heard some examples: phone companies, UPS, FedEx. They can be used as tools of mass communication as well as individual communication. UPS and FedEx may say, we don’t want to deliver books from the pro-suicide or the pro-terrorist or the pro-socialist or the atheist or whatever else bookstore, but they’re common carriers.
They’re not allowed to refuse, even though it’s their private property. Again, as a libertarian matter, maybe they should be free to do that, but as a First Amendment matter, I think we generally assume that they should have to host that, just as the phone company has to host things like, for example, get-out-the-vote lines for the KKK or the Communist Party or whatever else.
And then of course the postal service is the classic example; to be sure, it’s owned by the government, but it also fits neatly on the spectrum. So really my point is, the interesting question is, where do these social media items fall on the spectrum? Or perhaps, where should the line be drawn? Should the line be drawn right below parades, which is basically what the Florida and Texas laws in some measure try to command?
Should it be drawn just above cable systems, which would allow social media platforms to restrict things any way they like, including by saying we’re not going to carry email that expresses certain views, and direct messaging and such? Should it be drawn between, let’s say, email and messaging and the hosting?
Should it be drawn between the hosting and the managing of conversations, so that you can’t kick Donald Trump off the platform, but you might say, we’re going to limit what kinds of comments people can post on other people’s accounts, or we may just not forward Trump’s or others’ posts to anybody who isn’t already subscribed to the feeds?
So that’s how I conceptualize this, and I suppose real soon now the court will tell us where exactly it’s willing, at least, to draw that line.
All right, super. So I want to open it up to the panelists, but I think I’ll just kick it off with a question that’s framed by a lot of these comments, which is: how should we think about this problem? Is it a law-of-the-horse problem, where we just take the existing law and apply it to a new fact pattern without needing to grapple with the transformative impact of social media and the internet?
Or do we think about it from the standpoint of, maybe, the way the court has approached Fourth Amendment issues, recognizing that digital searches are particularly novel, at least in their scope, and therefore ought to be addressed through a sort of new doctrine? If I heard Commissioner Carr correctly, it seems like your take is to apply an existing doctrinal framework, but maybe articulate that framework in kind of functional terms.
And Ramya, I take you to be maybe challenging the kind of binary way that the court treats compelled speech or not compelled speech. Eugene, I take you to be embracing a kind of modern doctrinal framework, but maybe with a little less emphasis on functional considerations that delineate these different categories of speakers, and perhaps a little more emphasis on more formalistic considerations like attribution. I’m curious what you guys think of each other’s comments along those lines. Ramya, you want to kick us off?

Sure. Yeah. I think that something that probably all of us were getting at is that analogies are helpful, but only to a point.
And I completely agree with Eugene that it makes sense to imagine these examples and cases as, I’m not sure if that’s the right way to put it, but he’s talking about them falling along a continuum, with newspapers on one side of the spectrum and telephone companies on the other, but with many of the other examples he talked about falling somewhere in the middle. And trying to think about where, I say where the social media platforms fall on the spectrum, but I think really you have to disaggregate the functions that are played by these platforms and figure out where those functions lie on the spectrum. This was something that came up again and again during the questioning by some of the justices at the argument. Direct messages, for example, look a lot like email. And it’s true, as Eugene noted, that a lot of these companies reserve for themselves the right to curate as they see fit, to suspend people’s accounts as they see fit, for any reason or no reason at all. But I think that most users of these email services think that these companies are mainly in the business of transmitting speech from point A to point B, and that they should not be in the business of suspending people at all based on the content of what they have to say. The news feed, I think, starts to look more like what newspapers do. It’s not on all fours, and I think that there are meaningful differences that should matter to the First Amendment analysis. But that leaves hosting somewhere in between.
I think there is a compelling argument that hosting, and by that I’m referring to the ability to navigate manually, on, say, something like Facebook, to a user’s page in order to be able to see everything they have said, even if Facebook reserves the right to essentially bury those posts or comments in the news feed, I think there’s a compelling argument that that is perhaps more akin to what telephone companies do. Or at least there’s an argument that we should be able to regulate these platforms to the extent that they retain a lot of power over who gets to speak. But I think it’s complicated, because I think the platforms would say, our ability to decide who gets to speak, or who gets to stay on our platform, really is part of our ability to cultivate a speech community with certain norms, and if you ask us to carry the speech of everybody, regardless of whether or not they violate our rules, that impedes our ability to do that. So I guess I just find that to be a more complicated case, but I can certainly see the argument. I think it would be a much closer case if that’s all these Texas and Florida laws did.
In fact, they reach much more broadly, to prioritization. I think there are even questions over to what extent Florida’s law, for example, which reaches post prioritization and shadow banning, requires the platforms to disregard even user-expressed preferences, through features like blocking. Usually the platforms’ algorithms will respond, for example, to the fact that you’re quickly scrolling, that you have essentially ignored or jumped over certain people’s posts. To the extent that those laws require platforms to disregard even user preferences, I think that’s really problematic.

I agree, obviously, with a lot of what my colleagues were saying on those issues, and appreciated Professor Volokh’s approach as well. I do think, in the main, you can pull these factors out again, whether it’s centrality to the means of communication or risk of misattribution.
But I also thought it was really interesting that Professor Volokh started with Citizens United. I think as a thought experiment it’s interesting to think about whether there’s been a sort of political inversion on some elements of Citizens United, right? In the sense that, arguably, you could look at the late 2000s and say conservatives were generally happy with corporate speech, because it aligned with a lot of these free-market conservative principles. Flash forward to today, and arguably, I think you can make the argument, again, there are legal doctrinal issues, but as a realpolitik matter, politically, you can look at some large corporations, Disney, Silicon Valley, and look at conservatives not liking a lot of the speech coming from corporations. So does the minority view in Citizens United become more of a majority conservative Republican view with respect to the right to intrude on the free, willing decisions of corporations? And so I think that Citizens United, and that potential political inversion of it, is an interesting moment.

Yeah, so I think it definitely is. And of course, the inversion really is a flip of both sides, right?
It used to be, around the time of Citizens United, people were saying, look, obviously these large corporations are potentially a danger to democracy. They may have their property rights, but property rights have to be limited in some way. This is the voice from the left, by the way, not the socialist left;
this was the moderate, centrist left. And now we hear a lot of, oh, obviously you can’t restrict social media platforms; after all, it’s private property. Liberals have long been open to regulating the use of private property in the public interest, and even conservatives have, in some measure, understood that. And liberals have long been skeptical of the idea that, yes, Mark Zuckerberg, some billionaire, is going to be making all these decisions, and even if it’s not just him, it’s a hand-picked group of people whom he selected.
So yes, it’s true that liberals have moved and conservatives have moved. Now, how much of that, you might say this is hypocrisy on both sides, or it could be learning on both sides. It could be that some liberals at least have been educated about some of the virtues of the free market, and some of the conservatives have recognized, yes, maybe corporate power is something that liberals have been right, in some measure, in condemning.
There are the old famous lines that a conservative is a liberal who’s been mugged, and a liberal is a conservative who’s been arrested, right? And part of it may just be that we all just pretend to have certain ideological views because of what’s convenient for us, or what we are currently into, just what’s been happening in our lives. But it may be that we are learning in some measure, right?
So it may be that the answer is somewhere in between. Private property is tremendously important and very valuable, and to the extent there can be more and more competition, for example, among social media platforms, that would be great. But I will tell you, I actually had a personal little epiphany in early 2021 with Parler. Do you remember Parler?
Let’s go back a year or two before Parler. This is before Musk took over Twitter. Facebook and Twitter are being faulted by conservatives for having a liberal bias, or at least for blocking certain kinds of conservative speech. It was clear that they were doing this in some measure, although perhaps not that much.
And what was the standard response? Start your own platform. Stop your whining. You believe in the free market; start your own platform. So they started Parler. It was big. I remember it was shooting right up there. And then, following the January 6th riots, what happened was that, I think, Amazon Web Services, Google running the Android app store, and Apple running the Apple App Store basically said, we will deplatform Parler, basically make it impossible for it to function, unless you implement the kinds of moderation that we insist on.
So it’s basically, just kidding. We said start your own; no, start your own that we like, that we approve of; otherwise, too bad for you. Turns out there’s a lot of infrastructure out there that you need to plug into in a very networked world, and if you don’t play along with us, the people running it, that’s another form of moderation that we’re going to exercise.
Now again, from a pure private property perspective, if you really are hardcore on that, which some of my libertarian friends are, maybe that’s perfectly right. But to the extent you worry about kind of antitrust-ish concerns, corporate power concentration concerns, you might think that maybe the liberals had a good point in being worried about that.
So we’ve had a kind of broad, panel-wide embrace of the idea that we’re going to categorize on some spectrum, and I’m curious what the next step is. So suppose you have the idea that there is a continuum of different types, running from, we might say, hosting to some form of active moderation in the form of editorial judgments or what have you.
Are we just picking a point in between two of those categories and saying it’s constitutional to regulate on the left side and unconstitutional to regulate on the right side? Or are we doing something else once we’ve identified where you fall on the continuum? I took Ramya to be embracing maybe some additional analysis about the role of the regulation in fostering First Amendment values. I took, maybe incorrectly, Commissioner Carr and Eugene to be delineating a more categorical approach to what the next step would look like: that you just identify it as being akin to moderation or akin to hosting, and then you’re done. So I’m curious what that next step looks like for you, Ramya, and then, for the other two, whether you agree with that more one-step categorical approach.
Yeah. I think something which makes these cases so interesting is that there are free speech interests on all sides, and that the states are, they say, acting to protect the free speech interests of the people that use these social media platforms, ensuring that they have access to one of the most important forums for communication today.
And I think that ought to figure in the First Amendment analysis. If a law is asserted to further free speech interests, I think there needs to be an assessment of whether the law is in fact doing that. I think that where a particular function or entity falls on the continuum we’ve been talking about is relevant to assessing the burden that is placed on First Amendment rights, and also the strength of the government’s First Amendment interests. So I think, for example, the differences between social media platforms and newspapers, in particular the newsfeed function of social media platforms, ought to factor into the First Amendment analysis. So I don’t think that the analysis stops at placing the entity or the function on this continuum; I think that then shades into the assessment of whether First Amendment rights are being violated or not.

Yeah, I think it’s a continuum, but to borrow a phrase from some of my friends on the left, it’s a living continuum.
That’s why I go one level below the actual technology itself to certain values and factors. For instance, market power is one of the factors I’ve talked about, and that can change. Centrality to public discourse as a factor can change over time, and I think cable must-carry is an example of that.
I think that social media regulation today fits most analogously around that Turner must-carry case. Now, that said, I think if the Supreme Court took up Turner-style must-carry laws today, they wouldn’t survive First Amendment scrutiny, because I think being on cable is no longer central to economic vitality for business, or to political discourse, or a monopoly on distribution, the way it was in the 1990s. And again, I think social media is roughly in the footsteps of cable, but must-carry laws for cable today, in my view, would most likely be struck down on First Amendment grounds, because those values have eroded as applied to that technology.
So I’m not sure how much the doctrine actually turns on the centrality of the medium. For example, PruneYard: even in the late 1970s, when people would still go out and leaflet and such rather than staying home and posting things on Twitter, it’s not like that was a hugely important medium of communication.
Likewise, military recruiters would like to recruit on campus, but it’s not like that is central to their ability to do so. At the same time, Turner did indeed talk about the importance of cable. And if you look back on American telecommunications history, American media history, whenever there has been a small number of very powerful entities seen as standing at the channels of communication, there have been attempts to regulate.
Genevieve Lakier at Chicago, for example, has pointed out that in the late 1800s, telegraph companies, of which there were very few, were seen as actually engaging in, call it censorship, call it moderation, whatever it is: they were preferring the communications of their business partners, like the Associated Press, which was aligned with them, and also, I think, refused to communicate certain kinds of things based on ideology, Western Union I want to say. And Congress regulated that and basically made them into common carriers who had to carry everybody. And when it came to the television and radio networks, the same thing was done through things like the Fairness Doctrine and the Personal Attack Rule and cross-ownership restrictions and such.
Now I think that some of those solutions, the cures, were worse than the disease. I think the Fairness Doctrine, for example, was unconstitutional, and I’m very glad, even though the court did uphold it, very glad that the FCC rejected it. The Personal Attack Rule is actually a closer call. But I certainly understand that when we had three networks that were basically running national news, in many ways, not entirely, there were always newspapers and magazines, but still, they were for many people a tremendously important source of their daily news,
it’s unsurprising that Congress would be a little worried about that, for good reasons and for bad. So yeah, functionally, when you’ve got entities that are as important as Facebook, and especially Twitter, now, of course, the cast changes, TikTok, various others, but what doesn’t change is that it seems like a relatively small set, each of which has an outsized influence.
It’s unsurprising that we’d be wondering a little bit about whether that power needs to be checked.

So I’d like to invite folks to collect their questions and come up. I’ll ask one final question as you’re doing so, and that is to raise a point that Ramya made and just turn it over to the other panelists: the way in which the court seems to be struggling with whether to consider these issues on a facial or an as-applied basis. A lot of the continuum that you’ve delineated has to do with the type of activity that is at issue. It seems to be speaker-focused: whether you’re engaged in content moderation, or hosting, or advertising, or promoting a news feed, or what have you.
And I’m just curious about that from the standpoint of most of modern First Amendment law. We’ve shifted over to a neutrality-based framework; we focus on the legality of rules for the most part. We no longer really draw distinctions between restrictions on different types of media, and have shifted over to evaluating whether the rule is content based or content neutral, satisfies compelling governmental interests, and so on.
Are the compelled speech cases just different, or do you see a place for facial analysis in these cases? How should the court be thinking through that issue?

I do think the court cares a lot about content neutrality and viewpoint neutrality. And so the Texas law is probably somewhat more content neutral than the Florida law.
But I think that’s a separate question from facial challenges versus as-applied challenges. You could imagine as-applied challenges to neutral laws and facial challenges to neutral laws, and the same for content- and viewpoint-based laws. I think there were two things going on.
One is, I think the court was concerned in part that the laws cover a huge set of things. They cover everything from direct messaging, where I think the justices did have some unease at the notion of the platform saying, yes, we can just refuse to carry direct messages that we think are racist, or we think are anti-vax, or whatever else. Really? I want to message you, and they’re going to tell me to stop? That wasn’t sitting well with some of the justices. On the other hand, to the extent that some of the laws may actually ban even responses, even fact checks and such, that seems to be very hard to justify.
So I think part of the concern has less to do with facial versus as-applied; a lot has to do with almost a severability question. If there had been subsections A, B, C, D, E, F, G, H that covered each one of these particular functions, then maybe they might say, this one’s constitutional,
this one’s not. But one of the things NetChoice seems to be saying is, we want to throw the whole thing out, even as to, maybe, direct messaging and such. And I think some of the justices’ reaction was, there’s just too much going on here; maybe we don’t even know how much is going on here; we’re uneasy just striking the entire thing down facially.
It’s part of the court’s retreat from overbreadth analysis in recent years.

Great. Okay. Let’s start Q&A.

Hi. Thank you. Hello. Thank you so much for the interesting talk, as always. I’m Ethan. I’ve been here; I’m interested in free speech issues in general. I have a question for any one of the panelists.
Very interesting arguments. I was on the platform, from LinkedIn, and thanks to Elon Musk, I’m not on the platform anymore. But I wonder whether the question is this: everybody using Signal and ProtonMail, where all the communications are encrypted by the protocol itself. Meredith Whittaker, who is the president of Signal, has been doing a lot of work with foreign governments to prevent that.
If encryption was on the table, where governments cannot see who is talking to whom, and not even the content, would these problems go away? If you have a platform like that, not public-facing, would these problems go away? Essentially, you would not have anything public-facing; you would join a group.
So, like Signal has groups, and when you join the group, you can share information with whoever you want. You will not have the ability to publish. But to the points that you were making about the ability of these platforms to modify what you talk about with each other, or who talks to whom, that will be solved with encryption.
Maybe, although with modern AI technology, we know that, for example, a lot of people want platforms to deplatform people not just for what they say in a particular communication, but for what they’ve said elsewhere. So you could certainly imagine a platform saying, we’re going to refuse to allow you to use our email because you are a known member of group X.

Oh, I see. Got it.

But you’re right that maybe, as to the one-to-one communications, if there was more encryption, that would just become less of an issue.
Hi, I’m Danny. I’m a 3L at the law school here, and I’m very interested in issues of compelled speech. Zooming out from the doctrine, I’m wondering if each of you could give us sort of your normative take on what social media regulation could look like within the confines of what you think is permissible under the First Amendment. If you could talk a little bit about the interests that you would seek to advance with those regulations, and the details of what they might look like from your view.
Yeah, for my part, in the main, I want more speech, more diversity of opinion, and less of what I would call censorship. In part, I think there’s a federalism-like idea here: the First Amendment is about not having the government pick winners and losers among ideas.
And the idea that we should replace that with a mega-corporation here in Silicon Valley making those decisions, I think, has a lot of problems as well, in part because of the consequences, right? If you leave speech up, people say, oh, you shouldn’t have left that up; it’s harmful speech; I don’t like that speech.
But if you take speech down, you don’t see all the negative consequences that flow from that idea not being able to get out. One of the things that makes us great, one of the ways we solve our biggest, most complex challenges, is that we take ideas, we spin them around, we think about them from different perspectives, and we move forward.
In fact, I think the better way to think about this, for those sides that are encouraging more censorship by social media, is that I don’t think the right framing is, should this idea be silenced? I think it should be, who should do it? And I think we need to decentralize it, push it down, have the corporations engage in less censorship, and empower the individual participants in the digital town square to do that. On X, as an example,
it’s muting, it’s blocking, it’s potentially introducing a concept of your own social media filters. If you want Fox News to filter your feed, maybe there’s a plug-in there; you want MSNBC to filter your feed, plug it in over there. But find a way to take that concentration of power and redistribute it.
So if a mistake is made, it’s made individually, thousands of times, and not as a binary, centralized mistake. And going back to something else we alluded to: as a conservative, I think conservatives were really attuned for years to the overreach and abuse of government power, and rightly so; the government can put you in jail. But I think there was a blind spot for a long time to the abuses that come from concentrations of corporate power. I think that’s part of what you see being reemphasized, at least within portions of conservative thinking: the harms to individual liberty that come from abusive exercises of concentrated corporate power.
So I also really like this idea of middleware, the idea that users could have access to tools that would help them curate their online and social media experiences to their own preferences, rather than those set by the platforms. I think that’s a really promising way through these two undesirable options of the platforms getting to wield unchecked power, on the one hand, and giving the government that power, on the other.
I think that there’s a real role for Congress to pass interoperability legislation, legislation that would make it easier to build these tools, but also, I think, to make it easier to build alternative social media platforms that are governed by different rules. I think one main reason that people don’t leave
Facebook, for example, even when they disagree with Facebook’s content moderation rules, is that they don’t want to lose the communities that they have forged while on Facebook. And so, and we’ve been talking a lot about disaggregating functions, you can imagine thinking that Facebook involves a couple of different layers: the infrastructure layer, like the social graph, and then, on top of that, the moderation layer. You could imagine legislation that would require the largest social media companies, like Meta, to open up the infrastructure layer in a way that would mandate or facilitate users of other platforms interacting with users that are still on Facebook. I think that could be a real engine for competition in this space. I also think transparency legislation that requires the platforms to share more information about how they are moderating content, and the impact of those decisions on public discourse and democracy, absolutely has to be part of it. And privacy legislation. So I would prefer not to see legislation that directly interferes with the content moderation decisions of the platforms. But I think that there are a variety of other regulatory interventions on the table that would help us address the problem of concentrated private power over public discourse without getting into that thicket.
Thank you.
Hi. My name is Silva. I’m a 2L. I want to ask a little bit more about what constitutional theory of the First Amendment the Supreme Court could apply to platform speech in this case, one that might hit the middle ground of allowing for potential future legislation while not adopting the broad analogies pressed by either side.
In terms of a theory for striking down the Texas law, for example: it seemed, Ramya, that you focused more on the idea that the state’s speech interest in passing the law is not actually being furthered by the law’s enactment, or that it wouldn’t be. But we also talked about this theory of balancing different users’ speech, the kind that comes up in Citizens United, of having the government intervene to correct outsized influence. I’m curious what your thoughts are on that as a motive, and whether that’s a permissible motive for speech intervention.
And Ramya, if I could actually piggyback a little bit on that. I want to make sure I understand: you’re saying that the Texas law in particular, which is the one I have in mind, although I’d love to hear views about Florida as well, just doesn’t serve the government’s interests here.
But I take it Texas says: our interest is in making sure that people aren’t going to be constrained by the viewpoint-based judgment of the platform. It’s like our interest in making phone companies common carriers. It’s like California’s interest in not allowing shopping malls to pick and choose what kind of speakers to allow, or to exclude them altogether.
There are some states, and some cities, that have public accommodations laws banning restaurants from discriminating not just based on race and religion and such, but also based on political affiliation. Why doesn’t the Texas law actually do a pretty good job of serving those interests?
Now maybe it does so at very high cost as to other things, but if the interest is essentially this non-discrimination interest, in making sure that people aren’t silenced by this large, powerful platform that says we don’t like your views, why doesn’t the law serve them pretty well?
Yeah, I think this kind of goes back to the disaggregation of functions. It would be one thing if Texas were saying that all content has to be findable in some way, manually, by users who use the platform. I think it’s quite another to directly interfere with the moderation of the Facebook News Feed. And to the extent that Texas’s interest is in maximizing the amount of speech that occurs on these platforms, I think a law that essentially results in a mass exodus from the platform, by users who are turned off by a platform with no meaningful speech rules responsive to their interests, doesn’t actually achieve Texas’s stated interest. And I worry that the law as crafted would have that effect, precisely for the reasons alluded to both by the advocate for NetChoice in these cases before the Supreme Court and by some of the justices: if you are required to carry the opposing viewpoints on all issues, including things like terrorism, suicide, and racism, then you’re going to have either a platform where the company is required to carry racist speech, pro-terrorist speech, and suicide promotion, or a platform that just doesn’t carry speech on any controversial issue.
And I think both of those are undesirable. Great. I’m sorry, I don’t want to distract from the question: can you think of a theory that… So, I’m sorry, I realized I hijacked the question a bit. I appreciate your answer, but I want to make sure we get to the… Yeah. So I think the transparency and disclosure provisions are one answer. We’ve been talking a lot about the must-carry provisions in these laws, but the laws also included a number of disclosure provisions. The only ones at issue before the Supreme Court were what were termed the individualized-explanation provisions, which would require the companies to provide users with a reason or explanation for taking their speech down.
And we argued in the cases that the court should evaluate these provisions under a framework set out in a case called Zauderer. The rule set forth in that case is that if you’re talking about a compelled commercial disclosure of factual and uncontroversial information, the court is going to apply a more lenient standard that asks whether the disclosure serves a legitimate government interest and whether it’s unduly burdensome on protected speech. A reason we advocated for this framework, and really the heart of why the court established this framework in the first place, is a recognition that regulations, disclosure requirements in this case, can actually serve important free speech values. Requiring the disclosure of truthful, factual information in the commercial context, about the terms under which a service or product is being offered, actually serves an important free speech interest.
It not only allows consumers to make more informed purchasing decisions, but it helps us, collectively, make more informed democratic decisions. And so I think a framework that allows us to take into account the different free speech interests at play is important.
And that is why in these cases we argued, at least with respect to Texas’s disclosure provision, which required the provision of a reason, and to the extent that really just requires pointing to the part of your terms that you’re relying on when you decide to suspend somebody’s account or kick them off the platform, that it’s not a particularly onerous obligation. And it serves an important free speech and democratic interest, and so it should survive First Amendment scrutiny. So that’s an example. Thank you. I just wanted to build on Danny’s earlier question about the kinds of regulatory regimes that you’re hoping to see.
And I was surprised that none of you mentioned antitrust law. I ask this because a lot of the concern that seems to be common among the three of you is about market power, and the outsized private power that these companies exert over public discourse.
And it seems that some of the network effects that you’re concerned about might be diminished, and the stakes would just be lower, if these companies were smaller and had a smaller user base. So I’m wondering if and how antitrust law fits into the regulatory landscape that you think is appropriate, and if not, why not?
Yeah, I agree. I do think there’s an appropriate role for antitrust here to vindicate some of these concerns about abuse of market power. I also think that my theory as to must-carry for social media, such as it is, really, in my view, only applies to the largest general-use platforms.
We talked about Telegram and Signal elsewhere, and I think where you have something like a subreddit, a community that has a clear editorial bent, then again, that pushes you much more towards looking like the newspaper than like the traditional general-use style platform.
I think that’s part of how you reconcile this with some of the public accommodation law cases, like the Colorado cake bakery case, where you have what’s effectively, I think, artistic expression, similar to Hurley, the parade case, but also this idea that it’s a single bakery, a bespoke, one-off business, as opposed to a general-use social media platform with millions and millions of users.
Hello. I had a question that centers around Mr. Carr’s analogy of social media being essentially a form of a newspaper or an editorial. The way I see it is, if a newspaper spreads misinformation, then the company, the corporation, is the one that’s at fault.
But if you have, for instance, someone messaging another person spreading misinformation, that specific person is the one at fault. So where do you see social media falling between those two: having the corporation take the blame for misinformation that is spread, perhaps by putting together a larger team of editorial or regulatory people to police it, or should social media leave that area to the people, letting the users decide for themselves what is misinformation, what is too far, and what is not, in a sense? Yeah, thanks. There’s a lot in there. Part of it is, I did an op-ed, I guess about four years ago, that was effectively titled “disinformation is the new disinformation.”
Meaning that, many times, people take political speech they don’t agree with and put the label of misinformation or disinformation on it. I think other elements of your question go to this concept of Section 230. Where X or Twitter or Facebook is speaking in its own name and gets something wrong in a way where liability would attach, tortious speech or something, then they do continue to be liable for that. But where it’s a user of the platform that says something libelous, Section 230 assigns the blame for that to the utterer of the Facebook post or the Twitter message rather than to the platform itself.
And that’s why I think if I were to reform Section 230, I would say we should keep Section 230(c)(1), which again largely says that if you leave someone else’s speech up, you’re not liable for it. I think, in the main, we need to reform 230 by effectively eliminating Section 230(c)(2). There’s some dispute about it, but as I read it, that provision basically gives social media companies carte blanche to censor speech, not just in reliance on their First Amendment rights, whatever their scope, but with these bonus Section 230 protections.
So the assignment of liability, to your question, goes a little bit to Section 230. One thing I wanted to just mention, which is not directly about how platforms… I’m sorry, let me just step back. If a newspaper publishes misinformation, as a general matter, if all that is alleged is misinformation, there’s no liability.
In fact, that was decided some 200-plus years ago, when the consensus developed that the Sedition Act was a bad idea. The Sedition Act was an early law that sought to ban misinformation about the government, and the rationales that were offered are actually very similar to what is offered today: people are spreading lies about the government. Remember, the Sedition Act applied only to false statements. That was seen as a huge improvement over the old English law of seditious libel, which applied even to true statements; the Act covered only false statements that were malicious and that damaged the reputation of the government. And the worry was that such speech would lead to insurrection, like the Whiskey Rebellion or Fries’s Rebellion, so why should anybody be able to spread misinformation about the government, especially in a malicious way? Unsurprisingly, the problem was that this was applied to things that today we would label opinions, or possibly true statements, and the consensus emerged that newspapers could not be punished for misinformation as such.
Now, there’s a particular kind of misinformation, which is defamation: false statements about particular people or corporations, not governments, but corporations, that damage their reputation. And there you’re quite right: newspapers would be liable. Platforms are not liable, and one reason that platforms have become so immensely powerful is precisely that Section 230, for better or worse, I think probably for better, but still, it’s an artifact of 230, made it possible for them to operate without liability for defamation.
And for a few other related torts. So I do think that if you recast this as libel, or a few other such things, that would actually be a very important point. I just want to caution people not to assume that newspapers are normally liable for misinformation and platforms get a special exception.
Nobody is liable for misinformation as such: misinformation about the government or about science or about history or whatever else is immune from liability in general. All right, we’re at time. This is the last event of the Con Law Center’s programming for this academic year, but we’re going to have a reception afterwards, a chance to continue the conversation with the speakers, and then we will resume next year with Constitution Day.
Please join me in thanking the panelists.