At the Supreme Court’s oral arguments about Florida and Texas’ social media content moderation laws, there was a fair bit of talk about Section 230. As we noted at the time, a few of the Justices (namely Clarence Thomas and Neil Gorsuch) seemed confused about Section 230 and also about what role (if any) it had regarding these laws.
The reality is that the only role for 230 is in preempting those laws. Section 230 has a preemption clause that basically says no state laws can go into effect that contradict Section 230 (in other words: no state laws that dictate how moderation must work). But that wasn’t what the discussion was about. The discussion was mostly about Thomas and Gorsuch’s confusion over 230: they seemed to think that the core argument for Section 230 (that you’re not held liable for third-party speech) contradicts the arguments laid out by NetChoice/CCIA in these cases, where they talked about the platforms’ own speech.
Gorsuch and Thomas were mixing up two separate things, as both the lawyers for the platforms and the US made clear. There are multiple kinds of speech at issue here. Section 230 does not hold platforms liable for third-party speech. But the issue with these laws was whether or not they restricted the platforms’ ability to express themselves through the way in which they moderated. That is, the editorial decisions expressing “this is what type of community we enable” are a form of public expression that the Florida & Texas laws seek to stifle.
That is separate from who is liable for individual speech.
But, as is the way of the world when it comes to discussions of Section 230, lots of people are going to get confused.
Today that person is Steven Brill, one of the founders of NewsGuard, a site that seeks to “rate” news organizations, including for their willingness to push misinformation. Brill publishes stories for NewsGuard on a Substack (!?!?) newsletter titled “Reality Check.” Unfortunately, Brill’s piece is chock full of misinformation regarding Section 230. Let’s do some correcting:
February marks the 28th anniversary of the passage of Section 230 of the Telecommunications Act of 1996. Today, Section 230 is notorious for giving social media platforms exemptions from all liability for pretty much anything their platforms post online. But in February of 1996, this three-paragraph section of a massive telecommunications bill aimed at modernizing regulations related to the nascent cable television and cellular phone industries was an afterthought. Not a word was written about it in mainstream news reports covering the passage of the overall bill.
The article originally claimed it was the 48th anniversary, though it was later corrected (without a correction notice — which is something NewsGuard checks for when rating the trustworthiness of publications). That’s not that big a deal, and I don’t think there’s anything wrong with “stealth” corrections for typos and minor errors like that.
But this sentence is just flat out wrong: “Section 230 is notorious for giving social media platforms exemptions from all liability for pretty much anything their platforms post online.” It’s just not true. Section 230 gives limited exemptions from some forms of liability for third-party content that the platforms had no role in creating. That’s quite different from what Brill claims. His formulation suggests they’re not liable for anything they, themselves, put online. That’s false.
Section 230 is all about putting the liability on whichever party created the violation under the law. If a website is just hosting the content, but someone else created the content, the liability should go to the creator of the content, not the host.
Courts have had no problem holding social media platforms liable for things they themselves post online. We have a string of such cases, covering Roommates, Amazon, HomeAway, InternetBrands, Snap and more. In every one of those cases (contrary to Brill’s claims), the courts found that Section 230 does not protect things these platforms post online.
Brill gets a lot more wrong. He discusses the Prodigy and CompuServe cases and then says this (though he gives too much credit to the idea that CompuServe’s lack of moderation was the reason the court ruled the way it did):
That’s why those who introduced Section 230 called it the “Protection for Good Samaritans” Act. However, nothing in Section 230 required screening for harmful content, only that those who did screen and, importantly, those who did not screen would be equally immune. And, as we now know, when social media replaced these dial-up services and opened its platforms to billions of people who did not have to pay to post anything, their executives and engineers became anything but good Samaritans. Instead of using the protection of Section 230 to exercise editorial discretion, they used it to be immune from liability when their algorithms deliberately steered people to inflammatory conspiracy theories, misinformation, state-sponsored disinformation, and other harmful content. As then-Federal Communications Commission Chairman Reed Hundt told me 25 years later, “We saw the internet as a way to break up the dominance of the big networks, newspapers, and magazines who we thought had the capacity to manipulate public opinion. We never dreamed that Section 230 would be a protection mechanism for a new group of manipulators — the social media companies with their algorithms. Those companies didn’t exist then.”
This is both wrong and misleading. First of all, nothing in Section 230 could “require” screening for harmful content, because both the First and Fourth Amendments would forbid that. So the complaint that it did not require such screening is not just misplaced, it’s silly.
We’ve gone over this multiple times. Pre-230, the understanding was that, under the First Amendment, liability of a distributor was dependent on whether or not the distributor had clear knowledge of the violative nature of the content. As the court in Smith v. California made clear, it would make no sense to hold someone liable without knowledge:
For if the bookseller is criminally liable without knowledge of the contents, and the ordinance fulfills its purpose, he will tend to restrict the books he sells to those he has inspected; and thus the State will have imposed a restriction upon the distribution of constitutionally protected as well as obscene literature.
That’s the First Amendment problem. But, we can take that a step further as well. If the state now requires scanning, you have a Fourth Amendment problem. Specifically, as soon as the government makes scanning mandatory, none of the content found during such scanning can ever be admissible in court, because no warrant was issued upon probable cause. As we again described a couple years ago:
The Fourth Amendment prohibits unreasonable searches and seizures by the government. Like the rest of the Bill of Rights, the Fourth Amendment doesn’t apply to private entities—except where the private entity gets treated like a government actor in certain circumstances. Here’s how that happens: The government may not make a private actor do a search the government could not lawfully do itself. (Otherwise, the Fourth Amendment wouldn’t mean much, because the government could just do an end-run around it by dragooning private citizens.)

When a private entity conducts a search because the government wants it to, not primarily on its own initiative, then the otherwise-private entity becomes an agent of the government with respect to the search. (This is a simplistic summary of “government agent” jurisprudence; for details, see the Kosseff paper.)

And government searches typically require a warrant to be reasonable. Without one, whatever evidence the search turns up can be suppressed in court under the so-called exclusionary rule because it was obtained unconstitutionally. If that evidence led to additional evidence, that’ll be excluded too, because it’s “the fruit of the poisonous tree.”
All of that seems kinda important?
Yet Brill rushes headlong on the assumption that 230 could have and should have required mandatory scanning for “harmful” content.
Also, most harmful content remains entirely protected by the First Amendment, making this idea even more ridiculous. There would be no liability for it.
Brill seems especially confused about how 230 and the First Amendment work together, suggesting (incorrectly) that 230 conveys some sort of extra editorial benefit that it does not:
With Section 230 in place, the platforms will not only have a First Amendment right to edit, but also have the right to do the kind of slipshod editing — or even the deliberate algorithmic promotion of harmful content — that has done so much to destabilize the world.
Again, this is incorrect on multiple levels. The First Amendment gives them the right to edit. It also gives them the right to slipshod editing. And the right to promote harmful content via algorithms. That has nothing to do with Section 230.
The idea that “algorithmic promotion of harmful content… has done so much to destabilize the world” is a myth that has mostly been debunked. Some early algorithms weren’t great, but most have gotten much better over time. There’s little to no supporting evidence that “algorithms” have been particularly harmful over the long run.
Indeed, what we’ve seen is that while there were some bad algorithms a decade or so ago, pressure from the market has pushed the companies to improve. Users, advertisers, and the media have all pressured the companies to improve their algorithms, and it seems to have worked.
Either way, those algorithms still have nothing to do with Section 230. The First Amendment lets companies use algorithms to recommend things, because algorithms are, themselves, expressions of opinion (“we think you would like this thing more than the next thing”) and nothing in there would trigger legal liability even if you dropped Section 230 altogether.
It’s a best (or worst) of both worlds, enjoyed by no other media companies.
This is simply false. Outright false. EVERY company that has a website that allows third-party content is protected by Section 230 for that third-party content. No company is protected for first-party content, online or off.
For example, last year, Fox News was held liable to the tune of $787 million for defaming Dominion Voting Systems by putting on guests meant to pander to its audience by claiming voter fraud in the 2020 election. The social media platforms’ algorithms performed the same audience-pleasing editing with the same or worse defamatory claims. But their executives and shareholders were protected by Section 230.
Except… that’s not how any of this works, even without Section 230. Fox News was held liable because the content was produced by Fox News. All of the depositions and transcripts involved… Fox News executives and staff. Because they created the defamatory content.
The social media apps didn’t create the content.
This is the right outcome. The blame should always go to the party who violated the law in creating the content.
And Fox News is equally protected by Section 230 if defamation created by someone else is posted in a comment on a Fox News story (something that seems likely to happen frequently).
This whole column is misleading in the extreme, and simply wrong at multiple points. NewsGuard shouldn’t be publishing misinformation itself, given that the company claims it’s promoting accuracy in news and pushing back against misinformation.