Ctrl-Alt-Speech: Do You Really Want The Government In Your DMs?

Leigh Beadon, May 18, 2024 at 00:15

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Was There A Trojan Horse Hidden In Section 230 All Along That Could Enable Adversarial Interoperability?

Mike Masnick, May 2, 2024 at 18:23

There’s a fascinating new lawsuit against Meta that includes a surprisingly novel interpretation of Section 230. If the court buys it, this interpretation could make the open web a lot more open, while chipping away at the centralized control of the biggest tech companies. And, yes, that could mean that the law (Section 230) that is wrongly called “a gift to big tech” might be a tool that undermines the dominance of some of those companies. But the lawsuit could be tripped up for any number of reasons, including a potentially consequential typo in the law that has been ignored for years.

Buckle in, this is a bit of a wild ride.

You would think with how much attention has been paid to Section 230 over the last few years (there’s an entire excellent book about it!), and how short the law is, that there would be little happening with the existing law that would take me by surprise. But the new Zuckerman v. Meta case filed on behalf of Ethan Zuckerman by the Knight First Amendment Institute has got my attention.

It’s presenting a fairly novel argument about a part of Section 230 that almost never comes up in lawsuits, but could create an interesting opportunity to enable all kinds of adversarial interoperability and middleware to do interesting (and hopefully useful) things that the big platforms have been using legal threats to shut down.

If the argument works, it may reveal a surprising and fascinating trojan horse for a more open internet, hidden in Section 230 for the past 28 years without anyone noticing.

Of course, it could also have much wider ramifications that a bunch of folks need to start thinking through. This is the kind of thing that happens when someone discovers something new in a law that no one really noticed before.

But there’s also a very good chance this lawsuit flops for a variety of other reasons without ever really exploring the nature of this possible trojan horse. There are a wide variety of possible outcomes here.

But first, some background.

For years, we’ve talked about the importance of tools and systems that give end users more control over their own experiences online, rather than leaving it entirely up to the centralized website owners. This has come up in a variety of different contexts in different ways, from “Protocols, not Platforms” to “adversarial interoperability,” to “magic APIs” to “middleware.” These are not all exactly the same thing, but they’re all directionally strongly related, and conceivably could work well together in interesting ways.

But there are always questions about how to get there, and what might stand in the way. One of the biggest things standing in the way over the last decade or so has been interpretations of various laws that effectively allow social media companies to threaten and/or bring lawsuits against companies trying to provide these kinds of additional services. This can take the form of a DMCA 1201 claim for “circumventing” a technological block. Or, more commonly, it has taken the form of a civil CFAA (Computer Fraud & Abuse Act) claim.

The most representative example of where this goes wrong is when Facebook sued Power Ventures years ago. Power was trying to build a unified dashboard across multiple social media properties. Users could provide Power with their own logins to social media sites. This would allow Power to log in to retrieve and post data, so that someone could interact with their Facebook community without having to personally go into Facebook.

This was a potentially powerful tool in limiting Facebook’s ability to become a walled-off garden with too much power. And Facebook realized that too. That’s why it sued Power, claiming that it violated the CFAA’s prohibition on “unauthorized access.”

The CFAA was designed (poorly and vaguely) as an “anti-hacking” law. And you can see where “unauthorized access” could happen as a result of hacking. But Facebook (and others) have claimed that “unauthorized access” can also be “because we don’t want you to do that with your own login.”

And the courts have agreed with Facebook’s interpretation, with a few limitations (that don’t make that big of a difference).

I still believe that this ability to block interoperability/middleware with law has been a major (perhaps the most major) reason “big tech” is so big. They’re able to use these laws to block out the kinds of companies who would make the market more competitive and pull down some of the walls of walled gardens.

That brings us to this lawsuit.

Ethan Zuckerman has spent years trying to make the internet a better, more open space (partially, I think, in penance for creating the world’s first pop-up internet ad). He’s been doing some amazing work on reimagining the digital public infrastructure, which I keep meaning to write about, but never quite find the time to get to.

According to the lawsuit, he wants to build a tool called “Unfollow Everything 2.0.” The tool is based on a similar tool, also called Unfollow Everything, that was built by Louis Barclay a few years ago and did what it says on the tin: let you automatically unfollow everything on Facebook. Facebook sent Barclay a legal threat letter and banned him for life from the site.
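
To make the mechanics concrete, here is a minimal, purely illustrative sketch (in TypeScript, written as a browser console snippet or extension content script) of what an “unfollow everything” style tool might do. This is not Barclay’s or Zuckerman’s actual code; the selector and the pacing are placeholder assumptions, since Facebook’s real markup is different and changes constantly.

```typescript
// Illustrative sketch only. The selector below is a made-up placeholder,
// not Facebook's actual markup.
const UNFOLLOW_SELECTOR = '[aria-label="Unfollow"]'; // hypothetical selector

const sleep = (ms: number): Promise<void> =>
  new Promise((resolve) => setTimeout(resolve, ms));

async function unfollowEverything(maxPasses = 50): Promise<number> {
  let unfollowed = 0;
  for (let pass = 0; pass < maxPasses; pass++) {
    const buttons = Array.from(
      document.querySelectorAll<HTMLElement>(UNFOLLOW_SELECTOR)
    );
    if (buttons.length === 0) break; // nothing left to unfollow on this page
    for (const button of buttons) {
      button.click();
      unfollowed += 1;
      await sleep(500); // pause so the page can react to each click
    }
    await sleep(1000); // let the page render the next batch, if any
  }
  return unfollowed;
}

unfollowEverything().then((count) =>
  console.log(`Unfollowed ${count} friends, pages, and groups`)
);
```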

Zuckerman wants to recreate the tool with some added features enabling users to opt-in to provide some data to researchers about the impact of not following anyone on social media. But he’s concerned that he’d face legal threats from Meta, given what happened with Barclay.

Using Unfollow Everything 2.0, Professor Zuckerman plans to conduct an academic research study of how turning off the newsfeed affects users’ Facebook experience. The study is opt-in—users may use the tool without participating in the study. Those who choose to participate will donate limited and anonymized data about their Facebook usage. The purpose of the study is to generate insights into the impact of the newsfeed on user behavior and well-being: for example, how does accessing Facebook without the newsfeed change users’ experience? Do users experience Facebook as less “addictive”? Do they spend less time on the platform? Do they encounter a greater variety of other users on the platform? Answering these questions will help Professor Zuckerman, his team, and the public better understand user behavior online and the influence that platform design has on that behavior

The tool and study are nearly ready to launch. But Professor Zuckerman has not launched them because of the near certainty that Meta will pursue legal action against him for doing so.

So he’s suing for declaratory judgment that he’s not violating any laws. If he were just suing for declaratory judgment over the CFAA, that would (maybe?) be somewhat understandable or conventional. But, while that argument is in the lawsuit, the main claim in the case is something very, very different. It’s using a part of Section 230, section (c)(2)(B), that almost never gets mentioned, let alone tested.

Most Section 230 lawsuits involve (c)(1): the famed “26 words” that state “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Some Section 230 cases involve (c)(2)(A) which states that “No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” Many people incorrectly think that Section 230 cases turn on this part of the law, when really, many of those cases are already cut off by (c)(1) because they try to treat a service as a speaker or publisher.

But then there’s (c)(2)(B), which says:

No provider or user of an interactive computer service shall be held liable on account of any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1)

As noted, this basically never comes up in cases. But the argument being made here is that this creates some sort of proactive immunity from lawsuits for middleware creators who are building tools (“technical means”) to “restrict access.” In short: does Section 230 protect “Unfollow Everything” from basically any legal threats from Meta, because it’s building a tool to restrict access to content on Meta platforms?

Or, according to the lawsuit:

This provision would immunize Professor Zuckerman from civil liability for designing, releasing, and operating Unfollow Everything 2.0

First, in operating Unfollow Everything 2.0, Professor Zuckerman would qualify as a “provider . . . of an interactive computer service.” The CDA defines the term “interactive computer service” to include, among other things, an “access software provider that provides or enables computer access by multiple users to a computer server,” id. § 230(f)(2), and it defines the term “access software provider” to include providers of software and tools used to “filter, screen, allow, or disallow content.” Professor Zuckerman would qualify as an “access software provider” because Unfollow Everything 2.0 enables the filtering of Facebook content—namely, posts that would otherwise appear in the feed on a user’s homepage. And he would “provide[] or enable[] computer access by multiple users to a computer server” by allowing users who download Unfollow Everything 2.0 to automatically unfollow and re-follow friends, groups, and pages; by allowing users who opt into the research study to voluntarily donate certain data for research purposes; and by offering online updates to the tool.

Second, Unfollow Everything 2.0 would enable Facebook users who download it to restrict access to material they (and Zuckerman) find “objectionable.” Id. § 230(c)(2)(A). The purpose of the tool is to allow users who find the newsfeed objectionable, or who find the specific sequencing of posts within their newsfeed objectionable, to effectively turn off the feed.
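
To make the “technical means to restrict access” framing concrete, here is a rough, hedged TypeScript sketch of user-controlled middleware that filters out material the user has flagged as objectionable, up to and including suppressing the feed entirely. The types and field names are invented for illustration; no platform API is being modeled.

```typescript
// Hypothetical sketch of (c)(2)(B)-style "technical means to restrict access"
// as user-side middleware. All types and field names are invented.
interface FeedItem {
  author: string;
  text: string;
}

interface UserFilter {
  blockedAuthors: Set<string>;
  blockedKeywords: string[];
  hideEverything: boolean; // the "Unfollow Everything" case: suppress the whole feed
}

function restrictAccess(feed: FeedItem[], filter: UserFilter): FeedItem[] {
  if (filter.hideEverything) return []; // user opts out of the feed entirely
  return feed.filter(
    (item) =>
      !filter.blockedAuthors.has(item.author) &&
      !filter.blockedKeywords.some((kw) =>
        item.text.toLowerCase().includes(kw.toLowerCase())
      )
  );
}

// A user who finds the whole feed objectionable gets an empty feed back.
const emptied = restrictAccess(
  [{ author: "some-page", text: "a post the user never asked to see" }],
  { blockedAuthors: new Set(), blockedKeywords: [], hideEverything: true }
);
console.log(emptied.length); // 0
```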

I’ve been talking to a pretty long list of lawyers about this and I’m somewhat amazed at how this seems to have taken everyone by surprise. Normally, when new lawsuits come out, I’ll gut check my take on it with a few lawyers and they’ll all agree with each other whether I’m heading in the right direction or the totally wrong direction. But here… the reactions were all over the map, and not in any discernible pattern. More than one person I spoke to started by suggesting that this was a totally crazy legal theory, only to later come back and say “well, maybe it actually makes some sense.”

It could be a trojan horse that no one noticed in Section 230 that effectively bars websites from taking legal action against middleware providers who are providing technical means for people to filter or screen content on their feed. Now, it’s important to note that it does not bar those companies from putting in place technical measures to block such tools, or just banning accounts or whatever. But that’s very different from threatening or filing civil suits.

If this theory works, it could do a lot to enable these kinds of middleware services and make it significantly harder for big social media companies like Meta to stop them. If you believe in adversarial interoperability, that could be a very big deal. Like, “shift the future of the internet we all use” kind of big.

Now, there are many hurdles before we get to that point. And there are some concerns that if this legal theory succeeds, it could also lead to other problematic results (though I’m less convinced by those).

Let’s start with the legal concerns.

First, as noted, this is a very novel and untested legal theory. Upon reading the case initially, my first reaction was that it felt like one of those slightly wacky academic law journal articles you see law professors write sometimes, with some far-out theory they have that no one’s ever really thought about. This one is in the form of a lawsuit, so at some point we’ll find out how the theory works.

But that alone might make a judge unwilling to go down this path.

Then there are some more practical concerns. Is there even standing here? ¯\_(ツ)_/¯ Zuckerman hasn’t released his tool. Meta hasn’t threatened him. He makes a credible claim that given Meta’s past actions, they’re likely to react unfavorably, but is that enough to get standing?

Then there’s the question of whether or not you can even make use of 230 in an affirmative way like this. 230 is used as a defense to get cases thrown out, not proactively for declaratory judgment.

Also, this is not my area of expertise by any stretch of the imagination, but I remember hearing in the past that outside of IP law, courts (and especially courts in the 9th Circuit) absolutely disfavor lawsuits for declaratory judgment (i.e., a lawsuit before there’s any controversy, where you ask the court “hey, can you just check and make sure I’m on the right side of the law here…”). So I could totally see the judge saying “sorry, this is not a proper use of our time” and tossing it. In fact, that might be the most likely result.

Then there’s this kinda funny but possibly consequential issue: there’s a typo in Section 230 that almost everyone has ignored for years. Because it’s never really mattered. Except it matters in this case. Jeff Kosseff, the author of the book on Section 230, always likes to highlight that in (c)(2)(B), it says that the immunity is for using “the technical means to restrict access to material described in paragraph (1).”

But they don’t mean “paragraph (1).” They mean “paragraph (A).” Paragraph (1) is the “26 words” and does not describe any material, so it would make no sense to say “material described in paragraph (1).” It almost certainly means “paragraph (A),” which is the “good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable” section. That’s the one that describes material.

I know that, at times, Jeff has joked that when people ask him how 230 should be reformed, he suggests they fix the typo. But Congress has never listened.

And now it might matter?

The lawsuit basically pretends that the typo isn’t there. It simply reads in the language of “paragraph (A)” where the law says “paragraph (1).”

I don’t know how that gets handled. Perhaps it gets ignored like every time Jeff points out the typo? Perhaps it becomes consequential? Who knows!

There are a few other oddities here, but this article is getting long enough and has mostly covered the important points. However, I will conclude on one other point that one of the people I spoke to raised. As discussed above, Meta has spent most of the past dozen or so years going legally ballistic about anyone trying to scrape or data mine its properties in any way.

Yet, earlier this year, it somewhat surprisingly bailed out on a case where it had sued Bright Data for scraping/data mining. Lawyer Kieran McCarthy (who follows data scraping lawsuits like no one else) speculated that Meta’s surprising about-face may be because it suddenly realized that, for all of its AI efforts, it’s been scraping everyone else. And maybe someone high up at Meta suddenly realized how it was going to look in court when it got sued over all that AI training scraping, if the plaintiffs pointed out that, at the very same time, it was suing others for scraping its own properties.

For me, I suspect the decision not to appeal might be more about a shift in philosophy at Meta, and perhaps at some of the other big platforms, than about its confidence in its ability to win this case. Today, perhaps more important to Meta than keeping others off its public data is having access to everyone else’s public data. Meta is concerned that its perceived hypocrisy on these issues might just work against it. Just last month, Meta had its success in prior scraping cases thrown back in its face in a trespass to chattels case. Perhaps it was worried here that success on appeal might do it more harm than good.

In short, I think Meta cares more about access to large volumes of data and AI than it does about outsiders scraping their public data now. My hunch is that they know that any success in anti-scraping cases can be thrown back at them in their own attempts to build AI training databases and LLMs. And they care more about the latter than the former.

I’ve separately spoken to a few experts who were worried about the consequences if Zuckerman succeeded here. They were worried that it might simultaneously immunize potential bad actors. Specifically, you could see a kind of Cambridge Analytica or Clearview AI situation, where companies trying to get access to data for malign purposes convince people to install their middleware app. This could lead to a massive expropriation of data, and possibly some very sketchy services as a result.

But I’m less worried about that, mainly because it’s the sketchy ways that data might eventually be used that would still (hopefully?) violate certain laws, not the access to the data itself. Still, there are real questions about how this kind of proactive immunity might end up shielding bad actors, and those are worth thinking through.

Either way, this is going to be a case worth following.

Congressional Testimony On Section 230 Was So Wrong That It Should Be Struck From The Record

Mike Masnick, April 19, 2024 at 18:26

A few months ago, we wondered if Wired had fired its entire fact-checking staff because it published what appeared to be a facts-optional article co-authored by the consistently wrong Jaron Lanier and an academic I’d not come across before, Allison Stanger. The article suggested that getting rid of Section 230 “could save everything.” Yet the article was so far off-base that it was in the “not even wrong” category of wrongness.

I’m not going to review all the reasons it was wrong. You can go back to my original article for that, though I will note that the argument seemed to suggest that getting rid of Section 230 would both lead to better content moderation and, at the same time, only moderation based on the First Amendment. Both of those points are obviously wrong, but the latter one is incoherent.

Given his long track record of wrongness, I had assumed that much of the article likely came from Lanier. However, I’m going to reassess that in light of Stanger’s recent performance before the House Energy & Commerce Committee. Last week, there was this weird hearing about Section 230, in which the Committee invited three academic critics of Section 230, and not a single person who could counter their arguments and falsehoods. We talked about this hearing a bit in this week’s podcast, with Rebecca MacKinnon from the Wikimedia Foundation.

Stanger was one of the three witnesses. The other two, Mary Anne Franks and Mary Graw Leary, presented some misleading and confused nonsense about Section 230. However, their nonsense at least fits into the normal framework of the debate around Section 230: confusion about how (c)(1) and (c)(2) interact, confusion about the purpose of the law, and (especially) confusion about CSAM, along with an apparent unawareness that federal criminal law is exempted from Section 230.

But, let’s leave that aside. Because Stanger’s submission was so far off the mark that whoever invited her should be embarrassed. I’ve seen some people testify before Congress without knowing what they’re talking about, but I cannot recall seeing testimony this completely, bafflingly wrong before. Her submitted testimony is wrong in all the ways that the Wired article was wrong and more. There are just blatant factual errors throughout it.

It is impossible to cover all of the nonsense, so we’re just going to pick some gems.

Without Section 230, existing large social media companies would have to adapt. Decentralized Autonomous Organizations, (DAOs) such as BlueSky and Mastodon, would become more attractive. The emergent DAO social media landscape should serve to put further brakes on virality, allowing a more regional social media ecosystem to emerge, thereby creating new demand for local media. In an ideal world, networks of DAOs would comprise a new fediverse (a collection of social networking servers which can communicate with each other, while remaining independently controlled), where users would have greater choice and control over the communities of which they are a part.

So, um. That’s not what DAOs are, professor. You seem to be confusing decentralized social media with decentralized autonomous organizations, which are a wholly different thing. This is kind of like saying “social security benefits” when you mean “social media influencers” because both begin with “social.” They’re not the same thing.

A decentralized social media site is what it says on the tin. It’s a type of social media that isn’t wholly controlled by a single company. Different bits of it can be controlled by others, whether by its users or by alternative third-party providers. A DAO is an operation that uses mechanisms like cryptocurrency and tokens to enable a kind of democratic voting, or (possibly) a set of smart contracts, to determine how the loosely defined organization is run. They are not the same.

In theory, a decentralized social media site could be run by a DAO, but I don’t know of any that currently are.

Also, um, decentralized social media can only really exist because of Section 230. “Without Section 230,” you wouldn’t have Bluesky or Mastodon, because they would face ruinous litigation for hosting content that people would sue over. So, no, you would not have either more decentralized social media (which I think is what you meant) or DAOs (which are wholly unrelated). You’d have a lot less, because hosting third-party speech would come with way more liability risk.

Also, there’s nothing inherent to decentralized social media that means you’d “put the brakes on virality.” Mastodon has developed to date in a manner designed to tamp down virality, but Bluesky hasn’t, nor have other decentralized social media offerings, many of which hope to serve a global conversation where virality is a part of it. And that wouldn’t really change with or without Section 230. Mastodon made that decision because of the types of communities it wanted to foster. And, indeed, its ability to do that is, in part, due to intermediary liability protections like Section 230, which enable the kind of small, more focused community moderation Mastodon already embraces.

It’s really not clear to me that Professor Stanger even knows what Section 230 does.

Non-profits like Wikipedia are concerned that their enterprise could be shut down through gratuitous defamation lawsuits that would bleed them dry until they ceased to exist (such as what happened with Gawker). I am not convinced this is a danger for Wikipedia, since their editing is done by humans who have first amendment rights, and their product is not fodder for virality….

Again, wut? The fact that their editing is “done by humans” has literally no impact on anything here. Why even mention that? Humans get sued for defamation all the time. And, if they’re more likely to get sued for defamation, they’re less likely to even want to edit at all.

And people get mad about their Wikipedia articles all the time, and sometimes they sue over them. Section 230 gets those lawsuits thrown out. Without it, those lawsuits would last longer and be more expensive.

Again, it’s not at all clear if Prof. Stanger even knows what Section 230 is or how it works.

The Facebook Files show that Meta knew that its engagement algorithms had adverse effects on the mental health of teenage girls, yet it has done nothing notable to combat those unintended consequences. Instead, Meta’s lawyers have invoked Section 230 in lawsuits to defend itself against efforts to hold it liable for serious harms

Again, this is just wrong. The crux of the Facebook Files was that Meta was, in fact, doing research to learn where its algorithms might cause harm in order to try to minimize that harm. However, because of some bad reporting, companies will now be less likely even to do that research, because people like Professor Stanger will misrepresent it, claiming they did nothing to try to limit the harms. This is just outright false information.

Also, the cases where Meta has invoked Section 230 would be unrelated to the issue being discussed here because 230 is about not being held liable for user content.

The online world brought to life by Section 230 now dehumanizes us by highlighting our own insignificance. Social media and cancel culture make us feel small and vulnerable, where human beings crave feeling large and living lives of meaning, which cannot blossom without a felt sense of personal agency that our laws and institutions are designed to protect. While book publishers today celebrate the creative contributions of their authors, for-profit Internet platforms do not.

I honestly have no idea what’s being said here. “Dehumanizes us by highlighting our own insignificance?” What are you even talking about? People were a lot more “insignificant” pre-internet, when they had no way to speak out. And what does “cancel culture” have to do with literally any of this?

Without Section 230, companies would be liable for the content on their platforms. This would result in an explosion of lawsuits and greater caution in such content moderation, although companies would have to balance such with first amendment rights. Think of all the human jobs that could be generated!

Full employment for tort lawyers! I mean, this is just a modern version of Bastiat’s broken window fallacy. Think of all the economic activity if we just break all the windows in our village!

Again and again, it becomes clear that Stanger has no clue how any of this works. She does not understand Section 230. She does not understand the internet. She does not understand the First Amendment. And she does not understand content moderation. It’s a hell of a thing, considering she is testifying about Section 230 and its impact on social media and the First Amendment.

At a stroke, content moderation for companies would be a vastly simpler proposition. They need only uphold the First Amendment, and the Courts would develop the jurisprudence to help them do that, rather than to put the onus of moderation entirely on companies.

That is… not at all how it would work. They don’t just need to “uphold the First Amendment” (which is not a thing that companies can even do). The First Amendment’s only role is in restricting the government, not companies, from passing laws that infringe on a person’s ability to express themselves.

Instead, as has been detailed repeatedly, companies would face the so-called “moderator’s dilemma.” Because the First Amendment requires distributors to have actual knowledge of content violating the law to be liable, a world without Section 230 would incentivize one of two things, neither of which is “upholding the First Amendment.” They would either let everything go and do as little moderation as possible (so as to avoid the requisite knowledge), or they’d become very aggressive in limiting and removing content to avoid liability (even though this wouldn’t work and they’d still get hit with tons of lawsuits).

We’ve been here before. When government said the American public owned the airwaves, so television broadcasting would be regulated, they put in place regulations that supported the common good. The Internet affects everyone, and our public square is now virtual, so we must put in place measures to ensure that our digital age public dialogue includes everyone. In the television era, the fairness doctrine laid that groundwork. A new lens needs to be developed for the Internet age.

Except, no, that’s just factually wrong. The only reason that the government was able to put regulations on broadcast television was because the government controlled the public spectrum which they licensed to the broadcasters. The Supreme Court made clear in Red Lion that without that, they could not hinder the speech of media companies. So, the idea that you can just apply similar regulations to the internet is just fundamentally clueless. The internet is not publicly owned spectrum licensed to anyone.

While Section 230 perpetuates an illusion that today’s social media companies are common carriers like the phone companies, they are not. Unlike Ma Bell, they curate the content they transmit to users

Again, it appears the Professor is wholly unaware of Section 230 and how it works. The authors of Section 230 made it clear over and over again that they wrote 230 to be the opposite of common carriers. No one who supports Section 230 thinks it makes platforms into common carriers, because it does not. The entire point was to free up companies to choose how to curate content, so as to allow those companies to craft the kinds of communities they wanted. The only people claiming the “illusion” of common carrierness are those who are trying to destroy Section 230.

So there is no “illusion” here, unless you don’t understand what you’re talking about.

The repeal of Section 230 would also be a step in the right direction in addressing what are presently severe power imbalances between government and corporate power in shaping democratic life. It would also shine a spotlight on a globally disturbing fact: the overwhelming majority of global social media is currently in the hands of one man (Mark Zuckerberg), while nearly half the people on earth have a Meta account. How can that be a good thing under any scenario for the free exchange of ideas?

I mean, we agree that it’s bad that Meta is so big. But if you remove Section 230 (as Meta itself has advocated for!), you help Meta get bigger and harm the competition. Meta has a building full of lawyers. They can handle the onslaught of lawsuits that this would bring (as Stanger herself gleefully cheers on). It’s everyone else, the smaller sites, such as the decentralized players (not DAOs) who would get destroyed.

Mastodon admins aren’t going to be able to afford to pay to defend the lawsuits. Bluesky doesn’t have a building full of lawyers. The big winner here would be Meta. The cost to Meta of removing Section 230 is minimal. The cost to everyone trying to eat away at Meta’s market share would be massive.

The new speech is governed by the allocation of virality in our virtual public square. People cannot simply speak for themselves, for there is always a mysterious algorithm in the room that has independently set the volume of the speaker’s voice. If one is to be heard, one must speak in part to one’s human audience, in part to the algorithm. It is as if the constitution had required citizens to speak through actors or lawyers who answered to the Dutch East India Company, or some other large remote entity. What power should these intermediaries have? When the very logic of speech must shift in order for people to be heard, is that still free speech? This was not a problem foreseen in the law.

I mean, this is just ahistorical nonsense. Historically, most people had no way to get their message out at all. You could talk to your friends, family, co-workers, and neighbors, and that was about it. If you wanted to reach beyond that small group, you required some large gatekeeper (a publisher, a TV or radio producer, a newspaper) to grant you access, which they refused for the vast majority of people.

The internet flipped all that on its head, allowing anyone to effectively speak to anyone. The reason we have algorithms is not “Section 230,” and the algorithms aren’t “setting the volume.” They came about to deal with the simple fact that there’s just too much information, and it was flooding the zone. People wanted to find information that was more relevant to them, and with the amount of content available online, the only way to manage that was with some sort of algorithm.
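
For what it’s worth, the “algorithm” in question is usually nothing more exotic than a ranking function applied to more posts than anyone could read. The sketch below, with entirely made-up fields and scoring weights (not any platform’s real formula), shows the basic shape of that idea.

```typescript
// Minimal sketch of why ranking algorithms exist: with far more posts than
// anyone can read, something has to pick an order. Weights are placeholders.
interface Post {
  id: string;
  ageHours: number;
  likesFromFriends: number;
  authorFollowedByUser: boolean;
}

function relevanceScore(post: Post): number {
  const recency = 1 / (1 + post.ageHours);           // newer posts score higher
  const social = Math.log1p(post.likesFromFriends);  // friends' engagement, damped
  const follows = post.authorFollowedByUser ? 1 : 0; // prefer accounts the user follows
  return 2 * follows + social + recency;
}

function rankFeed(posts: Post[], limit: number): Post[] {
  return [...posts]
    .sort((a, b) => relevanceScore(b) - relevanceScore(a))
    .slice(0, limit);
}

// With thousands of candidate posts, only the top handful ever get shown.
const shown = rankFeed(
  [
    { id: "p1", ageHours: 2, likesFromFriends: 5, authorFollowedByUser: true },
    { id: "p2", ageHours: 30, likesFromFriends: 0, authorFollowedByUser: false },
  ],
  1
);
console.log(shown.map((p) => p.id)); // ["p1"]
```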

But, again, the rise of algorithms is not a Section 230 issue, even though Stanger seems to think it is.

Getting rid of the liability shield for all countries operating in the United States would have largely unacknowledged positive implications for national security, as well as the profit margins for US-headquartered companies. Foreign electoral interference is not in the interests of democratic stability, precisely because our enemies benefit from dividing us rather than uniting us. All foreign in origin content could therefore be policed at a higher standard, without violating the first amendment or the privacy rights of US citizens. As the National Security Agency likes to emphasize, the fourth amendment does not apply to foreigners and that has been a driver of surveillance protocols since the birth of the Internet. It is probable that the Supreme Court’s developing first amendment jurisprudence for social media in a post-230 world would embrace the same distinction. At a stroke, the digital fentanyl that TikTok represents in its American version could easily be shut down, and we could through a process of public deliberation leading to new statutory law collectively insist on the same optimization targets for well-being, test scores, and time on the platform that Chinese citizens currently enjoy in the Chinese version of TikTok (Douyin)

Again, this is a word salad that is mostly meaningless.

First of all, none of this has anything to do with Section 230, but rather the First Amendment. And it’s already been noted, clearly, that the First Amendment protects American users of foreign apps.

No one is saying “you can’t ban TikTok because of 230,” they’re saying “you can’t ban TikTok because of the First Amendment.” The Supreme Court isn’t going to magically reinvent long-standing First Amendment doctrine because 230 is repealed. This is nonsense.

And, we were just discussing what utter nonsense it is to claim that TikTok is “digital fentanyl” so I won’t even bother repeating that.

There might also be financial and innovation advantages for American companies with this simple legislative act. Any commercial losses for American companies from additional content moderation burdens would be offset by reputational gains and a rule imposed from without on what constitutes constitutionally acceptable content. Foreign electoral interference through misinformation and manipulation could be shut down as subversive activity directed at the Constitution of the United States, not a particular political party.

This part is particularly frustrating. This is why internet companies already moderate. Stanger’s piece repeatedly seems to complain both about too little moderation (electoral interference! Alex Jones!) and too much moderation (algorithms! dastardly Zuck deciding what I can read!).

She doesn’t even seem to realize that her argument is self-contradictory.

But, here, the supposed “financial and innovation advantages” from American companies being able to get “reputational gains” by stopping “misinformation” already exist. And they only exist because of Section 230, the very law Professor Stanger says we need to remove in order to get the thing it already enables; repeal it, and those advantages go away.

This whole thing makes me want to bang my head on my desk repeatedly.

Companies moderate today to (1) make users’ experience better and (2) to make advertisers happier that they’re not facing brand risk from having ads appear next to awful content. The companies that do better already achieve that “reputational benefit,” and they can do that kind of moderation because they know Section 230 prevents costly, wasteful, vexatious litigation from getting too far.

If you remove Section 230, that goes away. As discussed above, companies then are much more limited in the kinds of moderation they can do, which means users have a worse experience and advertisers have a worse experience, leading to reputational harm.

Today, companies already try to remove or diminish the power of electoral interference. That’s a giant part of trust & safety teams’ efforts. But they can really only do it safely because of 230.

The attention-grooming model fostered by Section 230 leads to stupendous quantities of poor-quality data. While an AI model can tolerate a significant amount of poor-quality data, there is a limit. It is unrealistic to imagine a society mediated by mostly terrible communication where that same society enjoys unmolested, high-quality AI. A society must seek quality as a whole, as a shared cultural value, in order to maximize the benefits of AI. Now is the best time for the tech business to mature and develop business models based on quality.

I’ve read this paragraph multiple times, and I still don’t know what it’s saying. Section 230 does not lead to an “attention-grooming model.” That’s just how society works. And, then, when she says society must seek quality as a whole, given how many people are online, the only way to do that is with algorithms trying to make some sort of call on what is, and what is not, quality.

That’s how this works.

Does she imagine that without Section 230, algorithms will go away, but good quality content will magically rise up? Because that’s not how any of this actually works.

Again, there’s much more in her written testimony, and none of it makes any sense at all.

Her spoken testimony was just as bad. Rep. Bob Latta asked her about the national security claims (some of which were quoted above) and we got this word salad, none of which has anything to do with Section 230:

I think it’s important to realize that our internet is precisely unique because it’s so open and that makes it uniquely vulnerable to all sorts of cyber attacks. Just this week, we saw an extraordinarily complicated plot that is most likely done by China, Russia or North Korea that could have blown up the internet as we know it. If you want to look up XZ Utils, Google that and you’ll find all kinds of details. They’re still sorting out what the intention was. It’s extraordinarily sophisticated though, so I think that the idea that we have a Chinese company where data on American children is being stored and potentially utilized in China, can be used to influence our children. It can be used in any number of ways no matter what they tell you. So I very much support and applaud the legislation to repeal, not to repeal, but to end TikToks operations in the United States.

The national security implications are extraordinary. Where the data is stored is so important and how it can be used to manipulate and influence us is so important. And I think the next frontier that I’ll conclude with this, for warfare, is in cyberspace. It’s where weak countries have huge advantages. They can pour resources into hackers who could really blow up our infrastructure, our hospitals, our universities. They’re even trying to get, as you know, into the House. This House right here. So I think repealing Section 230 is connected to addressing a host of potential harms

Nothing mentioned in there — from supply chain attacks like xz utils, to a potential TikTok ban, to hackers breaking into hospitals — has anything whatsoever to do with Section 230. She just throws it in at the end as if they’re connected.

She also claimed that Eric Schmidt has come out in favor of “repealing Section 230,” which was news to me. It also appears to be absolutely false. I went and looked, and the only thing I can find is a Digiday article which claims he called for reforms (not a repeal). The article never actually quotes him saying anything related to Section 230 at all, so it’s unclear what (if anything) he actually said. Literally the only quotes from Schmidt are old man stuff about how the kids these days just need to learn how to put down their phones, and then something weird about the fairness doctrine. Not 230.

Later, in the hearing, she was asked about the impact on smaller companies (some of which I mentioned above) and again demonstrates a near total ignorance of how this all works:

There is some concern, it’s sometimes expressed from small businesses that they are going to be the subject of frivolous lawsuits, defamation lawsuits, and they can be sued out of business even though they’ve defamed no one. I’m less concerned about that because if we were to repeal section (c)(1) of Section 230 of those 26 words, I think the First Amendment would govern and we would develop the jurisprudence to deal with small business in a more refined way. I think if anything, small businesses are in a better position to control and oversee what’s on their platforms than these monolithic large companies we have today. So with a bit of caution, I think that could be addressed.

The First Amendment always governs. But Section 230 is the “more refined way” that we’ve developed to help protect small businesses. The main function of Section 230 is to get cases, that would be long and costly if you had to defend them under the First Amendment, tossed out much earlier at the motion to dismiss stage. Literally that’s Section 230’s main purpose.

If you had to fight it out under the First Amendment, you’re talking about hundreds of thousands of dollars and a much longer case. And that cost is going to lead companies to (1) refuse to host lots of protected content, because it’s not worth the hassle, and (2) be much more open to pulling down any content that anyone complains about.

This is not speculative. There have been studies on this. Weaker intermediary laws always lead to massive overblocking. If Stanger had done her research, or even understood any of this, she would know this.

So why is she the one testifying before Congress?

I’ll just conclude with this banger, which was her final statement to Congress:

I just want to maybe take you back to the first part of your question to explain that, which I thought was a good one, which is that we have a long history of First Amendment jurisprudence in this country that in effect has been stopped by Section 230. In other words, if you review, if you remove (c)(1), that First Amendment jurisprudence will develop to determine when it is crime fire in a crowded theater, whether there’s defamation, whether there’s libel. We believe in free speech in this country, but even the First Amendment has some limits put on it and those could apply to the platforms. We have a strange situation right now if we take that issue of fentanyl that we were discussing earlier, what we have right now is essentially a system where we can go after the users, we can go after the dealers, but we can’t go after the mules. And I think that’s very problematic. We should hold the mules liable. They’re part of the system.

Yeah. So. She actually went to the whole fire in a crowded theater thing. This is the dead-on giveaway that the person speaking has no clue about the First Amendment. That’s dicta from a case over 100 years old that is no longer considered good law, and hasn’t been in decades. Even worse, that dicta came in a case about jailing war protestors.

She also trots out yet another of Ken “Popehat” White’s (an actual First Amendment expert) most annoying tropes about people opining on the First Amendment without understanding it: because the First Amendment has some limits, this new limit must be okay. That’s not how it works. As Ken and others have pointed out, the exceptions to the First Amendment are an established, known, and almost certainly closed set.

The Supreme Court has no interest in expanding that set. It refused to do so for animal crush videos, so it’s not going to magically do it for whatever awful speech you think it should limit.

Anyway, it was a shame that Congress chose to hold a hearing on Section 230 and only bring in witnesses who hate Section 230. Not a single witness who could explain why Section 230 is so important was brought in. But, even worse, they gave one of the three witness spots to someone spewing word-salad-level nonsense that didn’t make any sense at all, was often factually incorrect (in hilariously embarrassing ways), and seemed wholly unaware of how any relevant thing worked.

Do better, Congress.

  • ✇Techdirt
  • Congressional Testimony On Section 230 Was So Wrong That It Should Be Struck From The RecordMike Masnick
    A few months ago, we wondered if Wired had fired its entire fact-checking staff because it published what appeared to be a facts-optional article co-authored by professional consistently wrong Jaron Lanier and an academic I’d not come across before, Allison Stanger. The article suggested that getting rid of Section 230 “could save everything.” Yet the article was so far off-base that it was in the “not even wrong” category of wrongness. I’m not going to review all the reasons it was wrong. You c
     

Congressional Testimony On Section 230 Was So Wrong That It Should Be Struck From The Record

19. Duben 2024 v 18:26

A few months ago, we wondered if Wired had fired its entire fact-checking staff because it published what appeared to be a facts-optional article co-authored by professional consistently wrong Jaron Lanier and an academic I’d not come across before, Allison Stanger. The article suggested that getting rid of Section 230 “could save everything.” Yet the article was so far off-base that it was in the “not even wrong” category of wrongness.

I’m not going to review all the reasons it was wrong. You can go back to my original article for that, though I will note that the argument seemed to suggest that getting rid of Section 230 would both lead to better content moderation and, at the same time, only moderation based on the First Amendment. Both of those points are obviously wrong, but the latter one is incoherent.

Given his long track record of wrongness, I had assumed that much of the article likely came from Lanier. However, I’m going to reassess that in light of Stanger’s recent performance before the House Energy & Commerce Committee. Last week, there was this weird hearing about Section 230, in which the Committee invited three academic critics of Section 230, and not a single person who could counter their arguments and falsehoods. We talked about this hearing a bit in this week’s podcast, with Rebecca MacKinnon from the Wikimedia Foundation.

Stanger was one of the three witnesses. The other two, Mary Anne Franks and Mary Graw Leary, presented some misleading and confused nonsense about Section 230. However, the misleading and confused nonsense about Section 230 at least fits into the normal framework of the debate around Section 230. There is confusion about how (c)(1) and (c)(2) interact, the purpose of Section 230, and (especially) some confusion about CSAM and Section 230 and an apparent unawareness that federal criminal behavior is exempted from Section 230.

But, let’s leave that aside. Because Stanger’s submission was so far off the mark that whoever invited her should be embarrassed. I’ve seen some people testify before Congress without knowing what they’re talking about, but I cannot recall seeing testimony this completely, bafflingly wrong before. Her submitted testimony is wrong in all the ways that the Wired article was wrong and more. There are just blatant factual errors throughout it.

It is impossible to cover all of the nonsense, so we’re just going to pick some gems.

Without Section 230, existing large social media companies would have to adapt. Decentralized Autonomous Organizations, (DAOs) such as BlueSky and Mastodon, would become more attractive. The emergent DAO social media landscape should serve to put further brakes on virality, allowing a more regional social media ecosystem to emerge, thereby creating new demand for local media. In an ideal world, networks of DAOs would comprise a new fediverse (a collection of social networking servers which can communicate with each other, while remaining independently controlled), where users would have greater choice and control over the communities of which they are a part.

So, um. That’s not what DAOs are, professor. You seem to be confusing decentralized social media with decentralized autonomous organizations, which are a wholly different thing. This is kind of like saying “social security benefits” when you mean “social media influencers” because both begin with “social.” They’re not the same thing.

A decentralized social media site is what it says on the tin. It’s a type of social media that isn’t wholly controlled by a single company. Different bits of it can be controlled by others, whether its users or alternative third-party providers. A DAO is an operation, often using mechanisms like cryptocurrency and tokens, to enable a kind of democratic voting, or (possibly) a set of smart contracts, that determine how the loosely defined organization is run. They are not the same.

In theory, a decentralized social media site could be run by a DAO, but I don’t know of any that currently are.

Also, um, decentralized social media can only really exist because of Section 230. “Without Section 230,” you wouldn’t have Bluesky or Mastodon, because they would face ruinous litigation for hosting content that people would sue over. So, no, you would not have either more decentralized social media (which I think is what you meant) or DAOs (which are wholly unrelated). You’d have a lot less, because hosting third-party speech would come with way more liability risk.

Also, there’s nothing inherent to decentralized social media that means you’d “put the brakes on virality.” Mastodon has developed to date in a manner designed to tamp down virality, but Bluesky hasn’t? Nor have other decentralized social media offerings, many of which hope to serve a global conversation where virality is a part of it. And that wouldn’t really change with or without Section 230. Mastodon made that decision because of the types of communities it wanted to foster. And, indeed, its ability to do that is, in part, due to intermediary liability protections like Section 230, that enable the kind of small, more focused community moderation Mastodon embraces already.

It’s really not clear to me that Professor Stanger even knows what Section 230 does.

Non-profits like Wikipedia are concerned that their enterprise could be shut down through gratuitous defamation lawsuits that would bleed them dry until they ceased to exist (such as what happened with Gawker). I am not convinced this is a danger for Wikipedia, since their editing is done by humans who have first amendment rights, and their product is not fodder for virality….

Again, wut? The fact that their editing is “done by humans” has literally no impact on anything here. Why even mention that? Humans get sued for defamation all the time. And, if they’re more likely to get sued for defamation, they’re less likely to even want to edit at all.

And people get mad about their Wikipedia articles all the time, and sometimes they sue over them. Section 230 gets those lawsuits thrown out. Without it, those lawsuits would last longer and be more expensive.

Again, it’s not at all clear if Prof. Stanger even knows what Section 230 is or how it works.

The Facebook Files show that Meta knew that its engagement algorithms had adverse effects on the mental health of teenage girls, yet it has done nothing notable to combat those unintended consequences. Instead, Meta’s lawyers have invoked Section 230 in lawsuits to defend itself against efforts to hold it liable for serious harms

Again, this is just wrong. What the crux of the Facebook Files showed was that Meta was, in fact, doing research to learn about where its algorithms might cause harm in order to try to minimize that harm. However, because of some bad reporting, it now means that companies will be less likely to even do that research, because people like Professor Stanger will misrepresent it, claiming that they did nothing to try to limit the harms. This is just outright false information.

Also, the cases where Meta has invoked Section 230 would be unrelated to the issue being discussed here because 230 is about not being held liable for user content.

The online world brought to life by Section 230 now dehumanizes us by highlighting our own insignificance. Social media and cancel culture make us feel small and vulnerable, where human beings crave feeling large and living lives of meaning, which cannot blossom without a felt sense of personal agency that our laws and institutions are designed to protect. While book publishers today celebrate the creative contributions of their authors, for-profit Internet platforms do not.

I honestly have no idea what’s being said here. “Dehumanizes us by highlighting our own insignificance?” What are you even talking about? People were a lot more “insignificant” pre-internet, when they had no way to speak out. And what does “cancel culture” have to do with literally any of this?

Without Section 230, companies would be liable for the content on their platforms. This would result in an explosion of lawsuits and greater caution in such content moderation, although companies would have to balance such with first amendment rights. Think of all the human jobs that could be generated!

Full employment for tort lawyers! I mean, this is just a modern version of Bastiat’s broken window fallacy. Think of all the economic activity if we just break all the windows in our village!

Again and again, it becomes clear that Stanger has no clue how any of this works. She does not understand Section 230. She does not understand the internet. She does not understand the First Amendment. And she does not understand content moderation. It’s a hell of a thing, considering she is testifying about Section 230 and its impact on social media and the First Amendment.

At a stroke, content moderation for companies would be a vastly simpler proposition. They need only uphold the First Amendment, and the Courts would develop the jurisprudence to help them do that, rather than to put the onus of moderation entirely on companies.

That is… not at all how it would work. They don’t just need to “uphold the First Amendment” (which is not a thing that companies can even do). The First Amendment’s only role is to restrict the government, not companies: it bars the government from passing laws that infringe on a person’s ability to express themselves.

Instead, as has been detailed repeatedly, companies would face the so-called “moderator’s dilemma.” Because the First Amendment requires distributors to have actual knowledge of content violating the law to be liable, a world without Section 230 would incentivize one of two things, neither of which is “upholding the First Amendment.” They would either let everything go and do as little moderation as possible (so as to avoid the requisite knowledge), or they’d become very aggressive in limiting and removing content to avoid liability (even though this wouldn’t work and they’d still get hit with tons of lawsuits).

We’ve been here before. When government said the American public owned the airwaves, so television broadcasting would be regulated, they put in place regulations that supported the common good. The Internet affects everyone, and our public square is now virtual, so we must put in place measures to ensure that our digital age public dialogue includes everyone. In the television era, the fairness doctrine laid that groundwork. A new lens needs to be developed for the Internet age.

Except, no, that’s just factually wrong. The only reason that the government was able to put regulations on broadcast television was because the government controlled the public spectrum which they licensed to the broadcasters. The Supreme Court made clear in Red Lion that without that, they could not hinder the speech of media companies. So, the idea that you can just apply similar regulations to the internet is just fundamentally clueless. The internet is not publicly owned spectrum licensed to anyone.

While Section 230 perpetuates an illusion that today’s social media companies are common carriers like the phone companies, they are not. Unlike Ma Bell, they curate the content they transmit to users

Again, it appears the Professor is wholly unaware of Section 230 and how it works. The authors of Section 230 made it clear over and over again that they wrote 230 to be the opposite of common carriage. No one who supports Section 230 thinks it makes platforms into common carriers, because it does not. The entire point was to free up companies to choose how to curate content, so as to allow those companies to craft the kinds of communities they wanted. The only people claiming the “illusion” of common carrierness are those who are trying to destroy Section 230.

So there is no “illusion” here, unless you don’t understand what you’re talking about.

The repeal of Section 230 would also be a step in the right direction in addressing what are presently severe power imbalances between government and corporate power in shaping democratic life. It would also shine a spotlight on a globally disturbing fact: the overwhelming majority of global social media is currently in the hands of one man (Mark Zuckerberg), while nearly half the people on earth have a Meta account. How can that be a good thing under any scenario for the free exchange of ideas?

I mean, we agree that it’s bad that Meta is so big. But if you remove Section 230 (as Meta itself has advocated for!), you help Meta get bigger and harm the competition. Meta has a building full of lawyers. They can handle the onslaught of lawsuits that this would bring (as Stanger herself gleefully cheers on). It’s everyone else, the smaller sites, such as the decentralized players (not DAOs) who would get destroyed.

Mastodon admins aren’t going to be able to afford to pay to defend the lawsuits. Bluesky doesn’t have a building full of lawyers. The big winner here would be Meta. The cost to Meta of removing Section 230 is minimal. The cost to everyone trying to eat away at Meta’s marketshare would be massive.

The new speech is governed by the allocation of virality in our virtual public square. People cannot simply speak for themselves, for there is always a mysterious algorithm in the room that has independently set the volume of the speaker’s voice. If one is to be heard, one must speak in part to one’s human audience, in part to the algorithm. It is as if the constitution had required citizens to speak through actors or lawyers who answered to the Dutch East India Company, or some other large remote entity. What power should these intermediaries have? When the very logic of speech must shift in order for people to be heard, is that still free speech? This was not a problem foreseen in the law.

I mean, this is just ahistorical nonsense. Historically, most people had no way to get their message out at all. You could talk to your friends, family, co-workers, and neighbors, and that was about it. If you wanted to reach beyond that small group, you required some large gatekeeper (a publisher, a TV or radio producer, a newspaper) to grant you access, which they refused for the vast majority of people.

The internet flipped all that on its head, allowing anyone to effectively speak to anyone. The reason we have algorithms is not “Section 230,” and the algorithms aren’t “setting the volume.” They came in to deal with the simple fact that there’s just too much information, and it was flooding the zone. People wanted to find information that was more relevant to them, and with the amount of content available online, the only way to manage that was with some sort of algorithm.

But, again, the rise of algorithms is not a Section 230 issue, even though Stanger seems to think it is.

Getting rid of the liability shield for all countries operating in the United States would have largely unacknowledged positive implications for national security, as well as the profit margins for US-headquartered companies. Foreign electoral interference is not in the interests of democratic stability, precisely because our enemies benefit from dividing us rather than uniting us. All foreign in origin content could therefore be policed at a higher standard, without violating the first amendment or the privacy rights of US citizens. As the National Security Agency likes to emphasize, the fourth amendment does not apply to foreigners and that has been a driver of surveillance protocols since the birth of the Internet. It is probable that the Supreme Court’s developing first amendment jurisprudence for social media in a post-230 world would embrace the same distinction. At a stroke, the digital fentanyl that TikTok represents in its American version could easily be shut down, and we could through a process of public deliberation leading to new statutory law collectively insist on the same optimization targets for well-being, test scores, and time on the platform that Chinese citizens currently enjoy in the Chinese version of TikTok (Douyin)

Again, this is a word salad that is mostly meaningless.

First of all, none of this has anything to do with Section 230, but rather the First Amendment. And it’s already been noted, clearly, that the First Amendment protects American users of foreign apps.

No one is saying “you can’t ban TikTok because of 230,” they’re saying “you can’t ban TikTok because of the First Amendment.” The Supreme Court isn’t going to magically reinvent long-standing First Amendment doctrine because 230 is repealed. This is nonsense.

And, we were just discussing what utter nonsense it is to claim that TikTok is “digital fentanyl” so I won’t even bother repeating that.

There might also be financial and innovation advantages for American companies with this simple legislative act. Any commercial losses for American companies from additional content moderation burdens would be offset by reputational gains and a rule imposed from without on what constitutes constitutionally acceptable content. Foreign electoral interference through misinformation and manipulation could be shut down as subversive activity directed at the Constitution of the United States, not a particular political party.

This part is particularly frustrating. This is why internet companies already moderate. Stanger’s piece repeatedly seems to complain both about too little moderation (electoral interference! Alex Jones!) and too much moderation (algorithms! dastardly Zuck deciding what I can read!).

She doesn’t even seem to realize that her argument is self-contradictory.

But, here, the supposed “financial and innovation advantages” from American companies being able to get “reputational gains” by stopping “misinformation” already exist. And they only exist because of Section 230. Which Professor Stanger says we need to remove in order to get the very thing it already enables, and which would disappear if it were repealed.

This whole thing makes me want to bang my head on my desk repeatedly.

Companies moderate today to (1) make users’ experience better and (2) to make advertisers happier that they’re not facing brand risk from having ads appear next to awful content. The companies that do better already achieve that “reputational benefit,” and they can do that kind of moderation because they know Section 230 prevents costly, wasteful, vexatious litigation from getting too far.

If you remove Section 230, that goes away. As discussed above, companies then are much more limited in the kinds of moderation they can do, which means users have a worse experience and advertisers have a worse experience, leading to reputational harm.

Today, companies already try to remove or diminish the power of electoral interference. That’s a giant part of trust & safety teams’ efforts. But they can really only do it safely because of 230.

The attention-grooming model fostered by Section 230 leads to stupendous quantities of poor-quality data. While an AI model can tolerate a significant amount of poor-quality data, there is a limit. It is unrealistic to imagine a society mediated by mostly terrible communication where that same society enjoys unmolested, high-quality AI. A society must seek quality as a whole, as a shared cultural value, in order to maximize the benefits of AI. Now is the best time for the tech business to mature and develop business models based on quality.

I’ve read this paragraph multiple times, and I still don’t know what it’s saying. Section 230 does not lead to an “attention-grooming model.” That’s just how society works. And, then, when she says society must seek quality as a whole, given how many people are online, the only way to do that is with algorithms trying to make some sort of call on what is, and what is not, quality.

That’s how this works.

Does she imagine that without Section 230, algorithms will go away, but good quality content will magically rise up? Because that’s not how any of this actually works.

Again, there’s much more in her written testimony, and none of it makes any sense at all.

Her spoken testimony was just as bad. Rep. Bob Latta asked her about the national security claims (some of which were quoted above) and we got this word salad, none of which has anything to do with Section 230:

I think it’s important to realize that our internet is precisely unique because it’s so open and that makes it uniquely vulnerable to all sorts of cyber attacks. Just this week, we saw an extraordinarily complicated plot that is most likely done by China, Russia or North Korea that could have blown up the internet as we know it. If you want to look up XZ Utils, Google that and you’ll find all kinds of details. They’re still sorting out what the intention was. It’s extraordinarily sophisticated though, so I think that the idea that we have a Chinese company where data on American children is being stored and potentially utilized in China, can be used to influence our children. It can be used in any number of ways no matter what they tell you. So I very much support and applaud the legislation to repeal, not to repeal, but to end TikToks operations in the United States.

The national security implications are extraordinary. Where the data is stored is so important and how it can be used to manipulate and influence us is so important. And I think the next frontier that I’ll conclude with this, for warfare, is in cyberspace. It’s where weak countries have huge advantages. They can pour resources into hackers who could really blow up our infrastructure, our hospitals, our universities. They’re even trying to get, as you know, into the House. This House right here. So I think repealing Section 230 is connected to addressing a host of potential harms

Nothing mentioned in there — from supply chain attacks like xz utils, to a potential TikTok ban, to hackers breaking into hospitals — has anything whatsoever to do with Section 230. She just throws it in at the end as if they’re connected.

She also claimed that Eric Schmidt has come out in favor of “repealing Section 230,” which was news to me. It also appears to be absolutely false. I went and looked, and the only thing I can find is a Digiday article which claims he called for reforms (not a repeal). The article never actually quotes him saying anything related to Section 230 at all, so it’s unclear what (if anything) he actually said. Literally the only quotes from Schmidt are old man stuff about how the kids these days just need to learn how to put down their phones, and then something weird about the fairness doctrine. Not 230.

Later, in the hearing, she was asked about the impact on smaller companies (some of which I mentioned above) and again demonstrates a near total ignorance of how this all works:

There is some concern, it’s sometimes expressed from small businesses that they are going to be the subject of frivolous lawsuits, defamation lawsuits, and they can be sued out of business even though they’ve defamed no one. I’m less concerned about that because if we were to repeal section (c)(1) of Section 230 of those 26 words, I think the First Amendment would govern and we would develop the jurisprudence to deal with small business in a more refined way. I think if anything, small businesses are in a better position to control and oversee what’s on their platforms than these monolithic large companies we have today. So with a bit of caution, I think that could be addressed.

The First Amendment always governs. But Section 230 is the “more refined way” that we’ve developed to help protect small businesses. The main function of Section 230 is to get cases, that would be long and costly if you had to defend them under the First Amendment, tossed out much earlier at the motion to dismiss stage. Literally that’s Section 230’s main purpose.

If you had to fight it out under the First Amendment, you’re talking about hundreds of thousands of dollars and a much longer case. And that cost is going to lead companies to (1) refuse to host lots of protected content, because it’s not worth the hassle, and (2) be much more open to pulling down any content that anyone complains about.

This is not speculative. There have been studies on this. Weaker intermediary laws always lead to massive overblocking. If Stanger had done her research, or even understood any of this, she would know this.

So why is she the one testifying before Congress?

I’ll just conclude with this banger, which was her final statement to Congress:

I just want to maybe take you back to the first part of your question to explain that, which I thought was a good one, which is that we have a long history of First Amendment jurisprudence in this country that in effect has been stopped by Section 230. In other words, if you review, if you remove (c)(1), that First Amendment jurisprudence will develop to determine when it is crime fire in a crowded theater, whether there’s defamation, whether there’s libel. We believe in free speech in this country, but even the First Amendment has some limits put on it and those could apply to the platforms. We have a strange situation right now if we take that issue of fentanyl that we were discussing earlier, what we have right now is essentially a system where we can go after the users, we can go after the dealers, but we can’t go after the mules. And I think that’s very problematic. We should hold the mules liable. They’re part of the system.

Yeah. So. She actually went to the whole fire in a crowded theater thing. This is the dead-on giveaway that the person speaking has no clue about the First Amendment. That’s dicta from a case over 100 years old, one that is no longer considered good law, and hasn’t been in decades. Even worse, that dicta came in a case about jailing war protestors.

She also trots out yet another of the tropes that Ken “Popehat” White (an actual First Amendment expert) finds most annoying among people who opine on the First Amendment without understanding it: because the First Amendment has some limits, this new limit must be okay. That’s not how it works. As Ken and others have pointed out, the exceptions to the First Amendment are an established, known, and almost certainly closed set.

The Supreme Court has no interest in expanding that set. It refused to do so for animal crush videos, so it’s not going to magically do it for whatever awful speech you think it should limit.

Anyway, it was a shame that Congress chose to hold a hearing on Section 230 and only bring in witnesses who hate Section 230. Not a single witness who could explain why Section 230 is so important was brought in. But, even worse, they gave one of the three witness spots to someone spewing word-salad-level nonsense that didn’t make any sense at all, was often factually incorrect (in hilariously embarrassing ways), and seemed wholly unaware of how any of the relevant things work.

Do better, Congress.

We Can’t Have Serious Discussions About Section 230 If People Keep Misrepresenting It

March 1, 2024 at 18:33

At the Supreme Court’s oral arguments about Florida and Texas’ social media content moderation laws, there was a fair bit of talk about Section 230. As we noted at the time, a few of the Justices (namely Clarence Thomas and Neil Gorsuch) seemed confused about Section 230 and also about what role (if any) it had regarding these laws.

The reality is that the only role for 230 is in preempting those laws. Section 230 has a preemption clause that basically says no state laws can go into effect that contradict Section 230 (in other words: no state laws that dictate how moderation must work). But that wasn’t what the discussion was about. The discussion was mostly about Thomas and Gorsuch’s confusion over 230 and thinking that the argument for Section 230 (that you’re not held liable for third party speech) contradicts the arguments laid out by NetChoice/CCIA in these cases, where they talked about the platforms’ own speech.

Gorsuch and Thomas were mixing up two separate things, as both the lawyers for the platforms and the US made clear. There are multiple kinds of speech at issue here. Section 230 does not hold platforms liable for third-party speech. But the issue with these laws was whether or not it constricted the platforms’ ability to express themselves in the way in which they moderated. That is, the editorial decisions that were being made expressing “this is what type of community we enable” are a form of public expression that the Florida & Texas laws seek to stifle.

That is separate from who is liable for individual speech.

But, as is the way of the world whenever it comes to discussions on Section 230, lots of people are going to get confused.

Today that person is Steven Brill, one of the founders of NewsGuard, a site that seeks to “rate” news organizations, including for their willingness to push misinformation. Brill publishes stories for NewsGuard on a Substack (!?!?) newsletter titled “Reality Check.” Unfortunately, Brill’s piece is chock full of misinformation regarding Section 230. Let’s do some correcting:

February marks the 28th anniversary of the passage of Section 230 of the Telecommunications Act of 1996. Today, Section 230 is notorious for giving social media platforms exemptions from all liability for pretty much anything their platforms post online. But in February of 1996, this three-paragraph section of a massive telecommunications bill aimed at modernizing regulations related to the nascent cable television and cellular phone industries was an afterthought. Not a word was written about it in mainstream news reports covering the passage of the overall bill.

The article originally claimed it was the 48th anniversary, though it was later corrected (without a correction notice — which is something NewsGuard checks on when rating the trustworthiness of publications). That’s not that big a deal, and I don’t think there’s anything wrong with “stealth” corrections for typos and minor errors like that.

But this sentence is just flat out wrong: “Section 230 is notorious for giving social media platforms exemptions from all liability for pretty much anything their platforms post online.” It’s just not true. Section 230 gives limited exemptions from some forms of liability for third party content that they had no role in creating. That’s quite different than what Brill claims. His formulation suggests they’re not liable for anything they, themselves, put online. That’s false.

Section 230 is all about putting the liability on whichever party created the violation under the law. If a website is just hosting the content, but someone else created the content, the liability should go to the creator of the content, not the host.

Courts have had no problem finding liability on social media platforms for things they themselves post online. We have a string of such cases, covering Roommates, Amazon, HomeAway, InternetBrands, Snap and more. In every one of those cases (contrary to Brill’s claims), the courts have found that Section 230 does not protect things these platforms post online.

Brill gets a lot more wrong. He discusses the Prodigy and CompuServe cases and then says this (though he gives too much credit to CompuServe’s lack of moderation being the reason why the court ruled that way):

That’s why those who introduced Section 230 called it the “Protection for Good Samaritans” Act. However, nothing in Section 230 required screening for harmful content, only that those who did screen and, importantly, those who did not screen would be equally immune. And, as we now know, when social media replaced these dial-up services and opened its platforms to billions of people who did not have to pay to post anything, their executives and engineers became anything but good Samaritans. Instead of using the protection of Section 230 to exercise editorial discretion, they used it to be immune from liability when their algorithms deliberately steered people to inflammatory conspiracy theories, misinformation, state-sponsored disinformation, and other harmful content. As then-Federal Communications Commission Chairman Reed Hundt told me 25 years later, “We saw the internet as a way to break up the dominance of the big networks, newspapers, and magazines who we thought had the capacity to manipulate public opinion. We never dreamed that Section 230 would be a protection mechanism for a new group of manipulators — the social media companies with their algorithms. Those companies didn’t exist then.”

This is both wrong and misleading. First of all, nothing in Section 230 could “require” screening for harmful content, because both the First and Fourth Amendments would forbid that. So the complaint that it did not require such screening is not just misplaced, it’s silly.

We’ve gone over this multiple times. Pre-230, the understanding was that, under the First Amendment, liability of a distributor was dependent on whether or not the distributor had clear knowledge of the violative nature of the content. As the court in Smith v. California made clear, it would make no sense to hold someone liable without knowledge:

For if the bookseller is criminally liable without knowledge of the contents, and the ordinance fulfills its purpose, he will tend to restrict the books he sells to those he has inspected; and thus the State will have imposed a restriction upon the distribution of constitutionally protected as well as obscene literature.

That’s the First Amendment problem. But, we can take that a step further as well. If the state now requires scanning, you have a Fourth Amendment problem. Specifically, as soon as the government makes scanning mandatory, none of the content found during such scanning can ever be admissible in court, because no warrant was issued upon probable cause. As we again described a couple years ago:

The Fourth Amendment prohibits unreasonable searches and seizures by the government. Like the rest of the Bill of Rights, the Fourth Amendment doesn’t apply to private entities—except where the private entity gets treated like a government actor in certain circumstances. Here’s how that happens: The government may not make a private actor do a search the government could not lawfully do itself. (Otherwise, the Fourth Amendment wouldn’t mean much, because the government could just do an end-run around it by dragooning private citizens.) When a private entity conducts a search because the government wants it to, not primarily on its own initiative, then the otherwise-private entity becomes an agent of the government with respect to the search. (This is a simplistic summary of “government agent” jurisprudence; for details, see the Kosseff paper.) And government searches typically require a warrant to be reasonable. Without one, whatever evidence the search turns up can be suppressed in court under the so-called exclusionary rule because it was obtained unconstitutionally. If that evidence led to additional evidence, that’ll be excluded too, because it’s “the fruit of the poisonous tree.”

All of that seems kinda important?

Yet Brill rushes headlong on the assumption that 230 could have and should have required mandatory scanning for “harmful” content.

Also, most harmful content remains entirely protected by the First Amendment, making this idea even more ridiculous. There would be no liability for it.

Brill seems especially confused about how 230 and the First Amendment work together, suggesting (incorrectly) that 230 gives them some sort of extra editorial benefit that it does not convey:

With Section 230 in place, the platforms will not only have a First Amendment right to edit, but also have the right to do the kind of slipshod editing — or even the deliberate algorithmic promotion of harmful content — that has done so much to destabilize the world.  

Again, this is incorrect on multiple levels. The First Amendment gives them the right to edit. It also gives them the right to slipshod editing. And the right to promote harmful content via algorithms. That has nothing to do with Section 230.

The idea that “algorithmic promotion of harmful content… has done so much to destabilize the world” is a myth that has mostly been debunked. Some early algorithms weren’t great, but most have gotten much better over time. There’s little to no supporting evidence that “algorithms” have been particularly harmful over the long run.

Indeed, what we’ve seen is that while there were some bad algorithms a decade or so ago, pressure from the market has pushed the companies to improve. Users, advertisers, and the media have all pressured the companies to improve their algorithms, and it seems to have worked.

Either way, those algorithms still have nothing to do with Section 230. The First Amendment lets companies use algorithms to recommend things, because algorithms are, themselves, expressions of opinion (“we think you would like this thing more than the next thing”) and nothing in there would trigger legal liability even if you dropped Section 230 altogether.

It’s a best (or worst) of both worlds, enjoyed by no other media companies.

This is simply false. Outright false. EVERY company that has a website that allows third-party content is protected by Section 230 for that third-party content. No company is protected for first-party content, online or off.

For example, last year, Fox News was held liable to the tune of $787 million for defaming Dominion Voting Systems by putting on guests meant to pander to its audience by claiming voter fraud in the 2020 election. The social media platforms’ algorithms performed the same audience-pleasing editing with the same or worse defamatory claims. But their executives and shareholders were protected by Section 230. 

Except… that’s not how any of this works, even without Section 230. Fox News was held liable because the content was produced by Fox News. All of the depositions and transcripts were… Fox News executives and staff. Because they created the defamatory content.

The social media apps didn’t create the content.

This is the right outcome. The blame should always go to the party who violated the law in creating the content.

And Fox News is equally protected by Section 230 if defamation created by someone else is posted in a comment on a Fox News story (something that seems likely to happen frequently).

This whole column is misleading in the extreme, and simply wrong at other points. NewsGuard shouldn’t be publishing misinformation itself given that the company claims it’s promoting accuracy in news and pushing back against misinformation.
