FreshRSS

  • ✇Techdirt
  • Dear Taylor Swift: There Are Better Ways To Respond To Trump’s AI Images Of You Than A Lawsuit (Dark Helmet)

Dear Taylor Swift: There Are Better Ways To Respond To Trump’s AI Images Of You Than A Lawsuit

21 August 2024 at 05:04

We’ve written a ton about Taylor Swift’s various adventures in intellectual property law and the wider internet. Given her sheer popularity and presence in pop culture, that isn’t itself particularly surprising. What has been somewhat interesting about her as a Techdirt subject, though, has been how she has straddled the line between being a victim of overly aggressive intellectual property enforcement as well as being a perpetrator of the same. All of this is to say that Swift is not a stranger to negative outcomes in the digital realm, nor is she a stranger to being the legal aggressor.

Which is why the point of this post is to be something of an open letter to Her Swiftness to not listen to roughly half the internet that is clamoring for her to sue Donald Trump for sharing some AI-generated images on social media falsely implying that Swift had endorsed him. First, the facts.

Taylor Swift has yet to endorse any presidential candidate this election cycle. But former President Donald Trump says he accepts the superstar’s non-existent endorsement.

Trump posted “I accept!” on his Truth Social account, along with a carousel of (Swift) images – at least some of which appear to be AI-generated.

One of the AI-manipulated photos depicts Swift as Uncle Sam with the text, “Taylor wants you to vote for Donald Trump.” The other photos depict fans of Swift wearing “Swifties for Trump” T-shirts.

As the quote notes, not all of the images were AI-generated “fakes.” At least one of them was of a very real woman, who is very much a Swift fan, wearing a “Swifties for Trump” shirt. There is likewise a social media campaign for supporters on the other side of the aisle, “Swifties for Kamala.” None of that is really much of an issue, of course. But the images shared by Trump on Truth Social implied far more than a community of her fans who also like him. So much so, in fact, that he appeared to accept an endorsement that never was.

In case you didn’t notice, immediately below that top left picture is a label that clearly marks the article and associated images as “satire.” The image of Swift doing the Uncle Sam routine to recruit people to back Trump is also obviously not something that came directly from Swift or her people. In fact, while she has not endorsed a candidate in this election cycle (more on that in a moment), Swift endorsed Biden in 2020 with some particularly biting commentary around why she would not vote for Trump.

Now, Trump sharing misleading information on social media is about as newsworthy as the fact that the sun will set tonight. But it is worth noting that social media exploded in response, with a ton of people online urging Swift to “get her legal team involved” or “sue Trump!” And that is something she absolutely should not do. Some outlets have even suggested that Swift should sue under Tennessee’s new ELVIS Act, which prohibits the use of people’s voices or images without their authorization and which has never been tested in court.

Trump’s post might be all it takes to give Swift’s team grounds to sue Trump under Tennessee’s Ensuring Likeness Voice and Image Security Act, or ELVIS Act. The law protects against “just about any unauthorized simulation of a person’s voice or appearance,” said Joseph Fishman, a law professor at Vanderbilt University.

“It doesn’t matter whether an image is generated by AI or not, and it also doesn’t matter whether people are actually confused by it or not,” Fishman said. “In fact, the image doesn’t even need to be fake — it could be a real photo, just so long as the person distributing it knows the subject of the photo hasn’t authorized the use.”

Please don’t do this. First, it probably won’t work. Suing via an untested law that is very likely to run afoul of First Amendment protections is a great way to waste money. Trump also didn’t create the images, presumably, and is merely sharing or re-truthing them. That’s going to make holding him liable for them a challenge.

But the larger point here is that all Swift really has to do here is respond, if she chooses, with her own political endorsement or thoughts. It’s not as though she didn’t do so in the last election cycle. If she’s annoyed at what Trump did and wants to punish him, she can solve that with more speech: her own. Hell, there aren’t a ton of people out there who can command an audience that rivals Donald Trump’s… but she almost certainly can!

Just point out that what he shared was fake. Mention, if she wishes, that she voted against him last time. If she likes, she might want to endorse a different candidate. Or she can merely leave it with a biting denial, such as:

“The images Donald Trump shared implied that I have endorsed him. I have not. In fact, I didn’t authorize him to use my image in any way and request that he does not in the future. On the other hand, Donald Trump has a history of not minding much when it comes to getting a woman’s consent, so I won’t get my hopes up too much.”

  • ✇Techdirt
  • Tennessee’s New Quasi-Book Ban Law Results In School Shutting Down Library Right Before Classes Resume (Tim Cushing)

Tennessee’s New Quasi-Book Ban Law Results In School Shutting Down Library Right Before Classes Resume

21 August 2024 at 01:37

Like far too many legislators in far too many states, Tennessee’s lawmakers have jumped on the book banning bandwagon. For years, public libraries and school libraries were stocked at the discretion of librarians and largely operated without a lot of interference from state governments. While attempts to ban certain books happened now and then, there was never a concerted effort to remove wide swaths of literature from public library shelves.

Now, it’s just the sort of thing that happens multiple times on a daily basis. And the number of book challenges and book ban attempts continues to increase exponentially as idiots push their personal agendas, using the government’s power to control what content the public has access to.

The law passed by the state legislature doesn’t actually ban books from school libraries. But no matter what the text says, that’s obviously the end goal. (h/t BookRiot)

Passed earlier this year, the bill amended the state’s “Age-Appropriate Materials Act,” signed into law by Republican Gov. Bill Lee in 2022, which, according to the ACLU of Tennessee, requires schools to maintain and post lists of the materials in their libraries and to evaluate challenged materials to determine whether or not they are “age-appropriate.”

So, now every Tom, Dick, and Karen can simply challenge a book and force librarians to review the content to see whether or not it’s “age-appropriate.” The initial bill didn’t even bother to define the few terms it used to describe the age-appropriateness of content, much less provide librarians with guidelines for handling challenges and/or eventual book removals.

The “fixed” version isn’t much better. While it does provide a list of things legislators think are inappropriate for all students (including those in their senior year of high school, where they’re often treated legally as adults when charged with crimes), the laundry list of inappropriate things is still far too vague.

H.B. 843 clarifies that books containing “nudity, or descriptions or depictions of sexual excitement, sexual conduct, excess violence, or sadomasochistic abuse” are not appropriate for K–12 students, regardless of the context in which those descriptions or depictions appear in the material.

How much violence is “excessive?” Will health textbooks depicting nudity, sexual conduct, and “sexual excitement” be removed from classrooms? Will no one under the age of 18 be able to access content they’re legally allowed to access anywhere else but in a public library?

Perhaps more importantly, what of the Bible?

During debate on the Tennessee Senate floor, state Sen. Jeff Yarbro (D) noted that the bill’s definition of what is “inappropriate” applies to the Bible. “You cannot read the book of Samuel or Kings or Chronicles, much less much of the first five books of the Bible, without significant discussions of rape, sexual excitement, multiple wives, bestiality — numerous things. That’s before you get in just to, you know, very express and explicit descriptions of violence,” Yarbro argued, according to WKRN News 2.

If this point gets pressed, you can rest assured a carve-out will be created for “religious texts,” but… you know… only applied to one specific religion and its main text.

The terms are vague and overly broad. The guidelines for compliance are still mostly nonexistent. And so, at least one school is reopening for the school year with its library closed.

A Wilson County high school is warning teachers to skip classroom libraries and closed the school library over concerns surrounding a new state law.

Under the law, any brief mention of sex, nudity or excess violence can lead to a book ban.

The Wilson County Director of Schools says they are temporarily closing the library at Green Hill High School to sort through books to make sure they get rid of those that are required to be banned.

So, as teachers and librarians follow the government’s orders to ensure students are only exposed to content the legislative majority likes, those students are going to struggle to comprehend the things they’re learning in civics classes about their fundamental rights.

And all the bill’s supporters have to offer are patently false assertions about how bad things have been for unprotected students prior to the institution of this law.

Senator Pody explains they are trying to protect children from pornography which they’ve found in the past to be available in public schools.

I guarantee you this isn’t true. Notably, Senator Pody offers no times, dates, locations, or any other verification of his claim that “pornography” has been found in school libraries or classrooms. Unfortunately, he’s representative of the legislative majority and its ideals. It’s nothing but censorship propelled by bigotry and backed by lies. Caught in the crossfire are the kids and the public school employees who just want to give them the best education they can.

  • ✇Techdirt
  • Age Verification Laws Are Just A Path Towards A Full Ban On Porn, Proponent Admits (Tim Cushing)

Age Verification Laws Are Just A Path Towards A Full Ban On Porn, Proponent Admits

20 August 2024 at 19:50

It’s never about the children. Supporters of age verification laws, book bans, drag show bans, and abortion bans always claim they’re doing these things to protect children. But it’s always just about themselves. They want to impose their morality on other adults. That’s all there is to it.

Abortion bans are just a way to strip women of bodily autonomy. If it was really about cherishing children and new lives, these same legislators wouldn’t be routinely stripping school lunch programs of funding, introducing onerous means testing to government aid programs, and generally treating children as a presumptive drain on society.

The same goes for book bans. They claim they want to prevent children from accessing inappropriate material. But you can only prevent children from accessing it by removing it entirely from public libraries, which means even adults will no longer be able to read these books.

The laws targeting drag shows aren’t about children. They’re about punishing certain people for being the way they are — people whose mere existence seems to be considered wholly unacceptable by bigots with far too much power.

The slew of age verification laws introduced in recent years is being shot down by courts almost as swiftly as the laws are enacted. And for good reason. Age verification laws are unconstitutional. And they’re certainly not being enacted to prevent children from accessing porn.

Of course, none of the people pushing this kind of legislation will ever openly admit their reasons for doing so. But they will admit it to people they think are like-minded. All it takes is a tiny bit of subterfuge to tease these admissions out of activist groups that want to control what content adults have access to — something that’s barely hidden by their “for the children” facade.

As Shawn Musgrave reports for The Intercept, a couple of people managed to coax this admission out of a former Trump official simply by pretending they were there to give his pet project a bunch of cash.

“I actually never talk about our porn agenda,” said Russell Vought, a former top Trump administration official, in late July. Vought was chatting with two men he thought were potential donors to his right-wing think tank, the Center for Renewing America. 

For the last three years, Vought and the CRA have been pushing laws that require porn websites to verify their visitors are not minors, on the argument that children need to be protected from smut. Dozens of states have enacted or considered these “age verification laws,” many of them modeled on the CRA’s proposals. 

[…]

But in a wide-ranging, covertly recorded conversation with two undercover operatives — a paid actor and a reporter for the British journalism nonprofit Centre for Climate Reporting — Vought let them in on a thinly veiled secret: These age verification laws are a pretext for restricting access to porn more broadly. 

“Thinly veiled” is right. While it’s somewhat amusing Vought was taken in so easily and was immediately willing to say the quiet part loud when he thought cash was on the line, he’s made his antipathy towards porn exceedingly clear. As Musgrave notes in his article, Vought’s contribution to Project 2025 — a right-wing masturbatory fantasy masquerading as policy proposals should Trump take office again — almost immediately veers into the sort of territory normally only explored by dictators and autocrats who relied heavily on domestic surveillance, forced labor camps, and torture to rein in those who disagreed with their moral stances.

Pornography, manifested today in the omnipresent propagation of transgender ideology and sexualization of children, for instance, is not a political Gordian knot inextricably binding up disparate claims about free speech, property rights, sexual liberation, and child welfare. It has no claim to First Amendment protection. Its purveyors are child predators and misogynistic exploiters of women. Their product is as addictive as any illicit drug and as psychologically destructive as any crime. Pornography should be outlawed. The people who produce and distribute it should be imprisoned. Educators and public librarians who purvey it should be classed as registered sex offenders. And telecommunications and technology firms that facilitate its spread should be shuttered.

Perhaps the most surprising part of this paragraph (and, indeed, a lot of Vought’s contribution to Project 2025) is that it isn’t written in all caps with a “follow me on xTwitter” link attached. These are not the words of a hinged person. They are the opposite — the ravings of a man in desperate need of a competent re-hinging service.

And he’s wrong about everything in this paragraph, especially his assertion that pornography is not a First Amendment issue. It is. That’s why so many of these laws are getting rejected by federal courts. The rest is hyperbole that pretends it’s just bold, common-sense assertions. I would like to hear more about the epidemic of porn overdoses that’s leaving children parentless and overloading our health system. And who can forget the recent killing sprees of the Sinaloa Porn Cartel, which has led to federal intervention from the Mexican government?

But the most horrifying part is Vought’s desire to imprison people for producing porn and to turn librarians into registered sex offenders just because their libraries carry some content that personally offends his sensibilities.

These are the words and actions of people who strongly support fascism so long as they’re part of the ruling party. They don’t care about kids, America, democracy, or the Constitution. They want a nation of followers and the power to punish anyone who steps out of line. The Center for Renewing America is only one of several groups with the same ideology and the same censorial urges. These are dangerous people, but their ideas and policy proposals are now so common it’s almost impossible to classify them as “extremist.” There are a lot of Americans who would rather see the nation destroyed than have to, at minimum, tolerate people and ideas they don’t personally like. Their ugliness needs to be dragged out into the open as often as possible, if only to force them to confront the things they’ve actually said and done.

  • ✇Techdirt
  • Justice Alito Almost Messed Up The Internet; Then He Threw A Temper Tantrum (Mike Masnick)

Justice Alito Almost Messed Up The Internet; Then He Threw A Temper Tantrum

2 August 2024 at 21:56

It turns out the internet was one Sam Alito petulant tantrum away from being a total disaster. In two key First Amendment cases, Alito was given the majority opinion to write. And, in both of them, his insistence on obliterating the old boundaries of the First Amendment caused other Justices to switch sides – and Alito to act like a spoiled brat.

This year, the Supreme Court session ran later than usual. Usually, the justices finish up by the end of June, but this year the term extended to July 1st. There were, obviously, a bunch of “big” decisions (Presidential immunity! Chevron deference!) that were held to the very end, including the two big internet cases: the NetChoice cases and the Murthy case.

As people awaited the decisions, there was a fair bit of SCOTUSology as court experts (and non-experts) speculated based on the number of decisions written by each Justice (and which months the cases were heard in) as to which Justice would have the majority decisions in remaining cases. I heard from quite a few such experts who expected that Alito would have the majority decision in the NetChoice cases, given that the other Justices all seemed to have majority opinions from February cases, and Alito’s name seemed to be missing.

Some people were surprised because, in oral arguments for basically all of the internet cases, Alito seemed quite out of step with the rest of the Court (and reality). When the decision finally came out, saying that the lower courts didn’t do the proper analysis for a “facial challenge,” it sent the cases back to the lower courts for a redo. But the majority opinion included some very important commentary about how the First Amendment still applies to social media editorial discretion. The overall ruling was technically a unanimous decision, but some noted that Justice Alito’s “concurrence” read like it had been written to be the majority opinion. It delves deeper into the facts of the case than a concurrence normally would (the majority opinion normally handles that).

Oh, and one other weird thing: in that final week of June, people were confused by Justice Alito not showing up to a couple of decision days, and his absence was never explained. Until now.

CNN now has quite an incredible insider’s tale of how Justice Alito had, in fact, been given the job of writing the majority opinion in the NetChoice cases, but lost it because he tried to push the decision too far into saying that states could regulate content moderation.

Alito, while receptive to the 5th Circuit’s opinion minimizing the companies’ speech interests, emphasized the incompleteness of the record and the need to remand the cases. Joining him were fellow conservatives Clarence Thomas and Neil Gorsuch and, to some extent, Barrett and Jackson.

On the other side was Kagan, leaning toward the 11th Circuit’s approach. She wanted to clarify the First Amendment implications when states try to control how platforms filter messages and videos posted by their users. She was generally joined by Chief Justice John Roberts and Justices Sonia Sotomayor and Brett Kavanaugh.

Alito began writing the court’s opinion for the dominant five-member bloc, and Kagan for the remaining four.

It’s also interesting that Justice Jackson was siding with Alito. During oral arguments, Justice Jackson asked some… odd questions, leading some to worry about how she might come down. The CNN report suggests those fears were legitimate.

Either way, Alito pushed his views too far and caused both Barrett and Jackson to bail out.

But when Alito sent his draft opinion around to colleagues several weeks later, his majority began to crumble. He questioned whether any of the platforms’ content moderation could be considered “expressive” activity under the First Amendment.

Barrett, a crucial vote as the case played out, believed some choices regarding content indeed reflected editorial judgments protected by the First Amendment. She became persuaded by Kagan, but she also wanted to draw lines between the varying types of algorithms platforms use.

“A function qualifies for First Amendment protection only if it is inherently expressive,” Barrett wrote in a concurring statement, asserting that if platform employees create an algorithm that identifies and deletes information, the First Amendment protects that exercise of editorial judgment. That might not be the situation, Barrett said, for algorithms that automatically present content aimed at users’ preferences.

Kagan added a footnote to her majority opinion buttressing that point and reinforcing Barrett’s view. Kagan wrote that the court was not dealing “with feeds whose algorithms respond solely to how users act online – giving them the content they appear to want, without any regard to independent content standards.”

Barrett’s concerns have been worrying to some, as they suggest that algorithmic recommendations may not be protected by the First Amendment. This would upset a bunch of what people thought was settled law regarding things like search engine recommendations. However, the hope is that if such a case comes before the Court (which it almost certainly will…), a fuller briefing on the record would clarify that algorithmic recommendations are still speech.

As we noted, Alito’s concurrence reads as pretty petulant. It dismisses the majority’s “First Amendment applies to social media” explanation as “nonbinding dicta.” CNN’s reporting details that this was him being angry about losing the majority in that case.

But the key reason he lost control over the decision seems to be that he, unlike the eventual majority, would have sided a lot more with the Fifth Circuit’s ruling, which upended a century’s worth of First Amendment law.

Alito had the backing of only two justices in the end, Thomas and Gorsuch. He expressed sympathy for state efforts to restrict what, in an earlier phase of the Texas case, Alito called “the power of dominant social media corporations to shape public discussion of the important issues of the day.”

In his separate July 1 opinion for a minority, Alito pointed up why states might want to regulate how platforms filter content: “Deleting the account of an elected official or candidate for public office may seriously impair that individual’s efforts to reach constituents or voters, as well as the ability of voters to make a fully informed electoral choice. And what platforms call ‘content moderation’ of the news or user comments on public affairs can have a substantial effect on popular views.”

Like Oldham, Alito took jabs at the “sophisticated counsel” who challenged the state regulations.

The same article notes that Alito also lost the majority on another “Fifth Circuit misunderstands the First Amendment” case. The one involving Sylvia Gonzalez, who was retaliated against by the mayor for her efforts to shake up the local government. The Fifth Circuit originally said this was totally fine. Eventually, the Supreme Court sent the case back to the Fifth Circuit to try again.

But again, Alito tried to go too far:

When the justices voted on the case in March, the majority agreed that the 5th Circuit erred in the standard it used. Alito was assigned the opinion.

But as he began writing, he went further than the other justices in his review of Gonzalez’s case. Alito and his colleagues realized he couldn’t “hold five,” as the expression goes, for a majority.

A new majority agreed to dispatch the case with a limited rationale in an unsigned opinion. Rejecting the 5th Circuit’s reasoning, the Supreme Court said the 5th Circuit had applied an “overly cramped view” of the court’s precedent for when people may sue for First Amendment retaliation claims. The high court noted that Gonzalez could not show evidence of whether officers handled similar situations differently because her situation, involving the alleged removal of a document, was exceedingly rare.

Alito also wrote a concurrence for that case, but here he went on a long rant basically explaining why even if the Fifth Circuit used the wrong standard, there were lots of reasons why Gonzalez should have lost her case. Basically, if he had written the majority opinion, all of this would have qualified as “nonbinding dicta” under Alito’s own standard. Now, at least, it’s just a concurrence.

But, apparently, because Alito was ticked off that he couldn’t “hold five” in either of these cases, it caused him to take his ball and go home (i.e., just not show up at the Court on decision days):

On June 20, when the chief justice announced the opinion in Gonzalez v. Trevino, Alito’s chair at the bench was empty. Alito missed that day, as a total of four opinions were handed down, and the next, June 21, when the justices released five other opinions.

Justices sometimes skip one of these final days of the annual session, but usually there’s an obvious reason for the absence, such as travel to a previously scheduled speech. Court officials declined to provide any explanation.

Alito returned for the final four announcement days of the term, yet sometimes appeared preoccupied. On the last day, when Kagan announced the decision in the NetChoice case, Alito was reading through material he had brought along to the bench.

Poor baby.

In both cases, Alito’s view of the First Amendment seems disconnected from reality and history. And, in both cases, he still had a chance to write the majority opinion (sending both cases back down on what are, effectively, technicalities). But, in both cases, he was unable to write a reasonable opinion, causing his colleagues on the bench to jump ship to more reasonable rulings.

And, in response, he decided to just sulk like a teenager who didn’t get his way. In the end, that left us with a much better, more First Amendment supportive majority decision (in both cases). But it’s truly incredible how close we came to bad decisions in each, and how both of those flipped due to Alito’s insistence on pushing his terrible, unsupported ideas about free speech.

  • ✇Techdirt
  • Court: Your 1st Amendment Rights End Where A Cop’s Horse’s Ears Begin (Tim Cushing)

Court: Your 1st Amendment Rights End Where A Cop’s Horse’s Ears Begin

2 August 2024 at 04:50

Say what you will about the roster of Trump apologists being hosted by the Volokh Conspiracy (and I will say plenty if given the chance), but at least Eugene Volokh continues to surface truly interesting cases. (Ilya Somin remains worth reading as well.)

And this one is one for the record books. Possibly the first First Amendment case that involves one human and one horse. And it’s no regular horse, which is why this is a First Amendment lawsuit. The horse in question was ridden by Ocean City (Maryland) police officer Matthew Foreman.

What started as a nuisance call quickly became something else once plaintiff Reniel Meyler realized just how easily Officer Foreman’s mount could be taken off task.

The opinion [PDF] issued by the Maryland federal court recounts the evening’s events that led to this lawsuit. Reniel Meyler finished his shift at work and then headed to a local bar. Having only arrived shortly before the Cork Bar’s closing, Meyler downed a beer and then headed out to the parking lot to socialize with a couple of his friends, Yokimba Bernier and Christoper Clarke. While waiting in the car for Clarke to return from taking a walk with another friend, Meyler and Bernier listened to some music at high volume.

How high? That’s not on the record. It was apparently loud enough to “draw the attention” of two Ocean City PD officers, one of whom was riding a horse named “Moose.” As the officers (and their horse) approached, the pair turned down the volume.

Most of the ensuing encounter was captured on Officer Foreman’s body camera. What it captured was something out of the ordinary. But first, the ordinary stuff.

Immediately upon Foreman’s arrival on the scene, the following exchange took place.

Foreman: Where in the world do you guys think this is OK at 2 o’clock in the morning?

Meyler: Jamaica.

Foreman: Well then go back to Jamaica, cause you can’t do it here.

Bernier: [Inaudible]

Foreman: We can hear you [from] three blocks away, and you can go to jail for noise in Ocean City. OK? You guys really want to go to jail for noise?

Bernier: [Inaudible]

Foreman: No, not a ticket. Jail. Like, handcuffs. Jail. Noise.

Having established the baseline and his control of the scene, Foreman hung around to make sure the music didn’t again rise to law-breaking levels. However, it soon became clear that although Foreman had control of the scene, he no longer had control of his horse.

About 1 minute and 50 seconds into the video, Meyler turns towards Moose and makes some clicking sounds. Moose does not immediately react, but about five seconds later he visibly moves his head and appears to take a step or two in response to the clicking.

!!!

Officer Foreman reined in Moose to stop the movement. As the ruling notes, the horse appeared to “remain calm for the remainder of the video.” Not so for everyone else. Some more arguing about noise levels occurred with Officer Foreman delivering some noises of his own.

In response to either Bernier or Meyler, Foreman says “no, no, you don’t wave your hands at me, boom boom boom you go to these.” As he says “boom boom boom” Foreman takes out a pair of handcuffs and brandishes them in front of Meyler and Bernier, implying they will be arrested.

Shortly after Officer Foreman’s “boom boom boom,” Meyler went back to his click click click, earning this response from the horse’s boss:

“Stop antagonizing my horse. You’re not allowed to do that. You can’t interrupt my animal.”

Just an amazing set of sentences, each one more amazing than the last. Even in context, there’s nothing quite like a cop telling a civilian not to “interrupt” their “animal.”

Then Meyler’s friend (Bernier) decided to up the ante by declaring it wasn’t illegal to “interrupt” Foreman’s horse, pointing out that people pet police horses and talk to them or whatever without being threatened with an arrest. Au contraire, said Officer Foreman, albeit in different words. And different actions.

TL;DR: Meyler continued to click. Foreman continued to yell stuff about “interfering” with his horse. The end result was Meyler being arrested for antagonizing a cop, even if the cop said it was all about antagonizing an animal that remained pretty much unperturbed for the running time of the body cam video.

The official charges were “failure to obey a lawful order” and “interference with a police animal.” The charges were voluntarily dismissed by the prosecutor a month after the arrest. The lawsuit followed, with Meyler arguing being arrested for clicking at a police horse violated his First Amendment rights.

While Meyler still has the opportunity to pursue this in court (the complaint was dismissed without prejudice), it’s unlikely any of his federal constitutional claims have any chance of being found in his favor. (He still has a state law claim he can pursue, however.) And he certainly won’t be allowed to claim his free speech rights were violated when he was first told, then arrested for talking to a cop horse.

Unsurprisingly, there’s absolutely no precedent establishing this particular form of expression:

Here, Meyler has not pointed to a single case involving an arrest made under Ocean City’s police animal interference ordinance, or, for that matter, any case anywhere involving any claims of wrongful arrest related to alleged interference with police animals. Nor has he pointed to any cases involving the application of First Amendment rights to human-animal interactions.

With probable cause supporting the arrest and the complete lack of precedent in play, qualified immunity protects Officer Foreman from this lawsuit. And the court’s not about to use this case to establish Dr. Dolittle-esque precedent protecting people who say things to or make noises at government animals. Meyler’s moonshot has failed. He’ll just have to live with the less satisfying victory of having the charges dismissed. And, given the circumstances, that’s probably the better of the two options.

  • ✇Techdirt
  • Unanimous SCOTUS To States: No Strong-Arming Third Parties To Silence Those You Dislike (Cathy Gellis)

Unanimous SCOTUS To States: No Strong-Arming Third Parties To Silence Those You Dislike

31 May 2024 at 18:27

This week all nine Supreme Court justices found in favor of the NRA. Not because they all like what the NRA is selling (although some of them probably do) but because the behavior of New York State, to try to silence the NRA by threatening third parties, was so constitutionally alarming. If New York could get away with doing what it had done, and threaten a speaker’s business relationships as a means of punishing the speaker, then so could any other state against any other speaker, including those who might be trying to speak out against the NRA. Like with the 303 Creative decision, the merit of this decision does not hinge on the merit of the prevailing party, because it is one that serves to protect every speaker of any merit (including those at odds with, say, the preferred policies of states like Texas and Florida, which would cover those conveying pretty much every liberal viewpoint).

The decision was written by Justice Sotomayor, which was something of a welcome surprise given how she’s gotten the First Amendment badly wrong in some of her more recent jurisprudence, including her dissent in 303 Creative and her decision in the Warhol case, where its expressive protection was conspicuously, and alarmingly, absent from her analysis entirely. But in this case she produced a good and important decision that contemporizes earlier First Amendment precedent, and, importantly, in a way entirely consistent with it. In doing so the Court has strengthened the hand of advocates seeking to protect speakers from a certain type of injury that state actors have been trying to use to silence them.

The Court does not break new ground in deciding this case. It only reaffirms the general principle from Bantam Books that where, as here, the complaint plausibly alleges coercive threats aimed at punishing or suppressing disfavored speech, the plaintiff states a First Amendment claim. [p.18]

In these cases it’s not a direct injury, because the First Amendment pretty clearly says that state actors cannot directly silence expression they do not like (although, true, we still see cases where the government has nevertheless tried to go that route). What this decision says is that state actors also cannot try to silence speakers indirectly by threatening anyone they need to interact with to no longer interact with them.

[A] government official cannot do indirectly what she is barred from doing directly: A government official cannot coerce a private party to punish or suppress disfavored speech on her behalf. [p.11]

Here, the New York official, Vullo, pressured insurance companies she regulated to not do business with the NRA.

As superintendent of the New York Department of Financial Services, Vullo allegedly pressured regulated entities to help her stifle the NRA’s pro-gun advocacy by threatening enforcement actions against those entities that refused to disassociate from the NRA and other gun-promotion advocacy groups. Those allegations, if true, state a First Amendment claim. [p. 1]

As alleged, Vullo did more than argue that the companies not do business with the NRA, which might be a legitimate exercise of a government official’s ability to try to persuade.

A government official can share her views freely and criticize particular beliefs, and she can do so forcefully in the hopes of persuading others to follow her lead. In doing so, she can rely on the merits and force of her ideas, the strength of her convictions, and her ability to inspire others. What she cannot do, however, is use the power of the State to punish or suppress disfavored expression. See Rosenberger, 515 U. S., at 830 (explaining that governmental actions seeking to suppress a speaker’s particular views are presumptively unconstitutional). In such cases, it is “the application of state power which we are asked to scrutinize.” NAACP v. Alabama ex rel. Patterson, 357 U. S. 449, 463 (1958). [p.8-9]

What she did also went beyond a legitimate exercise of regulatory authority.

In sum, the complaint, assessed as a whole, plausibly alleges that Vullo threatened to wield her power against those refusing to aid her campaign to punish the NRA’s gun-promotion advocacy. If true, that violates the First Amendment. [p.15]

[A]lthough Vullo can pursue violations of state insurance law, she cannot do so in order to punish or suppress the NRA’s protected expression. So, the contention that the NRA and the insurers violated New York law does not excuse Vullo from allegedly employing coercive threats to stifle gun-promotion advocacy. [p.17]

It was using that regulatory authority against a third party as a means of punishing a speaker for its views that violated the First Amendment.

As discussed below, Vullo was free to criticize the NRA and pursue the conceded violations of New York insurance law. She could not wield her power, however, to threaten enforcement actions against DFS-regulated entities in order to punish or suppress the NRA’s gun-promotion advocacy. Because the complaint plausibly alleges that Vullo did just that, the Court holds that the NRA stated a First Amendment violation. [p.8]

Nothing in this case gives advocacy groups like the NRA a “right to absolute immunity from [government] investigation,” or a “right to disregard [state or federal] laws.” Patterson, 357 U. S., at 463. Similarly, nothing here prevents government officials from forcefully condemning views with which they disagree. For those permissible actions, the Constitution “relies first and foremost on the ballot box, not on rules against viewpoint discrimination, to check the government when it speaks.” Shurtleff v. Boston, 596 U. S. 243, 252 (2022). Yet where, as here, a government official makes coercive threats in a private meeting behind closed doors, the “ballot box” is an especially poor check on that official’s authority. Ultimately, the critical takeaway is that the First Amendment prohibits government officials from wielding their power selectively to punish or suppress speech, directly or (as alleged here) through private intermediaries. [p.19]

This decision is not the first time that courts have said no to this sort of siege warfare state officials have tried to wage against speakers they don’t like, to cut them off from relationships the speakers depend on when they can’t attack the speakers directly.

The NRA’s allegations, if true, highlight the constitutional concerns with the kind of intermediary strategy that Vullo purportedly adopted to target the NRA’s advocacy. Such a strategy allows government officials to “expand their regulatory jurisdiction to suppress the speech of organizations that they have no direct control over.” Brief for First Amendment Scholars as Amici Curiae Supporting Petitioner 8. It also allows government officials to be more effective in their speech-suppression efforts “[b]ecause intermediaries will often be less invested in the speaker’s message and thus less likely to risk the regulator’s ire.” [p.19]

One such earlier decision that we’ve discussed here is Backpage v. Dart, where the Seventh Circuit said no to government actors flexing their enforcement muscles against third parties in a way calculated to hurt the speaker they are really trying to target. But instead of there being just a few such decisions binding on just a few courts, suddenly there is a Supreme Court decision saying no to this practice now binding on all courts.

The big question for the moment is what happens next. There are still several cases pending before the Supreme Court – the two NetChoice/CCIA cases and Murthy v. Missouri – which all involve questions of whether the government has acted in a way designed to silence a speaker. The NetChoice/CCIA cases are framed a bit differently than this case, with the central question being whether state regulation of a platform directly implicates the platform’s own First Amendment rights, but for the Court to rule in NetChoice and CCIA’s favor and find that platforms do have such rights it would need to recognize that what Texas and Florida are trying to do in regulating Internet platforms is punish viewpoints they don’t favor. But if the Court could recognize that sort of viewpoint punishment is what the state of New York was trying to do indirectly here, perhaps it can also recognize that these other states are trying to do it directly there.

Meanwhile, in Murthy v. Missouri, the legal question is closer to the one raised here, and indeed the case was even heard on the same day. In that case the federal government is alleged to have unconstitutionally pressured platforms to cut certain speakers off from their services. It would be the same unconstitutional mechanics, to punish a speaker by coming after a third party the speaker depends on, but as even this decision suggests, only if the conduct of the government was in fact coercive and not simply an expression of preference the platforms were free to take or leave.

Which is why the concurrences from Justices Gorsuch and Jackson may be meaningful, if not for this NRA case then for others. With the latter concurrence, Jackson appears to want to ensure that government actors are not chilled from exercising legitimate enforcement authority if they also disfavor the speaker who is in their regulatory sights.

The lesson of Bantam Books is that “a government official cannot do indirectly what she is barred from doing directly.” Ante, at 11. That case does not hold that government coercion alone violates the First Amendment. And recognizing the distinction between government coercion and a First Amendment violation is important because our democracy can function only if the government can effectively enforce the rules embodied in legislation; by its nature, such enforcement often involves coercion in the form of legal sanctions. The existence of an allegation of government coercion of a third party thus merely invites, rather than answers, the question whether that coercion indirectly worked a violation of the plaintiff’s First Amendment rights. [p.2 Jackson concurrence]

In her view, the earlier Bantam Books case the decision is rooted in is not the correct precedent; Jackson would instead look at cases challenging retaliatory actions by the government as a First Amendment violation, and here she thinks that analytical shoe better fits.

[It] does suggest that our First Amendment retaliation cases might provide a better framework for analyzing these kinds of allegations—i.e., coercion claims that are not directly related to the publication or distribution of speech. And, fortunately for the NRA, the complaint in this case alleges both censorship and retaliation theories for how Vullo violated the First Amendment—theories that, in my opinion, deserve separate analyses. [p.4 Jackson concurrence]

As for the Gorsuch concurrence, it is quite brief, and follows here in its entirety:

I write separately to explain my understanding of the Court’s opinion, which I join in full. Today we reaffirm a well-settled principle: “A government official cannot coerce a private party to punish or suppress disfavored speech on her behalf.” Ante, at 11. As the Court mentions, many lower courts have taken to analyzing this kind of coercion claim under a four-pronged “multifactor test.” Ibid. These tests, the Court explains, might serve “as a useful, though nonexhaustive, guide.” Ante, at 12. But sometimes they might not. Cf. Axon Enterprise, Inc. v. FTC, 598 U. S. 175, 205–207 (2023) (Gorsuch, J., concurring in judgment). Indeed, the Second Circuit’s decision to break up its analysis into discrete parts and “tak[e] the [complaint’s] allegations in isolation” appears only to have contributed to its mistaken conclusion that the National Rifle Association failed to state a claim. Ante, at 15. Lower courts would therefore do well to heed this Court’s directive: Whatever value these “guideposts” serve, they remain “just” that and nothing more. Ante, at 12. “Ultimately, the critical” question is whether the plaintiff has “plausibly allege[d] conduct that, viewed in context, could be reasonably understood to convey a threat of adverse government action in order to punish or suppress the plaintiff’s speech.” Ante, at 12, 19.

What seems key to him is the last line, which reads like a canary of an issue potentially splitting the Court in Murthy, where the government clearly engaged in communications with intermediary platforms but the question is whether those communications amounted to attempts at persuasion, which is lawful, or coercion, which is not.

Meanwhile, this case itself will now be remanded. The Court ruled based on the facts as the NRA pled them – as was procedurally proper to do at this stage of the litigation – but it’s conceivable that when put to a standard of proof there won’t be enough to maintain its First Amendment claim. And even if the claim survives, the state for its part can still litigate whether it has an immunity defense to this alleged constitutional injury. So the matter has not yet been put to rest, but presumably the underlying First Amendment question it raised now has.

  • ✇Techdirt
  • Congressional Committee Threatens To Investigate Any Company Helping TikTok Defend Its Rights (Mike Masnick)

Congressional Committee Threatens To Investigate Any Company Helping TikTok Defend Its Rights

10 May 2024 at 18:27

“Do you now, or have you ever, worked with TikTok to help defend its rights?”

That McCarthyism-esque question is apparently being asked by members of Congress to organizations that have been working with TikTok to defend its Constitutional rights.

Does anyone think it’s right for Congress to threaten to punish organizations for working with TikTok? Does that sound like a First Amendment violation to you? Because it sure does to me.

Over the last year or so, we’ve been hearing a lot of talk out of Congress on two specific issues: the supposed horrors of government officials suppressing speech and, at the same time, the supposed horrors of a successful social media app that has ties to China.

Would it surprise you to find that there are some hypocrites in Congress about all of this? Shocking, I know.

We already highlighted how a bunch of members of Congress both signed an amicus brief in the Murthy case saying that governments should never, ever, interfere with speech and also voted to ban TikTok. But, would those same members of Congress who are so worried about “jawboning” by government officials to suppress speech also then use the power of Congress to silence voices trying to defend TikTok?

Yeah, you know where this is going.

NetChoice has been the main trade group that has been defending against all the terrible laws being thrust upon the internet over the last few years. Often people dismiss NetChoice as “big tech” or “the tech industry,” but in my experience they’ve been solidly standing up for good and important internet speech policies. NetChoice has been structured to be independent of its members (i.e., they get to decide what cases they take on, not their members, which sometimes means their members dislike the causes and cases NetChoice takes on).

On Wednesday of this week, NetChoice’s membership roster looked like this:

[Image: screenshot of NetChoice’s membership roster as of Wednesday, with TikTok listed]

I highlighted TikTok in particular, because on Thursday, NetChoice’s membership roster looked like this:

[Image: screenshot of NetChoice’s membership roster as of Thursday, with TikTok no longer listed]

TikTok is missing.

Why? Well, because members of Congress threatened to investigate NetChoice if it didn’t drop TikTok from its roster. Politico had some of this story last night, claiming that there was pressure from Congress to drop TikTok:

“The Select Committee’s brazen efforts to intimidate private organizations for associating with a company with 170 million American users is a clear abuse of power that smacks of McCarthyism,” TikTok spokesperson Alex Haurek said in a statement, referring to the House China panel. “It’s a sad day when Members of Congress single out individual companies without evidence while trampling on constitutional rights and the democratic process,” Haurek added. A spokesperson for NetChoice didn’t respond to a request for comment.

The two people told Daniel that NetChoice faced pressure from the office of House Majority Leader Steve Scalise (R-La.) to dump TikTok. A third person said that while no threat was made, NetChoice was told that the Select Committee on China would be investigating groups associated with TikTok and decided to sever ties as a result.

I’ve heard that the claim there was “no threat” is not accurate. As the rest of that paragraph makes clear, there was very much an implied threat that Congress would investigate organizations working with TikTok to defend its rights. I’m also hearing that others, like PR agencies and lobbying organizations that work with TikTok, are now facing similar threats from Congress.

Indeed, despite the “denial” of any threat, Politico gets the “House Select Committee on the CCP” to admit that it will launch an investigation into any organization that helps TikTok defend its rights:

“Significant bipartisan majorities in both the House and the Senate deemed TikTok a grave national security threat and the President signed a bill into law requiring them to divest from the CCP,” a Scalise spokesperson told PI. “It should not come as a surprise to those representing TikTok that as long as TikTok remains connected to the CCP, Congress will continue its rigorous oversight efforts to safeguard Americans from foreign threats.”

Guys, that’s not “rigorous oversight” or “safeguarding Americans.” That’s using threats of bogus, costly investigations to force companies to stop working with TikTok and helping it defend its rights under the Constitution. That seems to be a hell of a lot more like “jawboning” and a much bigger First Amendment problem than the Biden administration complaining publicly that it didn’t like how Facebook was handling COVID misinformation.

Remember, this is what the GOP Congressional folks said when they filed their amicus in the Murthy case:

Wielding threats of intervention, the executive branch of the federal government has engaged in a sustained effort to coerce private parties into censoring speech on matters of public concern. On issue after issue, the Biden Administration has distorted the free marketplace of ideas promised by the First Amendment, bringing the weight of federal authority to bear on any speech it dislikes

Isn’t that… exactly what these Congressional committees are now doing themselves? Except, much worse? Because the threats are much more direct, and the punitive nature of not obeying is even clearer and more directly tied to the speech at issue?

This sure seems to be exactly unconstitutional “jawboning.”

Whether or not you believe that there are real risks from China, it seems absolutely ridiculous that Congress is now basically following an authoritarian playbook, threatening companies for merely associating with and/or defending the rights of a company.

It undermines the principles of free speech and association, allowing governmental entities to dictate what organizations can and cannot support. This overreach of power directly chills advocacy efforts and hinders the protection of fundamental rights.


Bipartisan Group Of Senators Introduce New Terrible ‘Protect The Kids Online’ Bill

2 May 2024 at 21:05

Apparently, the world needs even more terrible bills that let ignorant senators grandstand to the media about how they’re “protecting the kids online.” There’s nothing more serious to work on than that. The latest bill comes from Senators Brian Schatz and Ted Cruz (with assists from Senators Chris Murphy, Katie Britt, Peter Welch, Ted Budd, John Fetterman, Angus King, and Mark Warner). This one is called the “Kids Off Social Media Act” (KOSMA) and it’s an unconstitutional mess built on a long list of debunked and faulty premises.

It’s especially disappointing to see this from Schatz. A few years back, I know his staffers would regularly reach out to smart people on tech policy issues in trying to understand the potential pitfalls of the regulations he was pushing. Either he’s no longer doing this, or he is deliberately ignoring their expert advice. I don’t know which one would be worse.

The crux of the bill is pretty straightforward: it would be an outright ban on social media accounts for anyone under the age of 13. As many people will recognize, we kinda already have a “soft” version of that because of COPPA, which puts much stricter rules on sites directed at those under 13. Because most sites don’t want to deal with those stricter rules, they officially limit account creation to those over the age of 13.

In practice, this has been a giant mess. Years and years ago, Danah Boyd pointed this out, talking about how the “age 13” bit is a disaster for kids, parents, and educators. Her research showed that all this generally did was to have parents teach kids that “it’s okay to lie,” as parents wanted kids to use social media tools to communicate with grandparents. Making that “soft” ban a hard ban is going to create a much bigger mess and prevent all sorts of useful and important communications (which, yeah, is a 1st Amendment issue).

Schatz’s reasons put forth for the bill are just… wrong.

No age demographic is more affected by the ongoing mental health crisis in the United States than kids, especially young girls. The Centers for Disease Control and Prevention’s Youth Risk Behavior Survey found that 57 percent of high school girls and 29 percent of high school boys felt persistently sad or hopeless in 2021, with 22 percent of all high school students—and nearly a third of high school girls—reporting they had seriously considered attempting suicide in the preceding year.

Gosh. What was happening in 2021 with kids that might have made them feel hopeless? Did Schatz and crew simply forget about the fact that most kids were under lockdown and physically isolated from friends for much of 2021? And that there were plenty of other stresses, including millions of people, including family members, dying? Noooooo. Must be social media!

Studies have shown a strong relationship between social media use and poor mental health, especially among children.

Note the careful word choice here: “strong relationship.” They won’t say a causal relationship because studies have not shown that. Indeed, as the leading researcher in the space has noted, there continues to be no real evidence of any causal relationship. The relationship appears to work the other way: kids who are dealing with poor mental health and who are desperate for help turn to the internet and social media because they’re not getting help elsewhere.

Maybe offer a bill that helps kids get access to more resources that help them with their mental health, rather than taking away the one place they feel comfortable going? Maybe?

From 2019 to 2021, overall screen use among teens and tweens (ages 8 to 12) increased by 17 percent, with tweens using screens for five hours and 33 minutes per day and teens using screens for eight hours and 39 minutes.

I mean, come on Schatz. Are you trolling everyone? Again, look at those dates. WHY DO YOU THINK that screen time might have increased 17% for kids from 2019 to 2021? COULD IT POSSIBLY BE that most kids had to do school via computers and devices at home, because there was a deadly pandemic making the rounds?

Maybe?

Did Schatz forget that? I recognize that lots of folks would like to forget the pandemic lockdowns, but this seems like a weird way to manifest that.

I mean, what a weird choice of dates to choose. I’m honestly kind of shocked that the increase was only 17%.

Also, note that the data presented here isn’t about an increase in social media use. It could very well be that the 17% increase was Zoom classes.

Based on the clear and growing evidence, the U.S. Surgeon General issued an advisory last year, calling for new policies to set and enforce age minimums and highlighting the importance of limiting the use of features, like algorithms, that attempt to maximize time, attention, and engagement.

Wait. You mean the same Surgeon General’s report that denied any causal link between social media and mental health (which you falsely claim has been proved) and noted just how useful and important social media is to many young people?

From that report, which Schatz misrepresents:

Social media can provide benefits for some youth by providing positive community and connection with others who share identities, abilities, and interests. It can provide access to important information and create a space for self-expression. The ability to form and maintain friendships online and develop social connections are among the positive effects of social media use for youth. These relationships can afford opportunities to have positive interactions with more diverse peer groups than are available to them offline and can provide important social support to youth. The buffering effects against stress that online social support from peers may provide can be especially important for youth who are often marginalized, including racial, ethnic, and sexual and gender minorities. For example, studies have shown that social media may support the mental health and well-being of lesbian, gay, bisexual, asexual, transgender, queer, intersex and other youths by enabling peer connection, identity development and management, and social support. Seven out of ten adolescent girls of color report encountering positive or identity-affirming content related to race across social media platforms. A majority of adolescents report that social media helps them feel more accepted (58%), like they have people who can support them through tough times (67%), like they have a place to show their creative side (71%), and more connected to what’s going on in their friends’ lives (80%). In addition, research suggests that social media-based and other digitally-based mental health interventions may also be helpful for some children and adolescents by promoting help-seeking behaviors and serving as a gateway to initiating mental health care.

Did Schatz’s staffers just, you know, skip over that part of the report or nah?

The bill also says that companies need to not allow algorithmic targeting of content to anyone under 17. This is also based on a widely believed myth that algorithmic content is somehow problematic. No studies have legitimately shown that of current algorithms. Indeed, a recent study showed that removing algorithmic targeting leads to people being exposed to more disinformation.

Is this bill designed to force more disinformation on kids? Why would that be a good idea?

Yes, some algorithms can be problematic! About a decade ago, algorithms that tried to optimize solely for “engagement” definitely created some bad outcomes. But it’s been a decade since most such algorithms have been designed that way. On most social media platforms, the algorithms are designed in other ways, taking into account a variety of different factors, because they know that optimizing just on engagement leads to bad outcomes.
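
To make that distinction concrete, here is a minimal, purely hypothetical sketch in Python of the two approaches. The class, signals, and weights are invented for illustration and do not reflect any platform’s actual ranking code.

    # Toy illustration only: two ways to rank a feed. The signals and weights are
    # invented for this sketch and are not any real platform's formula.
    from dataclasses import dataclass

    @dataclass
    class Post:
        predicted_engagement: float      # model's guess at clicks/replies, 0..1
        predicted_integrity_risk: float  # likelihood the post violates policy, 0..1
        source_quality: float            # prior on the author's reliability, 0..1
        from_followed_account: bool

    def engagement_only_score(post: Post) -> float:
        # The "optimize solely for engagement" approach described above.
        return post.predicted_engagement

    def blended_score(post: Post) -> float:
        # Engagement is one input among several; likely-harmful content is demoted.
        score = 0.5 * post.predicted_engagement
        score += 0.3 * post.source_quality
        score += 0.2 * (1.0 if post.from_followed_account else 0.0)
        score -= 0.8 * post.predicted_integrity_risk
        return score

    posts = [
        Post(0.9, 0.7, 0.2, False),  # outrage bait: high engagement, high risk
        Post(0.5, 0.05, 0.8, True),  # ordinary post from a followed, reliable source
    ]
    print(max(posts, key=engagement_only_score))  # the outrage bait "wins"
    print(max(posts, key=blended_score))          # the reliable, followed post wins

The only point of the sketch is that once a ranker subtracts predicted harm and weighs source quality, the highest-engagement item is no longer automatically the top result.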

Then the bill tacks on Cruz’s bill to require schools to block social media. There’s an amusing bit when reading the text of that part of the law. It says that you have to block social media on “federally funded networks and devices” but also notes that it does not prohibit “a teacher from using a social media platform in the classroom for educational purposes.”

But… how are they going to access those if the school is required by law to block access to such sites? Most schools are going to do a blanket ban, and teachers are going to be left to do what? Show kids useful YouTube science videos on their phones? Or maybe some schools will implement a special teacher code that lets them bypass the block. And by the end of the first week of school half the kids in the school will likely know that password.

What are we even doing here?

Schatz has a separate page hyping up the bill, and it’s even dumber than the first one above. It repeats some of the points above, though this time linking to Jonathan Haidt, whose work has been trashed left, right, and center by actual experts in this field. And then it gets even dumber:

Big Tech knows it’s complicit – but refuses to do anything about it…. Moreover, the platforms know about their central role in turbocharging the youth mental health crisis. According to Meta’s own internal study, “thirty-two percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse.” It concluded, “teens blame Instagram for increases in the rate of anxiety and depression.”

This is not just misleading, it’s practically fraudulent misrepresentation. The study Schatz is citing is one that was revealed by Frances Haugen. As we’ve discussed, it was done because Meta was trying to understand how to do better. Indeed, the whole point of that study was to see how teens felt about using social media in 12 different categories. Meta found that most boys felt neutral or better about themselves in all 12 categories. For girls, it was 11 out of 12. It was only in one category, body image, where the split was more pronounced. 32% of girls said that it made them feel worse. Basically the same percentage said it had no impact, or that it made them feel better.

[Image: slide from Meta’s internal study on how teens said Instagram affected them across 12 categories]

Also, look at that slide’s title. The whole point of this study was to figure out if they were making kids feel worse in order to look into how to stop doing that. And now, because grandstanders like Schatz are falsely claiming that this proves they were “complicit” and “refuse to do anything about it,” no social media company will ever do this kind of research again.

Because, rather than proactively looking to see if they’re creating any problems that they need to try to fix, Schatz and crew are saying “simply researching this is proof that you’re complicit and refuse to act.”

Statements like this basically ensure that social media companies stick their heads in the sand, rather than try to figure out where harm might be caused and take steps to stop that harm.

Why would Schatz want to do that?

That page then also falsely claims that the bill does not require age verification. This is a silly two-step that politicians pull every time they push bills like this. Does it directly mandate age verification? No. But by making the penalties for failing to keep kids off social media serious and costly, it will obviously drive companies to introduce stronger age verification measures that are inherently dangerous and an attack on privacy.

Perhaps Schatz doesn’t understand this, but it’s been widely discussed by many of the experts his staff used to talk to. So, really, he has no excuse.

The FAQ also claims that the bill will pass constitutional muster, while at the same time admitting that they know there will be lawsuits challenging it:

Yes. As, for example, First Amendment expert Neil Richards explains, “[i]nstead of censoring the protected expression present on these platforms, the act takes aim at the procedures and permissions that determine the time, place and manner of speech for underage consumers.” The Supreme Court has long held that the government has the right to regulate products to protect children, including by, for instance, restricting the sale of obscene content to minors. As Richards explains: “[i]n the same way a crowded bar or nightclub is no place for a child on their own”—or in the way every state in the country requires parental consent if it allows a minor to get a tattoo—“this rule would set a reasonable minimum age and maturity limitation for social media customers.” 

While we expect legal challenges to any bill aimed at regulating social media companies, we are confident that this content-neutral bill will pass constitutional muster given the government interests at play.

There are many reasons why this is garbage under the law, but rather than breaking them all down (we’ll wait for judges to explain it in detail), I’ll just point out the major tell is in the law itself. In the definition of what a “social media platform” is in the law, there is a long list of exceptions of what the law does not cover. It includes a few “moral panics of yesteryear” that gullible politicians tried to ban and were found to have violated the First Amendment in the process.

It explicitly carves out video games and content that is professionally produced, rather than user-generated:

[Image: excerpt of the bill’s definitions carving out video games and professionally produced content]

Remember the moral panics about video games and TV destroying kids’ minds? Yeah. So this child protection bill is quick to say “but we’re not banning that kind of content!” Because whoever drafted the bill recognized that the Supreme Court has already made it clear that politicians can’t do that for video games or TV.

So, instead, they have to pretend that social media content is somehow on a whole different level.

But it’s not. It’s still the government restricting access to content. They’re going to pretend that there’s something unique and different about social media, and that they’re not banning the “content” but rather the “place” and “manner” of accessing that content. Except that’s laughable on its face.

You can see that in the quote above where Schatz does the fun dance where he first says “it’s okay to ban obscene content to minors” and then pretends that’s the same as restrictions on access to a bar (it’s not). One is about the content, and one is about a physical place. Social media is all about the content, and it’s not obscene content (which is already an exception to the First Amendment).

And, the “parental consent” for tattoos… I mean, what the fuck? Literally four questions earlier in the same FAQ, Schatz insists that his bill has nothing to do with parental consent. And then he tries to defend it by claiming it’s no different than parental consent laws?

The FAQ also claims this:

This bill does not prevent LGBTQ+ youth from accessing relevant resources online and we have worked closely with LGBTQ+ groups while crafting this legislation to ensure that this bill will not negatively impact that community.

I mean, it’s good you talked to some experts, but I note that most of the LGBTQ+ groups I’m aware of are not listed on your list of “groups supporting the bill” on the very same page. That absence stands out.

And, again, the Surgeon General’s report that you misleadingly cited elsewhere highlights how helpful social media can be to many LGBTQ+ youth. You can’t just say “nah, it won’t harm them” without explaining why all those benefits that have been shown in multiple studies, including the Surgeon General’s report, somehow don’t get impacted.

There’s a lot more, but this is just a terrible bill that would create a mess. And, I’m already hearing from folks in DC that Schatz is trying to get this bill added to the latest Christmas tree of a bill to reauthorize the FAA.

It would be nice if we had politicians looking to deal with the actual challenges facing kids these days, including the lack of mental health support for those who really need it. Instead, we get unconstitutional grandstanding nonsense bills like this.

Everyone associated with this bill should feel ashamed.


SCOTUS Needs To Take Up The Texas Age Verification Lawsuit

19 April 2024 at 21:52

I think we could witness one of the most important First Amendment legal showdowns ever.

The U.S. Supreme Court is being asked to rule on the constitutionality of mandatory age verification for porn websites. If the high court takes up the case, it would queue up a landmark debate pertaining to the First Amendment and privacy rights of millions of people.

Free Speech Coalition and the parent companies of the largest adult entertainment websites on the web filed suit in the U.S. District Court for the Western District of Texas with the intention to block House Bill (HB) 1181.

HB 1181 requires mandatory age verification for porn websites with users from Texas IP addresses. It also requires pseudoscientific health warnings to be posted on adult websites. Counsel representing the coalition and the porn companies argued that it violated the First Amendment rights of consumers and owners of the websites. This prompted the federal district court to initially enjoin the state of Texas from enforcing the law because its text appeared to be unconstitutional.

Acting Texas Attorney General Angela Colmenero appealed the injunction to the Fifth Circuit Court of Appeals. After a clear demonstration of classic Fifth Circuit tap dancing and the return of Ken Paxton to the helm of the Attorney General’s office, Texas was granted permission to enforce the age verification requirements outlined in the law. Luckily, the circuit judges properly applied the Zauderer standard, denying the requirement to post the bogus health warnings.

Soon after this, Paxton announced lawsuits against the parent companies of Pornhub, xHamster, and Stripchat for violations of HB 1181, with penalties under the law totaling millions of dollars in damages. After those lawsuits were announced and filed in Travis County courts, counsel for the plaintiffs sought to stay enforcement while they petitioned the high court to take up the case. Justice Samuel Alito, the circuit justice for the Fifth Circuit, has yet to indicate whether the Supreme Court will take it up. There is no reason it shouldn’t, given how important this case is going forward and how this issue keeps showing up in so many other states.

The case, Free Speech Coalition et al. v. Paxton, is so important that the national affiliate of the American Civil Liberties Union announced it is aiding the plaintiffs and their current counsel, a team from the big law firm Quinn Emanuel. The ACLU will support the petition for a writ of certiorari, potential oral arguments, and the rest of the effort to have House Bill 1181 and age verification laws like it declared unconstitutional.

Plaintiffs accurately argue that this is settled law, referring to the high court’s landmark decision in Reno v. American Civil Liberties Union. That decision found that segregating the content of the internet by age violates the rights not only of adults but of minors as well. The vast majority of age verification laws as they are structured now do just that.

While the Supreme Court provided for a less restrictive means to filter out minors from viewing age-restricted materials and potentially facing some level of harm, the vehicles of enforcement and some of the options touted in these bills for controlling minors’ web usage are, to the plaintiffs and civil liberties organizations, a violation of the First Amendment. ACLU and Quinn Emanuel attorneys for the plaintiffs present these arguments in their petition for writ of certiorari, which was filed in April 2024. Now, we just need the Supreme Court to take this seriously and not let the Fifth Circuit, the circuit that upheld a ban on drag shows, dictate law for the nation.

Michael McGrady covers the legal and tech side of the online porn business, among other topics.


Congressional Testimony On Section 230 Was So Wrong That It Should Be Struck From The Record

19 April 2024 at 18:26

A few months ago, we wondered if Wired had fired its entire fact-checking staff because it published what appeared to be a facts-optional article co-authored by professional consistently wrong Jaron Lanier and an academic I’d not come across before, Allison Stanger. The article suggested that getting rid of Section 230 “could save everything.” Yet the article was so far off-base that it was in the “not even wrong” category of wrongness.

I’m not going to review all the reasons it was wrong. You can go back to my original article for that, though I will note that the argument seemed to suggest that getting rid of Section 230 would both lead to better content moderation and, at the same time, only moderation based on the First Amendment. Both of those points are obviously wrong, but the latter one is incoherent.

Given his long track record of wrongness, I had assumed that much of the article likely came from Lanier. However, I’m going to reassess that in light of Stanger’s recent performance before the House Energy & Commerce Committee. Last week, there was this weird hearing about Section 230, in which the Committee invited three academic critics of Section 230, and not a single person who could counter their arguments and falsehoods. We talked about this hearing a bit in this week’s podcast, with Rebecca MacKinnon from the Wikimedia Foundation.

Stanger was one of the three witnesses. The other two, Mary Anne Franks and Mary Graw Leary, presented some misleading and confused nonsense about Section 230. However, the misleading and confused nonsense about Section 230 at least fits into the normal framework of the debate around Section 230. There is confusion about how (c)(1) and (c)(2) interact, the purpose of Section 230, and (especially) some confusion about CSAM and Section 230 and an apparent unawareness that federal criminal behavior is exempted from Section 230.

But, let’s leave that aside. Because Stanger’s submission was so far off the mark that whoever invited her should be embarrassed. I’ve seen some people testify before Congress without knowing what they’re talking about, but I cannot recall seeing testimony this completely, bafflingly wrong before. Her submitted testimony is wrong in all the ways that the Wired article was wrong and more. There are just blatant factual errors throughout it.

It is impossible to cover all of the nonsense, so we’re just going to pick some gems.

Without Section 230, existing large social media companies would have to adapt. Decentralized Autonomous Organizations, (DAOs) such as BlueSky and Mastodon, would become more attractive. The emergent DAO social media landscape should serve to put further brakes on virality, allowing a more regional social media ecosystem to emerge, thereby creating new demand for local media. In an ideal world, networks of DAOs would comprise a new fediverse (a collection of social networking servers which can communicate with each other, while remaining independently controlled), where users would have greater choice and control over the communities of which they are a part.

So, um. That’s not what DAOs are, professor. You seem to be confusing decentralized social media with decentralized autonomous organizations, which are a wholly different thing. This is kind of like saying “social security benefits” when you mean “social media influencers” because both begin with “social.” They’re not the same thing.

A decentralized social media site is what it says on the tin. It’s a type of social media that isn’t wholly controlled by a single company. Different bits of it can be controlled by others, whether by its users or by alternative third-party providers. A DAO is a governance mechanism, often using cryptocurrency and tokens to enable a kind of democratic voting, or (possibly) a set of smart contracts, that determines how a loosely defined organization is run. They are not the same.

In theory, a decentralized social media site could be run by a DAO, but I don’t know of any that currently are.
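
For anyone who hasn’t run into the term, here is a minimal, purely hypothetical sketch of what a DAO mechanism is: a way of governing an organization (in this toy case, token-weighted voting on a proposal), not an architecture for hosting or exchanging posts. All names and numbers below are made up.

    # Toy illustration only: a DAO-style mechanism is a governance scheme. Here,
    # voting power on a proposal is proportional to (hypothetical) tokens held.
    def dao_vote(token_holdings: dict, votes: dict) -> str:
        tally = {}
        for member, choice in votes.items():
            tally[choice] = tally.get(choice, 0.0) + token_holdings.get(member, 0.0)
        return max(tally, key=tally.get)

    result = dao_vote(
        token_holdings={"alice": 100.0, "bob": 30.0, "carol": 30.0},
        votes={"alice": "fund moderation tools", "bob": "fund marketing", "carol": "fund marketing"},
    )
    print(result)  # "fund moderation tools": alice's tokens outweigh bob and carol combined

A decentralized social media service, by contrast, is about where posts live and who runs the servers: many independently operated instances exchanging posts with one another. Nothing in that architecture requires token voting at all.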

Also, um, decentralized social media can only really exist because of Section 230. “Without Section 230,” you wouldn’t have Bluesky or Mastodon, because they would face ruinous litigation for hosting content that people would sue over. So, no, you would not have either more decentralized social media (which I think is what you meant) or DAOs (which are wholly unrelated). You’d have a lot less, because hosting third-party speech would come with way more liability risk.

Also, there’s nothing inherent to decentralized social media that means you’d “put the brakes on virality.” Mastodon has developed to date in a manner designed to tamp down virality, but Bluesky hasn’t? Nor have other decentralized social media offerings, many of which hope to serve a global conversation where virality is a part of it. And that wouldn’t really change with or without Section 230. Mastodon made that decision because of the types of communities it wanted to foster. And, indeed, its ability to do that is, in part, due to intermediary liability protections like Section 230, that enable the kind of small, more focused community moderation Mastodon embraces already.

It’s really not clear to me that Professor Stanger even knows what Section 230 does.

Non-profits like Wikipedia are concerned that their enterprise could be shut down through gratuitous defamation lawsuits that would bleed them dry until they ceased to exist (such as what happened with Gawker). I am not convinced this is a danger for Wikipedia, since their editing is done by humans who have first amendment rights, and their product is not fodder for virality….

Again, wut? The fact that their editing is “done by humans” has literally no impact on anything here. Why even mention that? Humans get sued for defamation all the time. And, if they’re more likely to get sued for defamation, they’re less likely to even want to edit at all.

And people get mad about their Wikipedia articles all the time, and sometimes they sue over them. Section 230 gets those lawsuits thrown out. Without it, those lawsuits would last longer and be more expensive.

Again, it’s not at all clear if Prof. Stanger even knows what Section 230 is or how it works.

The Facebook Files show that Meta knew that its engagement algorithms had adverse effects on the mental health of teenage girls, yet it has done nothing notable to combat those unintended consequences. Instead, Meta’s lawyers have invoked Section 230 in lawsuits to defend itself against efforts to hold it liable for serious harms

Again, this is just wrong. What the crux of the Facebook Files showed was that Meta was, in fact, doing research to learn about where its algorithms might cause harm in order to try to minimize that harm. However, because of some bad reporting, it now means that companies will be less likely to even do that research, because people like Professor Stanger will misrepresent it, claiming that they did nothing to try to limit the harms. This is just outright false information.

Also, the cases where Meta has invoked Section 230 would be unrelated to the issue being discussed here because 230 is about not being held liable for user content.

The online world brought to life by Section 230 now dehumanizes us by highlighting our own insignificance. Social media and cancel culture make us feel small and vulnerable, where human beings crave feeling large and living lives of meaning, which cannot blossom without a felt sense of personal agency that our laws and institutions are designed to protect. While book publishers today celebrate the creative contributions of their authors, for-profit Internet platforms do not.

I honestly have no idea what’s being said here. “Dehumanizes us by highlighting our own insignificance?” What are you even talking about? People were a lot more “insignificant” pre-internet, when they had no way to speak out. And what does “cancel culture” have to do with literally any of this?

Without Section 230, companies would be liable for the content on their platforms. This would result in an explosion of lawsuits and greater caution in such content moderation, although companies would have to balance such with first amendment rights. Think of all the human jobs that could be generated!

Full employment for tort lawyers! I mean, this is just a modern version of Bastiat’s broken window fallacy. Think of all the economic activity if we just break all the windows in our village!

Again and again, it becomes clear that Stanger has no clue how any of this works. She does not understand Section 230. She does not understand the internet. She does not understand the First Amendment. And she does not understand content moderation. It’s a hell of a thing, considering she is testifying about Section 230 and its impact on social media and the First Amendment.

At a stroke, content moderation for companies would be a vastly simpler proposition. They need only uphold the First Amendment, and the Courts would develop the jurisprudence to help them do that, rather than to put the onus of moderation entirely on companies.

That is… not at all how it would work. They don’t just need to “uphold the First Amendment” (which is not a thing that companies can even do). The First Amendment’s only role is in restricting the government, not companies, from passing laws that infringe on a person’s ability to express themselves.

Instead, as has been detailed repeatedly, companies would face the so-called “moderator’s dilemma.” Because the First Amendment requires distributors to have actual knowledge of content violating the law to be liable, a world without Section 230 would incentivize one of two things, neither of which is “upholding the First Amendment.” They would either let everything go and do as little moderation as possible (so as to avoid the requisite knowledge), or they’d become very aggressive in limiting and removing content to avoid liability (even though this wouldn’t work and they’d still get hit with tons of lawsuits).

We’ve been here before. When government said the American public owned the airwaves, so television broadcasting would be regulated, they put in place regulations that supported the common good. The Internet affects everyone, and our public square is now virtual, so we must put in place measures to ensure that our digital age public dialogue includes everyone. In the television era, the fairness doctrine laid that groundwork. A new lens needs to be developed for the Internet age.

Except, no, that’s just factually wrong. The only reason that the government was able to put regulations on broadcast television was because the government controlled the public spectrum which they licensed to the broadcasters. The Supreme Court made clear in Red Lion that without that, they could not hinder the speech of media companies. So, the idea that you can just apply similar regulations to the internet is just fundamentally clueless. The internet is not publicly owned spectrum licensed to anyone.

While Section 230 perpetuates an illusion that today’s social media companies are common carriers like the phone companies, they are not. Unlike Ma Bell, they curate the content they transmit to users

Again, it appears the Professor is wholly unaware of Section 230 and how it works. The authors of Section 230 made it clear over and over again that they wrote 230 to be the opposite of common carriage. No one who supports Section 230 thinks it makes platforms into common carriers, because it does not. The entire point was to free up companies to choose how to curate content, so as to allow those companies to craft the kinds of communities they wanted. The only people claiming the “illusion” of common carrierness are those who are trying to destroy Section 230.

So there is no “illusion” here, unless you don’t understand what you’re talking about.

The repeal of Section 230 would also be a step in the right direction in addressing what are presently severe power imbalances between government and corporate power in shaping democratic life. It would also shine a spotlight on a globally disturbing fact: the overwhelming majority of global social media is currently in the hands of one man (Mark Zuckerberg), while nearly half the people on earth have a Meta account. How can that be a good thing under any scenario for the free exchange of ideas?

I mean, we agree that it’s bad that Meta is so big. But if you remove Section 230 (as Meta itself has advocated for!), you help Meta get bigger and harm the competition. Meta has a building full of lawyers. They can handle the onslaught of lawsuits that this would bring (as Stanger herself gleefully cheers on). It’s everyone else, the smaller sites, such as the decentralized players (not DAOs) who would get destroyed.

Mastodon admins aren’t going to be able to afford to pay to defend the lawsuits. Bluesky doesn’t have a building full of lawyers. The big winner here would be Meta. The cost to Meta of removing Section 230 is minimal. The cost to everyone trying to eat away at Meta’s marketshare would be massive.

The new speech is governed by the allocation of virality in our virtual public square. People cannot simply speak for themselves, for there is always a mysterious algorithm in the room that has independently set the volume of the speaker’s voice. If one is to be heard, one must speak in part to one’s human audience, in part to the algorithm. It is as if the constitution had required citizens to speak through actors or lawyers who answered to the Dutch East India Company, or some other large remote entity. What power should these intermediaries have? When the very logic of speech must shift in order for people to be heard, is that still free speech? This was not a problem foreseen in the law.

I mean, this is just ahistorical nonsense. Historically, most people had no way to get their message out at all. You could talk to your friends, family, co-workers, and neighbors, and that was about it. If you wanted to reach beyond that small group, you required some large gatekeeper (a publisher, a TV or radio producer, a newspaper) to grant you access, which they refused for the vast majority of people.

The internet flipped all that on its head, allowing anyone to effectively speak to anyone. The reason we have algorithms is not “Section 230” and the algorithms aren’t “setting the volume,” they came in to deal with the simple fact that there’s just too much information, and it was flooding the zone. People wanted to find information that was more relevant to them, and with the amount of content available online, the only way to manage that was with some sort of algorithm.

But, again, the rise of algorithms is not a Section 230 issue, even though Stanger seems to think it is.

Getting rid of the liability shield for all countries operating in the United States would have largely unacknowledged positive implications for national security, as well as the profit margins for US-headquartered companies. Foreign electoral interference is not in the interests of democratic stability, precisely because our enemies benefit from dividing us rather than uniting us. All foreign in origin content could therefore be policed at a higher standard, without violating the first amendment or the privacy rights of US citizens. As the National Security Agency likes to emphasize, the fourth amendment does not apply to foreigners and that has been a driver of surveillance protocols since the birth of the Internet. It is probable that the Supreme Court’s developing first amendment jurisprudence for social media in a post-230 world would embrace the same distinction. At a stroke, the digital fentanyl that TikTok represents in its American version could easily be shut down, and we could through a process of public deliberation leading to new statutory law collectively insist on the same optimization targets for well-being, test scores, and time on the platform that Chinese citizens currently enjoy in the Chinese version of TikTok (Douyin)

Again, this is a word salad that is mostly meaningless.

First of all, none of this has anything to do with Section 230, but rather the First Amendment. And it’s already been noted, clearly, that the First Amendment protects American users of foreign apps.

No one is saying “you can’t ban TikTok because of 230,” they’re saying “you can’t ban TikTok because of the First Amendment.” The Supreme Court isn’t going to magically reinvent long-standing First Amendment doctrine because 230 is repealed. This is nonsense.

And, we were just discussing what utter nonsense it is to claim that TikTok is “digital fentanyl” so I won’t even bother repeating that.

There might also be financial and innovation advantages for American companies with this simple legislative act. Any commercial losses for American companies from additional content moderation burdens would be offset by reputational gains and a rule imposed from without on what constitutes constitutionally acceptable content. Foreign electoral interference through misinformation and manipulation could be shut down as subversive activity directed at the Constitution of the United States, not a particular political party.

This part is particularly frustrating. This is why internet companies already moderate. Stanger’s piece repeatedly seems to complain both about too little moderation (electoral interference! Alex Jones!) and too much moderation (algorithms! dastardly Zuck deciding what I can read!).

She doesn’t even seem to realize that her argument is self-contradictory.

But, here, the supposed “financial and innovation advantages” from American companies being able to get “reputational gains” by stopping “misinformation” already exist. And they exist only because of Section 230, the very law Professor Stanger says we need to remove. Repeal it, and those advantages go away.

This whole thing makes me want to bang my head on my desk repeatedly.

Companies moderate today to (1) make users’ experience better and (2) to make advertisers happier that they’re not facing brand risk from having ads appear next to awful content. The companies that do better already achieve that “reputational benefit,” and they can do that kind of moderation because they know Section 230 prevents costly, wasteful, vexatious litigation from getting too far.

If you remove Section 230, that goes away. As discussed above, companies then are much more limited in the kinds of moderation they can do, which means users have a worse experience and advertisers have a worse experience, leading to reputational harm.

Today, companies already try to remove or diminish the power of electoral interference. That’s a giant part of trust & safety teams’ efforts. But they can really only do it safely because of 230.


  • ✇Techdirt
  • SCOTUS Needs To Take Up The Texas Age Verification LawsuitMike Masnick
    I think we could witness one of the most important First Amendment legal showdowns ever. The U.S. Supreme Court is being asked to rule on the constitutionality of mandatory age verification for porn websites. If the high court takes up the case, it would queue up a landmark debate pertaining to the First Amendment and privacy rights of millions of people. Free Speech Coalition and the parent companies of the largest adult entertainment websites on the web filed suit in the U.S. District Court fo
     

SCOTUS Needs To Take Up The Texas Age Verification Lawsuit

19. Duben 2024 v 21:52

I think we could witness one of the most important First Amendment legal showdowns ever.

The U.S. Supreme Court is being asked to rule on the constitutionality of mandatory age verification for porn websites. If the high court takes up the case, it would queue up a landmark debate pertaining to the First Amendment and privacy rights of millions of people.

Free Speech Coalition and the parent companies of the largest adult entertainment websites on the web filed suit in the U.S. District Court for the Western District of Texas with the intention to block House Bill (HB) 1181.

HB 1181 requires mandatory age verification for porn websites with users from Texas IP addresses. It also requires pseudoscientific health warnings to be posted on adult websites. Counsel representing the coalition and the porn companies argued that it violated the First Amendment rights of consumers and owners of the websites. This prompted the federal district court to initially enjoin the state of Texas from enforcing the law because its text appeared to be unconstitutional.

Acting Texas Attorney General Angela Colmenero appealed the injunction to the Fifth Circuit Court of Appeals. After a clear demonstration of classic Fifth Circuit tap dancing and the return of Ken Paxton to the helm of the Attorney General’s office, Texas was granted permission to enforce the age verification requirements outlined in the law. Luckily, the circuit judges properly applied the Zauderer standard, denying the requirement to post the bogus health warnings.

Soon after this, Paxton announced lawsuits against the parent companies of Pornhub, xHamster, and Stripchat for violations of HB 1181. The penalties, per the law, total millions of dollars in damages. After those lawsuits were announced and filed in circuit courts in Travis County, counsel for the plaintiffs sought to put enforcement on hold while they petitioned the high court to take up the case. Justice Samuel Alito, the circuit justice for the Fifth Circuit, has yet to indicate that the case will be taken up by the Supreme Court. There is no reason why it shouldn’t be, given how important this case is going forward and how the same issue is showing up in so many other states.

The case, Free Speech Coalition et al. v. Paxton, is so important that the national affiliate of the American Civil Liberties Union announced it is aiding the plaintiffs and their current counsel, a team from the big law firm Quinn Emanuel. They will support the petition for a writ of certiorari, potential oral arguments, and more, with the goal of rendering House Bill 1181 and all age verification laws unconstitutional pipedreams.

Plaintiffs accurately argue that this is settled law, referring to the high court’s landmark decision in Reno v. American Civil Liberties Union. This decision found that segregating the content of the internet by age violates the rights of not only adults but also minors. The vast majority of age verification laws as they are structured now do just that.

While the Supreme Court provided for a less restrictive means to filter out minors from viewing age-restricted materials and potentially facing some level of harm, the vehicles of enforcement and some of the options touted in these bills for controlling minors’ web usage are, to the plaintiffs and civil liberties organizations, a violation of the First Amendment. ACLU and Quinn Emanuel attorneys for the plaintiffs present these arguments in their petition for writ of certiorari, which was filed in April 2024. Now, we just need the Supreme Court to take this seriously and not let the Fifth Circuit, the circuit that upheld a ban on drag shows, dictate law for the nation.

Michael McGrady covers the legal and tech side of the online porn business, among other topics.

  • ✇Techdirt
  • Congressional Testimony On Section 230 Was So Wrong That It Should Be Struck From The RecordMike Masnick
    A few months ago, we wondered if Wired had fired its entire fact-checking staff because it published what appeared to be a facts-optional article co-authored by professional consistently wrong Jaron Lanier and an academic I’d not come across before, Allison Stanger. The article suggested that getting rid of Section 230 “could save everything.” Yet the article was so far off-base that it was in the “not even wrong” category of wrongness. I’m not going to review all the reasons it was wrong. You c
     

Congressional Testimony On Section 230 Was So Wrong That It Should Be Struck From The Record

19. Duben 2024 v 18:26

A few months ago, we wondered if Wired had fired its entire fact-checking staff because it published what appeared to be a facts-optional article co-authored by professional consistently wrong Jaron Lanier and an academic I’d not come across before, Allison Stanger. The article suggested that getting rid of Section 230 “could save everything.” Yet the article was so far off-base that it was in the “not even wrong” category of wrongness.

I’m not going to review all the reasons it was wrong. You can go back to my original article for that, though I will note that the argument seemed to suggest that getting rid of Section 230 would both lead to better content moderation and, at the same time, only moderation based on the First Amendment. Both of those points are obviously wrong, but the latter one is incoherent.

Given his long track record of wrongness, I had assumed that much of the article likely came from Lanier. However, I’m going to reassess that in light of Stanger’s recent performance before the House Energy & Commerce Committee. Last week, there was this weird hearing about Section 230, in which the Committee invited three academic critics of Section 230, and not a single person who could counter their arguments and falsehoods. We talked about this hearing a bit in this week’s podcast, with Rebecca MacKinnon from the Wikimedia Foundation.

Stanger was one of the three witnesses. The other two, Mary Anne Franks and Mary Graw Leary, presented some misleading and confused nonsense about Section 230. However, the misleading and confused nonsense about Section 230 at least fits into the normal framework of the debate around Section 230. There is confusion about how (c)(1) and (c)(2) interact, the purpose of Section 230, and (especially) some confusion about CSAM and Section 230 and an apparent unawareness that federal criminal behavior is exempted from Section 230.

But, let’s leave that aside. Because Stanger’s submission was so far off the mark that whoever invited her should be embarrassed. I’ve seen some people testify before Congress without knowing what they’re talking about, but I cannot recall seeing testimony this completely, bafflingly wrong before. Her submitted testimony is wrong in all the ways that the Wired article was wrong and more. There are just blatant factual errors throughout it.

It is impossible to cover all of the nonsense, so we’re just going to pick some gems.

Without Section 230, existing large social media companies would have to adapt. Decentralized Autonomous Organizations, (DAOs) such as BlueSky and Mastodon, would become more attractive. The emergent DAO social media landscape should serve to put further brakes on virality, allowing a more regional social media ecosystem to emerge, thereby creating new demand for local media. In an ideal world, networks of DAOs would comprise a new fediverse (a collection of social networking servers which can communicate with each other, while remaining independently controlled), where users would have greater choice and control over the communities of which they are a part.

So, um. That’s not what DAOs are, professor. You seem to be confusing decentralized social media with decentralized autonomous organizations, which are a wholly different thing. This is kind of like saying “social security benefits” when you mean “social media influencers” because both begin with “social.” They’re not the same thing.

A decentralized social media site is what it says on the tin. It’s a type of social media that isn’t wholly controlled by a single company. Different bits of it can be controlled by others, whether its users or alternative third-party providers. A DAO is an operation, often using mechanisms like cryptocurrency and tokens, to enable a kind of democratic voting, or (possibly) a set of smart contracts, that determine how the loosely defined organization is run. They are not the same.

In theory, a decentralized social media site could be run by a DAO, but I don’t know of any that currently are.
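
To make the distinction concrete, here is a minimal, purely illustrative Python sketch, with every name invented for the example and not drawn from any real project or protocol. The first class mimics what decentralized social media does: independently run servers hosting their own users’ posts and passing them along to peer servers. The second mimics what a DAO does: token-weighted voting on proposals that govern an organization. One is a way to host and distribute speech; the other is a way to run an organization.

```python
# Illustrative sketch only: neither class models any real protocol or project.
from dataclasses import dataclass, field


@dataclass
class FederatedServer:
    """Decentralized social media: each server is independently run,
    hosts its own users' posts, and can exchange them with peers."""
    name: str
    posts: list = field(default_factory=list)
    peers: list = field(default_factory=list)

    def publish(self, author: str, text: str) -> None:
        post = {"author": f"{author}@{self.name}", "text": text}
        self.posts.append(post)
        for peer in self.peers:  # pass the post along to federated peers
            peer.posts.append(post)


@dataclass
class ToyDAO:
    """Decentralized autonomous organization: token holders vote on
    proposals that govern how the organization itself is run."""
    token_balances: dict
    proposals: dict = field(default_factory=dict)

    def vote(self, proposal: str, member: str, support: bool) -> None:
        weight = self.token_balances.get(member, 0)  # vote weight = tokens held
        tally = self.proposals.setdefault(proposal, {"yes": 0, "no": 0})
        tally["yes" if support else "no"] += weight

    def passed(self, proposal: str) -> bool:
        tally = self.proposals.get(proposal, {"yes": 0, "no": 0})
        return tally["yes"] > tally["no"]


if __name__ == "__main__":
    # Two independently run servers that federate posts between them.
    alpha = FederatedServer("alpha.example")
    beta = FederatedServer("beta.example")
    alpha.peers.append(beta)
    alpha.publish("alice", "Hello from a federated server")
    print(len(beta.posts))  # 1: the post propagated to the peer server

    # A toy DAO deciding a governance question by token-weighted vote.
    dao = ToyDAO(token_balances={"alice": 100, "bob": 40})
    dao.vote("adopt-new-moderation-policy", "alice", True)
    dao.vote("adopt-new-moderation-policy", "bob", False)
    print(dao.passed("adopt-new-moderation-policy"))  # True
```

Nothing about the first requires the second: a federated server can be run by a lone admin, a nonprofit, or, in theory, a DAO, which is why conflating the two terms muddles the testimony’s argument.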

Also, um, decentralized social media can only really exist because of Section 230. “Without Section 230,” you wouldn’t have Bluesky or Mastodon, because they would face ruinous litigation for hosting content that people would sue over. So, no, you would not have either more decentralized social media (which I think is what you meant) or DAOs (which are wholly unrelated). You’d have a lot less, because hosting third-party speech would come with way more liability risk.

Also, there’s nothing inherent to decentralized social media that means you’d “put the brakes on virality.” Mastodon has developed to date in a manner designed to tamp down virality, but Bluesky hasn’t. Nor have other decentralized social media offerings, many of which hope to serve a global conversation where virality is a part of it. And that wouldn’t really change with or without Section 230. Mastodon made that decision because of the types of communities it wanted to foster. And, indeed, its ability to do that is, in part, due to intermediary liability protections like Section 230, which enable the kind of small, more focused community moderation Mastodon embraces already.

It’s really not clear to me that Professor Stanger even knows what Section 230 does.

Non-profits like Wikipedia are concerned that their enterprise could be shut down through gratuitous defamation lawsuits that would bleed them dry until they ceased to exist (such as what happened with Gawker). I am not convinced this is a danger for Wikipedia, since their editing is done by humans who have first amendment rights, and their product is not fodder for virality….

Again, wut? The fact that their editing is “done by humans” has literally no impact on anything here. Why even mention that? Humans get sued for defamation all the time. And, if they’re more likely to get sued for defamation, they’re less likely to even want to edit at all.

And people get mad about their Wikipedia articles all the time, and sometimes they sue over them. Section 230 gets those lawsuits thrown out. Without it, those lawsuits would last longer and be more expensive.

Again, it’s not at all clear if Prof. Stanger even knows what Section 230 is or how it works.

The Facebook Files show that Meta knew that its engagement algorithms had adverse effects on the mental health of teenage girls, yet it has done nothing notable to combat those unintended consequences. Instead, Meta’s lawyers have invoked Section 230 in lawsuits to defend itself against efforts to hold it liable for serious harms

Again, this is just wrong. What the crux of the Facebook Files showed was that Meta was, in fact, doing research to learn about where its algorithms might cause harm in order to try to minimize that harm. However, because of some bad reporting, it now means that companies will be less likely to even do that research, because people like Professor Stanger will misrepresent it, claiming that they did nothing to try to limit the harms. This is just outright false information.

Also, the cases where Meta has invoked Section 230 would be unrelated to the issue being discussed here because 230 is about not being held liable for user content.

The online world brought to life by Section 230 now dehumanizes us by highlighting our own insignificance. Social media and cancel culture make us feel small and vulnerable, where human beings crave feeling large and living lives of meaning, which cannot blossom without a felt sense of personal agency that our laws and institutions are designed to protect. While book publishers today celebrate the creative contributions of their authors, for-profit Internet platforms do not.

I honestly have no idea what’s being said here. “Dehumanizes us by highlighting our own insignificance?” What are you even talking about? People were a lot more “insignificant” pre-internet, when they had no way to speak out. And what does “cancel culture” have to do with literally any of this?

Without Section 230, companies would be liable for the content on their platforms. This would result in an explosion of lawsuits and greater caution in such content moderation, although companies would have to balance such with first amendment rights. Think of all the human jobs that could be generated!

Full employment for tort lawyers! I mean, this is just a modern version of Bastiat’s broken window fallacy. Think of all the economic activity if we just break all the windows in our village!

Again and again, it becomes clear that Stanger has no clue how any of this works. She does not understand Section 230. She does not understand the internet. She does not understand the First Amendment. And she does not understand content moderation. It’s a hell of a thing, considering she is testifying about Section 230 and its impact on social media and the First Amendment.

At a stroke, content moderation for companies would be a vastly simpler proposition. They need only uphold the First Amendment, and the Courts would develop the jurisprudence to help them do that, rather than to put the onus of moderation entirely on companies.

That is… not at all how it would work. They don’t just need to “uphold the First Amendment” (which is not a thing that companies can even do). The First Amendment’s only role is to restrict the government, not companies, from infringing on a person’s ability to express themselves.

Instead, as has been detailed repeatedly, companies would face the so-called “moderator’s dilemma.” Because the First Amendment requires distributors to have actual knowledge of content violating the law to be liable, a world without Section 230 would incentivize one of two things, neither of which is “upholding the First Amendment.” They would either let everything go and do as little moderation as possible (so as to avoid the requisite knowledge), or they’d become very aggressive in limiting and removing content to avoid liability (even though this wouldn’t work and they’d still get hit with tons of lawsuits).

We’ve been here before. When government said the American public owned the airwaves, so television broadcasting would be regulated, they put in place regulations that supported the common good. The Internet affects everyone, and our public square is now virtual, so we must put in place measures to ensure that our digital age public dialogue includes everyone. In the television era, the fairness doctrine laid that groundwork. A new lens needs to be developed for the Internet age.

Except, no, that’s just factually wrong. The only reason the government was able to put regulations on broadcast television was that the government controlled the public spectrum it licensed to the broadcasters. The Supreme Court made clear in Red Lion that without that licensed spectrum, the government could not hinder the speech of media companies. So, the idea that you can just apply similar regulations to the internet is just fundamentally clueless. The internet is not publicly owned spectrum licensed to anyone.

While Section 230 perpetuates an illusion that today’s social media companies are common carriers like the phone companies, they are not. Unlike Ma Bell, they curate the content they transmit to users

Again, it appears the Professor is wholly unaware of Section 230 and how it works. The authors of Section 230 made it clear over and over again that they wrote 230 to be the opposite of common carriers. No one who supports Section 230 thinks it makes platforms into common carriers, because it does not. The entire point was to free up companies to choose how to curate content, so as to allow those companies to craft the kinds of communities they wanted. The only people claiming the “illusion” of common carrierness are those who are trying to destroy Section 230.

So there is no “illusion” here, unless you don’t understand what you’re talking about.

The repeal of Section 230 would also be a step in the right direction in addressing what are presently severe power imbalances between government and corporate power in shaping democratic life. It would also shine a spotlight on a globally disturbing fact: the overwhelming majority of global social media is currently in the hands of one man (Mark Zuckerberg), while nearly half the people on earth have a Meta account. How can that be a good thing under any scenario for the free exchange of ideas?

I mean, we agree that it’s bad that Meta is so big. But if you remove Section 230 (as Meta itself has advocated for!), you help Meta get bigger and harm the competition. Meta has a building full of lawyers. They can handle the onslaught of lawsuits that this would bring (as Stanger herself gleefully cheers on). It’s everyone else, the smaller sites, such as the decentralized players (not DAOs) who would get destroyed.

Mastodon admins aren’t going to be able to afford to pay to defend the lawsuits. Bluesky doesn’t have a building full of lawyers. The big winner here would be Meta. The cost to Meta of removing Section 230 is minimal. The cost to everyone trying to eat away at Meta’s marketshare would be massive.

The new speech is governed by the allocation of virality in our virtual public square. People cannot simply speak for themselves, for there is always a mysterious algorithm in the room that has independently set the volume of the speaker’s voice. If one is to be heard, one must speak in part to one’s human audience, in part to the algorithm. It is as if the constitution had required citizens to speak through actors or lawyers who answered to the Dutch East India Company, or some other large remote entity. What power should these intermediaries have? When the very logic of speech must shift in order for people to be heard, is that still free speech? This was not a problem foreseen in the law.

I mean, this is just ahistorical nonsense. Historically, most people had no way to get their message out at all. You could talk to your friends, family, co-workers, and neighbors, and that was about it. If you wanted to reach beyond that small group, you required some large gatekeeper (a publisher, a TV or radio producer, a newspaper) to grant you access, which they refused for the vast majority of people.

The internet flipped all that on its head, allowing anyone to effectively speak to anyone. The reason we have algorithms is not “Section 230,” and the algorithms aren’t “setting the volume.” They came in to deal with the simple fact that there’s just too much information, and it was flooding the zone. People wanted to find information that was more relevant to them, and with the amount of content available online, the only way to manage that was with some sort of algorithm.
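
To make concrete what such an algorithm is actually doing, here is a deliberately simplified Python sketch of feed ranking: score each post on a few signals, then show the highest-scoring ones first. Every signal name and weight below is invented for illustration; this is not a description of any real platform’s ranking system.

```python
# Toy feed-ranking example: invented signals and weights, for illustration only.
posts = [
    {"id": 1, "likes": 12, "from_followed_account": True, "reported_as_spam": False},
    {"id": 2, "likes": 900, "from_followed_account": False, "reported_as_spam": True},
    {"id": 3, "likes": 45, "from_followed_account": True, "reported_as_spam": False},
]

def score(post: dict) -> float:
    s = post["likes"] * 0.1                             # mild boost for engagement
    s += 5.0 if post["from_followed_account"] else 0.0  # relevance to this reader
    s -= 1000.0 if post["reported_as_spam"] else 0.0    # heavily demote likely junk
    return s

ranked = sorted(posts, key=score, reverse=True)
print([p["id"] for p in ranked])  # [3, 1, 2]: the reported post sinks to the bottom
```

Which signals to use and how to weight them is an editorial judgment by the platform about what its readers will find relevant, made necessary by the sheer volume of posts.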

But, again, the rise of algorithms is not a Section 230 issue, even though Stanger seems to think it is.

Getting rid of the liability shield for all countries operating in the United States would have largely unacknowledged positive implications for national security, as well as the profit margins for US-headquartered companies. Foreign electoral interference is not in the interests of democratic stability, precisely because our enemies benefit from dividing us rather than uniting us. All foreign in origin content could therefore be policed at a higher standard, without violating the first amendment or the privacy rights of US citizens. As the National Security Agency likes to emphasize, the fourth amendment does not apply to foreigners and that has been a driver of surveillance protocols since the birth of the Internet. It is probable that the Supreme Court’s developing first amendment jurisprudence for social media in a post-230 world would embrace the same distinction. At a stroke, the digital fentanyl that TikTok represents in its American version could easily be shut down, and we could through a process of public deliberation leading to new statutory law collectively insist on the same optimization targets for well-being, test scores, and time on the platform that Chinese citizens currently enjoy in the Chinese version of TikTok (Douyin)

Again, this is a word salad that is mostly meaningless.

First of all, none of this has anything to do with Section 230, but rather the First Amendment. And it’s already been noted, clearly, that the First Amendment protects American users of foreign apps.

No one is saying “you can’t ban TikTok because of 230,” they’re saying “you can’t ban TikTok because of the First Amendment.” The Supreme Court isn’t going to magically reinvent long-standing First Amendment doctrine because 230 is repealed. This is nonsense.

And, we were just discussing what utter nonsense it is to claim that TikTok is “digital fentanyl” so I won’t even bother repeating that.

There might also be financial and innovation advantages for American companies with this simple legislative act. Any commercial losses for American companies from additional content moderation burdens would be offset by reputational gains and a rule imposed from without on what constitutes constitutionally acceptable content. Foreign electoral interference through misinformation and manipulation could be shut down as subversive activity directed at the Constitution of the United States, not a particular political party.

This part is particularly frustrating. This is why internet companies already moderate. Stanger’s piece repeatedly seems to complain both about too little moderation (electoral interference! Alex Jones!) and too much moderation (algorithms! dastardly Zuck deciding what I can read!).

She doesn’t even seem to realize that her argument is self-contradictory.

But, here, the supposed “financial and innovation advantages” from American companies being able to get “reputational gains” by stopping “misinformation” already exist. And they exist only because of Section 230, which Professor Stanger says we need to remove in order to get the very thing it enables, and which would be taken away if it were repealed.

This whole thing makes me want to bang my head on my desk repeatedly.

Companies moderate today to (1) make users’ experience better and (2) to make advertisers happier that they’re not facing brand risk from having ads appear next to awful content. The companies that do better already achieve that “reputational benefit,” and they can do that kind of moderation because they know Section 230 prevents costly, wasteful, vexatious litigation from getting too far.

If you remove Section 230, that goes away. As discussed above, companies then are much more limited in the kinds of moderation they can do, which means users have a worse experience and advertisers have a worse experience, leading to reputational harm.

Today, companies already try to remove or diminish the power of electoral interference. That’s a giant part of trust & safety teams’ efforts. But they can really only do it safely because of 230.

The attention-grooming model fostered by Section 230 leads to stupendous quantities of poor-quality data. While an AI model can tolerate a significant amount of poor-quality data, there is a limit. It is unrealistic to imagine a society mediated by mostly terrible communication where that same society enjoys unmolested, high-quality AI. A society must seek quality as a whole, as a shared cultural value, in order to maximize the benefits of AI. Now is the best time for the tech business to mature and develop business models based on quality.

I’ve read this paragraph multiple times, and I still don’t know what it’s saying. Section 230 does not lead to an “attention-grooming model.” That’s just how society works. And, then, when she says society must seek quality as a whole, given how many people are online, the only way to do that is with algorithms trying to make some sort of call on what is, and what is not, quality.

That’s how this works.

Does she imagine that without Section 230, algorithms will go away, but good quality content will magically rise up? Because that’s not how any of this actually works.

Again, there’s much more in her written testimony, and none of it makes any sense at all.

Her spoken testimony was just as bad. Rep. Bob Latta asked her about the national security claims (some of which were quoted above) and we got this word salad, none of which has anything to do with Section 230:

I think it’s important to realize that our internet is precisely unique because it’s so open and that makes it uniquely vulnerable to all sorts of cyber attacks. Just this week, we saw an extraordinarily complicated plot that is most likely done by China, Russia or North Korea that could have blown up the internet as we know it. If you want to look up XZ Utils, Google that and you’ll find all kinds of details. They’re still sorting out what the intention was. It’s extraordinarily sophisticated though, so I think that the idea that we have a Chinese company where data on American children is being stored and potentially utilized in China, can be used to influence our children. It can be used in any number of ways no matter what they tell you. So I very much support and applaud the legislation to repeal, not to repeal, but to end TikToks operations in the United States.

The national security implications are extraordinary. Where the data is stored is so important and how it can be used to manipulate and influence us is so important. And I think the next frontier that I’ll conclude with this, for warfare, is in cyberspace. It’s where weak countries have huge advantages. They can pour resources into hackers who could really blow up our infrastructure, our hospitals, our universities. They’re even trying to get, as you know, into the House. This House right here. So I think repealing Section 230 is connected to addressing a host of potential harms

Nothing mentioned in there — from supply chain attacks like xz utils, to a potential TikTok ban, to hackers breaking into hospitals — has anything whatsoever to do with Section 230. She just throws it in at the end as if they’re connected.

She also claimed that Eric Schmidt has come out in favor of “repealing Section 230,” which was news to me. It also appears to be absolutely false. I went and looked, and the only thing I can find is a Digiday article which claims he called for reforms (not a repeal). The article never actually quotes him saying anything related to Section 230 at all, so it’s unclear what (if anything) he actually said. Literally the only quotes from Schmidt are old man stuff about how the kids these days just need to learn how to put down their phones, and then something weird about the fairness doctrine. Not 230.

Later, in the hearing, she was asked about the impact on smaller companies (some of which I mentioned above) and again demonstrates a near total ignorance of how this all works:

There is some concern, it’s sometimes expressed from small businesses that they are going to be the subject of frivolous lawsuits, defamation lawsuits, and they can be sued out of business even though they’ve defamed no one. I’m less concerned about that because if we were to repeal section (c)(1) of Section 230 of those 26 words, I think the First Amendment would govern and we would develop the jurisprudence to deal with small business in a more refined way. I think if anything, small businesses are in a better position to control and oversee what’s on their platforms than these monolithic large companies we have today. So with a bit of caution, I think that could be addressed.

The First Amendment always governs. But Section 230 is the “more refined way” that we’ve developed to help protect small businesses. The main function of Section 230 is to get cases that would be long and costly to defend under the First Amendment tossed out much earlier, at the motion to dismiss stage. That is literally Section 230’s main purpose.

If you had to fight it out under the First Amendment, you’re talking about hundreds of thousands of dollars and a much longer case. And that cost is going to lead companies to (1) refuse to host lots of protected content, because it’s not worth the hassle, and (2) be much more open to pulling down any content that anyone complains about.

This is not speculative. There have been studies on this. Weaker intermediary laws always lead to massive overblocking. If Stanger had done her research, or even understood any of this, she would know this.

So why is she the one testifying before Congress?

I’ll just conclude with this banger, which was her final statement to Congress:

I just want to maybe take you back to the first part of your question to explain that, which I thought was a good one, which is that we have a long history of First Amendment jurisprudence in this country that in effect has been stopped by Section 230. In other words, if you review, if you remove (c)(1), that First Amendment jurisprudence will develop to determine when it is crime fire in a crowded theater, whether there’s defamation, whether there’s libel. We believe in free speech in this country, but even the First Amendment has some limits put on it and those could apply to the platforms. We have a strange situation right now if we take that issue of fentanyl that we were discussing earlier, what we have right now is essentially a system where we can go after the users, we can go after the dealers, but we can’t go after the mules. And I think that’s very problematic. We should hold the mules liable. They’re part of the system.

Yeah. So. She actually went to the whole fire in a crowded theater thing. This is the dead-on giveaway that the person speaking has no clue about the First Amendment. That’s dicta from a case decided over 100 years ago, one that is no longer considered good law, and hasn’t been in decades. Even worse, that dicta came in a case about jailing war protestors.

She also trots out yet another of Ken “Popehat” White’s (an actual First Amendment expert) most annoying tropes about people opining on the First Amendment without understanding it: because the First Amendment has some limits, this new limit must be okay. That’s not how it works. As Ken and others have pointed out, the exceptions to the First Amendment are an established, known, and almost certainly closed set.

The Supreme Court has no interest in expanding that set. It refused to do so for animal crush videos, so it’s not going to magically do it for whatever awful speech you think it should limit.

Anyway, it was a shame that Congress chose to hold a hearing on Section 230 and only bring in witnesses who hate Section 230. Not a single witness who could explain why Section 230 is so important was brought in. But, even worse, they gave one of the three witness spots to someone who was spewing word-salad-level nonsense that didn’t make any sense at all, was often factually incorrect (in hilariously embarrassing ways), and seemed wholly unaware of how any relevant thing worked.

Do better, Congress.

  • ✇Techdirt
  • Ridiculous: Journalist Held In Contempt For Not Revealing SourcesMike Masnick
    Going way, way back, we’ve talked about the need for protection of journalistic sources, in particular the need for a federal journalism shield law. I can find stories going back about 15 years of us talking about it here on Techdirt. The issue might not come up that often, but that doesn’t make it any less important. On Thursday, a judge held former CBS journalist Catherine Herridge in contempt for refusing to reveal her sources regarding stories she wrote about scientist Yanping Chen. The rul
     

Ridiculous: Journalist Held In Contempt For Not Revealing Sources

2. Březen 2024 v 00:38

Going way, way back, we’ve talked about the need for protection of journalistic sources, in particular the need for a federal journalism shield law. I can find stories going back about 15 years of us talking about it here on Techdirt. The issue might not come up that often, but that doesn’t make it any less important.

On Thursday, a judge held former CBS journalist Catherine Herridge in contempt for refusing to reveal her sources regarding stories she wrote about scientist Yanping Chen.

The ruling, from U.S. District Court Judge Christopher R. Cooper, will be stayed for 30 days or until Herridge can appeal the ruling.

Cooper ruled that Herridge violated his Aug. 1 order demanding that Herridge reveal how she learned about a federal probe into Chen, who operated a graduate program in Virginia. Herridge, who was recently laid off from CBS News, wrote the stories in question when she worked for Fox News in 2017.

In his ruling, Judge Cooper claims that he’s at least somewhat reluctant about this result, but he still goes forward with it, arguing (I believe incorrectly) that he needs to balance the rights of Chen against Herridge’s First Amendment rights.

The Court does not reach this result lightly. It recognizes the paramount importance of a free press in our society and the critical role that confidential sources play in the work of investigative journalists like Herridge. Yet the Court also has its own role to play in upholding the law and safeguarding judicial authority. Applying binding precedent in this Circuit, the Court resolved that Chen’s need for the requested information to vindicate her rights under the Privacy Act overcame Herridge’s qualified First Amendment reporter’s privilege in this case. Herridge and many of her colleagues in the journalism community may disagree with that decision and prefer that a different balance be struck, but she is not permitted to flout a federal court’s order with impunity. Civil contempt is the proper and time-tested remedy to ensure that the Court’s order, and the law underpinning it, are not rendered meaningless.

But the First Amendment is not a balancing test. And if subpoenas or other attempts to reveal sources can be used in this manner, the harm to journalism will be vast. Journalism only works properly when journalists can legitimately promise confidentiality to sources. And that’s even more true for whistleblowers.

Admittedly, this case is a bit of a mess. It appears that the FBI falsely believed that Chen was a Chinese spy and investigated her, but let it go when they couldn’t support that claim. However, someone (likely in the FBI) leaked the info to Herridge, who reported on it. Chen sued the FBI, who won’t reveal who leaked the info. She’s now using lawful discovery to find out who leaked the info as part of the lawsuit. You can understand that Chen has been wronged in this situation, and it’s likely someone in the FBI who did so. And, in theory, there should be a remedy for that.

But, the problem is that this goes beyond just that situation and gets to the heart of what journalism is and why journalists need to be able to protect sources.

If a ruling like this stands, it means that no journalist can promise confidentiality, when a rush to court can force the journalist to cough up the details. And the end result is that fewer whistleblowers will be willing to speak to media, allowing more cover-ups and more corruption. The impact of a ruling like this is immensely problematic.

There’s a reason that, for years, we’ve argued for a federal shield law to make it clear that journalists should never be forced to give up sources. In the past, attempts to pass such laws have often broken down over debates concerning who they should apply to and how to identify “legitimate” journalists vs. those pretending to be journalists to avoid coughing up info.

But there is a simple solution to that: don’t have it protect “journalists,” have the law protect such information if it is obtained in the course of engaging in journalism. That is, if someone wants to make use of the shield law, they need to show that the contact and information obtained from the source was part of a legitimate effort to report a story to the public in some form, and they can present the steps they were taking to do so.

At the very least, the court recognizes that the contempt sanctions should be immediately stayed so that Herridge can appeal the decision:

The Court will stay that contempt sanction, however, to afford Herridge an opportunity to appeal this decision. Courts in this district and beyond have routinely stayed contempt sanctions to provide journalists ample room to litigate their assertions of privilege fully in the court of appeals before being coerced into compliance….

Hopefully, the appeals court recognizes how problematic this is. But, still, Congress can and should act to get a real shield law in place.

  • ✇Techdirt
  • We Can’t Have Serious Discussions About Section 230 If People Keep Misrepresenting ItMike Masnick
    At the Supreme Court’s oral arguments about Florida and Texas’ social media content moderation laws, there was a fair bit of talk about Section 230. As we noted at the time, a few of the Justices (namely Clarence Thomas and Neil Gorsuch) seemed confused about Section 230 and also about what role (if any) it had regarding these laws. The reality is that the only role for 230 is in preempting those laws. Section 230 has a preemption clause that basically says no state laws can go into effect that
     

We Can’t Have Serious Discussions About Section 230 If People Keep Misrepresenting It

1. Březen 2024 v 18:33

At the Supreme Court’s oral arguments about Florida and Texas’ social media content moderation laws, there was a fair bit of talk about Section 230. As we noted at the time, a few of the Justices (namely Clarence Thomas and Neil Gorsuch) seemed confused about Section 230 and also about what role (if any) it had regarding these laws.

The reality is that the only role for 230 is in preempting those laws. Section 230 has a preemption clause that basically says no state laws can go into effect that contradict Section 230 (in other words: no state laws that dictate how moderation must work). But that wasn’t what the discussion was about. The discussion was mostly about Thomas and Gorsuch’s confusion over 230 and thinking that the argument for Section 230 (that you’re not held liable for third party speech) contradicts the arguments laid out by NetChoice/CCIA in these cases, where they talked about the platforms’ own speech.

Gorsuch and Thomas were mixing up two separate things, as both the lawyers for the platforms and the US made clear. There are multiple kinds of speech at issue here. Section 230 does not hold platforms liable for third-party speech. But the issue with these laws was whether or not they constricted the platforms’ ability to express themselves in the way in which they moderated. That is, the editorial decisions that were being made expressing “this is what type of community we enable” are a form of public expression that the Florida & Texas laws seek to stifle.

That is separate from who is liable for individual speech.

But, as is the way of the world whenever it comes to discussions on Section 230, lots of people are going to get confused.

Today that person is Steven Brill, one of the founders of NewsGuard, a site that seeks to “rate” news organizations, including for their willingness to push misinformation. Brill publishes stories for NewsGuard on a Substack (!?!?) newsletter titled “Reality Check.” Unfortunately, Brill’s piece is chock full of misinformation regarding Section 230. Let’s do some correcting:

February marks the 28th anniversary of the passage of Section 230 of the Telecommunications Act of 1996. Today, Section 230 is notorious for giving social media platforms exemptions from all liability for pretty much anything their platforms post online. But in February of 1996, this three-paragraph section of a massive telecommunications bill aimed at modernizing regulations related to the nascent cable television and cellular phone industries was an afterthought. Not a word was written about it in mainstream news reports covering the passage of the overall bill.

The article originally claimed it was the 48th anniversary, though it was later corrected (without a correction notice — which is something Newsguard checks on when rating the trustworthiness of publications). That’s not that big a deal, and I don’t think there’s anything wrong with “stealth” corrections for typos and minor errors like that.

But this sentence is just flat out wrong: “Section 230 is notorious for giving social media platforms exemptions from all liability for pretty much anything their platforms post online.” It’s just not true. Section 230 gives limited exemptions from some forms of liability for third party content that they had no role in creating. That’s quite different than what Brill claims. His formulation suggests they’re not liable for anything they, themselves, put online. That’s false.

Section 230 is all about putting the liability on whichever party created the violation under the law. If a website is just hosting the content, but someone else created the content, the liability should go to the creator of the content, not the host.

Courts have had no problem finding liability on social media platforms for things they themselves post online. We have a string of such cases, covering Roommates, Amazon, HomeAway, InternetBrands, Snap and more. In every one of those cases (contrary to Brill’s claims), the courts have found that Section 230 does not protect things these platforms post online.

Brill gets a lot more wrong. He discusses the Prodigy and CompuServe cases and then says this (though he gives too much credit to CompuServe’s lack of moderation being the reason why the court ruled that way):

That’s why those who introduced Section 230 called it the “Protection for Good Samaritans” Act. However, nothing in Section 230 required screening for harmful content, only that those who did screen and, importantly, those who did not screen would be equally immune. And, as we now know, when social media replaced these dial-up services and opened its platforms to billions of people who did not have to pay to post anything, their executives and engineers became anything but good Samaritans. Instead of using the protection of Section 230 to exercise editorial discretion, they used it to be immune from liability when their algorithms deliberately steered people to inflammatory conspiracy theories, misinformation, state-sponsored disinformation, and other harmful content. As then-Federal Communications Commission Chairman Reed Hundt told me 25 years later, “We saw the internet as a way to break up the dominance of the big networks, newspapers, and magazines who we thought had the capacity to manipulate public opinion. We never dreamed that Section 230 would be a protection mechanism for a new group of manipulators — the social media companies with their algorithms. Those companies didn’t exist then.”

This is both wrong and misleading. First of all, nothing in Section 230 could “require” screening for harmful content, because both the First and Fourth Amendments would forbid that. So the complaint that it did not require such screening is not just misplaced, it’s silly.

We’ve gone over this multiple times. Pre-230, the understanding was that, under the First Amendment, liability of a distributor was dependent on whether or not the distributor had clear knowledge of the violative nature of the content. As the court in Smith v. California made clear, it would make no sense to hold someone liable without knowledge:

For if the bookseller is criminally liable without knowledge of the contents, and the ordinance fulfills its purpose, he will tend to restrict the books he sells to those he has inspected; and thus the State will have imposed a restriction upon the distribution of constitutionally protected as well as obscene literature.

That’s the First Amendment problem. But, we can take that a step further as well. If the state now requires scanning, you have a Fourth Amendment problem. Specifically, as soon as the government makes scanning mandatory, none of the content found during such scanning can ever be admissible in court, because no warrant was issued upon probable cause. As we again described a couple years ago:

The Fourth Amendment prohibits unreasonable searches and seizures by the government. Like the rest of the Bill of Rights, the Fourth Amendment doesn’t apply to private entities—except where the private entity gets treated like a government actor in certain circumstances. Here’s how that happens: The government may not make a private actor do a search the government could not lawfully do itself. (Otherwise, the Fourth Amendment wouldn’t mean much, because the government could just do an end-run around it by dragooning private citizens.) When a private entity conducts a search because the government wants it to, not primarily on its own initiative, then the otherwise-private entity becomes an agent of the government with respect to the search. (This is a simplistic summary of “government agent” jurisprudence; for details, see the Kosseff paper.) And government searches typically require a warrant to be reasonable. Without one, whatever evidence the search turns up can be suppressed in court under the so-called exclusionary rule because it was obtained unconstitutionally. If that evidence led to additional evidence, that’ll be excluded too, because it’s “the fruit of the poisonous tree.”

All of that seems kinda important?

Yet Brill rushes headlong on the assumption that 230 could have and should have required mandatory scanning for “harmful” content.

Also, most harmful content remains entirely protected by the First Amendment, making this idea even more ridiculous. There would be no liability for it.

Brill seems especially confused about how 230 and the First Amendment work together, suggesting (incorrectly) that 230 gives them some sort of extra editorial benefit that it does not convey:

With Section 230 in place, the platforms will not only have a First Amendment right to edit, but also have the right to do the kind of slipshod editing — or even the deliberate algorithmic promotion of harmful content — that has done so much to destabilize the world.  

Again, this is incorrect on multiple levels. The First Amendment gives them the right to edit. It also gives them the right to slipshod editing. And the right to promote harmful content via algorithms. That has nothing to do with Section 230.

The idea that “algorithmic promotion of harmful content… has done so much to destabilize the world” is a myth that has mostly been debunked. Some early algorithms weren’t great, but most have gotten much better over time. There’s little to no supporting evidence that “algorithms” have been particularly harmful over the long run.

Indeed, what we’ve seen is that while there were some bad algorithms a decade or so ago, pressure from the market has pushed the companies to improve. Users, advertisers, the media, have all pressured the companies to improve their algorithms and it seems to work.

Either way, those algorithms still have nothing to do with Section 230. The First Amendment lets companies use algorithms to recommend things, because algorithms are, themselves, expressions of opinion (“we think you would like this thing more than the next thing”) and nothing in there would trigger legal liability even if you dropped Section 230 altogether.

It’s a best (or worst) of both worlds, enjoyed by no other media companies.

This is simply false. Outright false. EVERY company that has a website that allows third-party content is protected by Section 230 for that third-party content. No company is protected for first-party content, online or off.

For example, last year, Fox News was held liable to the tune of $787 million for defaming Dominion Voting Systems by putting on guests meant to pander to its audience by claiming voter fraud in the 2020 election. The social media platforms’ algorithms performed the same audience-pleasing editing with the same or worse defamatory claims. But their executives and shareholders were protected by Section 230. 

Except… that’s not how any of this works, even without Section 230. Fox News was held liable because the content was produced by Fox News. All of the depositions and transcripts were of… Fox News executives and staff. Because they created the defamatory content.

The social media apps didn’t create the content.

This is the right outcome. The blame should always go to the party who violated the law in creating the content.

And Fox News is equally protected by Section 230 if there is defamation created by someone else but posted in a comment on a Fox News story (something that seems likely to happen frequently).

This whole column is misleading in the extreme, and simply wrong at other points. NewsGuard shouldn’t be publishing misinformation itself given that the company claims it’s promoting accuracy in news and pushing back against misinformation.

  • ✇Boing Boing
  • Justice Alito thinks bigots belong on juriesJason Weisberger
    Justice Samuel Alito believes the First Amendment gives bigots the right to serve on juries. Supreme something or other, Justice Alito protested the court's refusal to take up a case wherein people with an admitted bias against a litigant were denied seats on a jury. — Read the rest The post Justice Alito thinks bigots belong on juries appeared first on Boing Boing.
     

Justice Alito thinks bigots belong on juries

21. Únor 2024 v 17:09

Justice Samuel Alito believes the First Amendment gives bigots the right to serve on juries.

Supreme something or other, Justice Alito protested the court's refusal to take up a case wherein people with an admitted bias against a litigant were denied seats on a jury. — Read the rest


  • ✇Techdirt
  • In SCOTUS NetChoice Cases, Texas’s And Florida’s Worst Enemy Is (Checks Notes) Elon Musk.Mike Masnick
    Next week, the Supreme Court will hear oral argument in NetChoice v. Paxton and Moody v. NetChoice. The cases are about a pair of laws, enacted by Texas and Florida, that attempt to force large social media platforms such as YouTube, Instagram, and X to host large amounts of speech against their will. (Think neo-Nazi rants, anti-vax conspiracies, and depictions of self-harm.) The states’ effort to co-opt social media companies’ editorial policies blatantly violates the First Amendment. Since the
     

In SCOTUS NetChoice Cases, Texas’s And Florida’s Worst Enemy Is (Checks Notes) Elon Musk.

21. Únor 2024 v 20:57

Next week, the Supreme Court will hear oral argument in NetChoice v. Paxton and Moody v. NetChoice. The cases are about a pair of laws, enacted by Texas and Florida, that attempt to force large social media platforms such as YouTube, Instagram, and X to host large amounts of speech against their will. (Think neo-Nazi rants, anti-vax conspiracies, and depictions of self-harm.) The states’ effort to co-opt social media companies’ editorial policies blatantly violates the First Amendment.

Since the laws are constitutional trainwrecks, it’s no surprise that Texas’s and Florida’s legal theories are weak. They rely heavily on the notion that what social media companies do is not really editing — and thus is not expressive. Editors, Texas says in a brief, are “reputationally responsible” for the content they reproduce. And yet, the state continues, “no reasonable observer associates” social media companies with the speech they disseminate.

This claim is absurd on its face. Everyone holds social media companies “reputationally responsible” for their content moderation. Users do, because most of them don’t like using a product full of hate speech and harassment. Advertisers do, out of a concern for their “brand safety.” Journalists do. Civil rights groups do. Even the Republican politicians who enacted this pair of bad laws do — that’s why they yell about how “Big Tech oligarchs” engage in so-called censorship.

That the Texas and Florida GOP are openly contemptuous of the First Amendment, and incompetent to boot, isn’t exactly news. So let’s turn instead to some delicious ironies. 

Consider that the right’s favorite social media addict, robber baron, and troll Elon Musk has single-handedly destroyed Texas’s and Florida’s case.

After the two states’ laws were enacted, Elon Musk conducted something of a natural experiment in content moderation—one that has wrecked those laws’ underlying premise. Musk purchased Twitter, transformed it into X, and greatly reduced content moderation on the service. As tech reporter Alex Kantrowitz remarks, the new approach “privileges” extreme content from “edgelords.”

This, in turn, forces users to work harder to find quality content, and to tolerate being exposed to noxious content. But users don’t have to put up with this — and they haven’t. “Since Musk bought Twitter in October 2022,” Kantrowitz finds, “it’s lost approximately 13 percent of its app’s daily active users.” Clearly, users “associate” social-media companies with the speech they host!

It gets better. Last November, Media Matters announced that, searching X, it had found several iconic brands’ advertisements displayed next to neo-Nazi posts. Did Musk say, “Whatever, dudes, racist content being placed next to advertisements on our site doesn’t affect X’s reputation”? No. He had X sue Media Matters.

In its complaint, X asserts that it “invests heavily” in efforts to keep “fringe content” away from advertisers’ posts. The company also alleges that Media Matters gave the world a “false impression” about what content tends to get “pair[ed]” on the platform. These statements make sense only if people care — and X cares that people care — about how X arranges content on X.

X even states that Media Matters has tried to “tarnish X’s reputation by associating [X] with racist content.” It would be hard to admit more explicitly that social-media companies are “reputationally responsible” for, because they are “associated” with, the content they disseminate.

Consider also that Texas ran to Musk’s defense. Oblivious to how Musk’s vendetta hurts Texas’s case at the Supreme Court, Ken Paxton, the state’s attorney general, opened a fraud investigation against Media Matters (the basic truth of whose report Musk’s lawsuit does not dispute).

Consider finally how Texas’s last-ditch defense gets mowed down by the right’s favorite Supreme Court justice. According to Texas, social-media companies can scrub the reputational harm from spreading abhorrent content simply by “disavowing” that content. But none other than Justice Clarence Thomas has blown this argument apart. If, Thomas writes, a state could force speech on an entity merely by letting that entity “disassociate” from the speech with a “disclaimer,” that “would justify any law compelling speech.”

Only the government can “censor” speech. Texas and Florida are the true censors here, as they seek to restrict the expressive editorial judgment of social-media companies. That conduct is expressive. Just ask Elon Musk. And that expressiveness is fatal to Texas’s and Florida’s laws. Just ask Clarence Thomas. Texas’s and Florida’s social-media speech codes aren’t just unconstitutional, they can’t even be defended coherently.

Corbin Barthold is internet policy counsel at TechFreedom.

  • ✇Techdirt
  • Prominent MAGA Supporter Is Worried New KOSA Won’t Suppress Enough LGBTQ Speech, by Mike Masnick

Prominent MAGA Supporter Is Worried New KOSA Won’t Suppress Enough LGBTQ Speech

21 February 2024 at 18:27

By now you know that Senator Richard Blumenthal has released a new version of KOSA, the misleadingly named Kids Online Safety Act, that he pretends fixes all the problems. It doesn’t. It still represents a real threat to speech online, and in particular speech from LGBTQ users. This is why Blumenthal, a prominent Democrat, is putting out press releases including supportive quotes from infamous anti-LGBTQ groups like the Institute for Family Studies and the “American Principles Project” (one of the leading forces behind anti-trans bills across the US). Incredibly, it also has an approving quote from NCOSE, formerly known as “Morality in Media,” a bunch of prudish busybodies who believe all pornography should be banned, and who began life trying to get “salacious” magazines banned.

When a bill is getting supportive quotes from NCOSE, an organization whose entire formation story is based around an attempt to ban books, you know that bill is not good for speech.

Why is a Democratic Senator like Blumenthal lining up with such regressive, censorial, far right nonsense peddlers? Well, because he doesn’t give a shit that KOSA is going to do real harm to LGBTQ kids or violate the Constitution he swore an oath to protect: he just wants to get a headline or two claiming he’s protecting children, with not a single care about how much damage it will actually do.

Of course, as we noted, the latest bill does make it marginally more difficult to directly suppress LGBTQ content. It removed the ability of state Attorneys General to enforce one provision, the duty of care provision, though it still allows them to enforce other provisions and to sue social media companies that those state AGs feel aren’t complying with the law.

Still, at least some of the MAGA crowd feel that this move, making it marginally more difficult for state AGs to try to force LGBTQ content offline, means the bill is no longer worth supporting. Here’s Charlie Kirk, a leading MAGA nonsense peddler who founded and runs Turning Point USA, whining that the bill is no longer okay, since it won’t be used to silence LGBTQ folks as easily:

[Image: screenshot of Charlie Kirk’s post]

If you can’t read that, it’s Charlie saying:

The Senate is considering the Kids Online Safety Act (KOSA), a bill that looks to protect underage children from groomers, pornographers, and other predators online.

But the bill ran into trouble because LGBT groups were worried it would make it too easy for red state AGs to target predators who try to groom children into mutilating themselves or destroying themselves with hormones and puberty blockers.

So now, the bill has been overhauled to take away power from state AGs (since some of them might be conservatives who care about children) and instead give almost all power to the FTC, currently led by ultra-left ideologue Lina Khan. Sure enough, LGBT groups have dropped all their concerns.

We’ve seen this pattern before. What are the odds that this bill does zero to protect children but a lot to vaguely enhance the power of Washington bureaucrats to destroy whoever they want, for any reason?

If you can get past his ridiculous language, you can see that he’s (once again, like the Heritage Foundation and KOSA co-sponsor Senator Marsha Blackburn before him) admitting that the reason the MAGA crowd supports KOSA is to silence LGBTQ voices, which he falsely attacks as “groomers, pornographers, and other predators.”

He’s wrong that the bill can’t still be used for this, but he’s correct that the bill now gives tremendous power to whoever is in charge of the FTC, whether it’s Lina Khan… or whatever MAGA incel could be put in place if Trump wins.

Meanwhile, if Kirk is so concerned about child predators and groomers, it’s odd that you never see him call out the Catholic church. Or his former employee who was recently sentenced to years in jail for his “collection” of child sexual abuse videos. Or the organization that teamed up with Turning Point USA to sponsor an event, even though the CEO was convicted of “coercing and enticing” a minor. It’s quite interesting that Kirk is so quick to accuse LGBTQ folks of “grooming” and “predation,” when actual such people keep turning up around him, and he never says a word.

Either way, I’m curious whether watching groups like TPUSA freak out about this bill not being censorial enough of LGBTQ content will lead Republicans to get cold feet about supporting it.

At the very least, though, it’s a confirmation that Republican support for this bill is based on their strong belief that it will censor and suppress LGBTQ content.
