
Digital License Plates And The Deal That Never Had A Chance

Location and surveillance technology permeates the driving experience. Setting aside external technology like license plate readers, there is some form of internet-connected service or surveillance capability built into or on many cars, from GPS tracking to oil-change notices. This is already a dangerous situation for many drivers and passengers, and a bill in California requiring GPS-tracking in digital license plates would put us further down this troubling path. 

In 2022, EFF fought along with other privacy groups, domestic violence organizations, and LGBTQ+ rights organizations to prevent the use of GPS-enabled technology in digital license plates. A.B. 984, authored by State Assemblymember Lori Wilson and sponsored by digital license plate company Reviver, originally would have allowed for GPS trackers to be placed in the digital license plates of personal vehicles. As we have said many times, location data is very sensitive information, because where we go can also reveal things we’d rather keep private even from others in our household. Ultimately, advocates struck a deal with the author to prohibit location tracking in passenger cars, and this troubling flaw was removed. Governor Newsom signed A.B. 984 into law. 

Now, not even two years later, the state’s digital license plate vendor, Reviver, and Assemblymember Wilson have filed A.B. 3138, which directly undoes the deal from 2022 and explicitly calls for location tracking in digital license plates for passenger cars. 

To best protect consumers, EFF urges the legislature to not approve A.B. 3138. 

Consumers Could Face Serious Concerns If A.B. 3138 Becomes Law

In fact, our concerns about trackers in digital plates are stronger than ever. Recent developments have made location data even more ripe for misuse.

  • People traveling to California from a state that criminalizes abortions may be unaware that the rideshare car they are in is tracking their trip to a Planned Parenthood via its digital license plate. This trip may generate location data that can be used against them in a state where abortion is criminalized.
  • Unsupportive parents of queer youth could use GPS-loaded plates to monitor or track whether teens are going to local support centers or events.
  • U.S. Immigration and Customs Enforcement (ICE) could use GPS surveillance technology to locate immigrants, as it has already done by exploiting the ALPR location data that local police departments share with ICE to track immigrants’ movements. Vehicle location technology is one part of the broad range of surveillance tools in ICE’s hands, used to fortify its ever-growing “virtual wall.” 
  • There are also serious implications in domestic violence situations, where GPS tracking has repeatedly been found to be used as a tool of abuse and coercion by abusive partners. Most recently, two Kansas City families are jointly suing the company Spytec GPS after its technology was used in a double murder-suicide, in which a man used GPS trackers to find and kill his ex-girlfriend and her current boyfriend before killing himself. The families say the lawsuit is, in part, meant to raise awareness about the danger of making this technology and location information more easily available. There’s no reason to make tracking any easier by embedding it in state-issued plates. 

We Urge the Legislature to Reject A.B. 3138  

Shortly after California approved Reviver to provide digital license plates for commercial vehicles under A.B. 984, the company experienced a security breach that made it possible for hackers to track vehicles with Reviver digital license plates in real time via GPS. Privacy issues aside, this summer the state of Michigan also terminated its two-year-old contract with Reviver over the company’s failure to follow state law and its contractual obligations. That termination forced 1,700 Michigan drivers to go back to traditional metal license plates.

Reviver is the only company that currently has state authorization to sell digital plates in California, and it is the primary advocate for allowing tracking in passenger vehicle plates. The company says its goal is to modernize personalization and safety through digital license plate technology for passenger vehicles. But it hasn’t proven itself up to the responsibility of protecting this data. 

A.B. 3138 functionally gives drivers one choice of digital license plate vendor, and that vendor has already failed once to competently secure the location data collected by its products. It has now also failed to meet basic contractual obligations with a state agency. California lawmakers should think carefully about the clear dangers of vehicle location tracking, and about whether this company can be trusted to protect the sensitive location information of vulnerable populations, or of any Californian.  

Reposted from the EFF’s Deeplinks blog.

Jimmy Kimmel’s Use Of George Santos’ Cameo Videos Found To Be Fair Use

Would you believe that Disney’s famously copyright-maximalist lawyers have just brought us a nice victory for fair use?

Earlier this year, we wrote about disgraced former Congressman George Santos suing Disney and Jimmy Kimmel after Kimmel used some of Santos’ Cameo videos (which Kimmel had secretly requested) in a, well, somewhat trollish fashion. Santos, who was drummed out of Congress after facing a bunch of charges regarding his questionable handling of campaign funds, began advertising that he would record Cameo videos for between $350 and $500 apiece.

At one point, he bragged about how many people had paid him for Cameo videos. This prompted Kimmel to have his staff purchase some Cameo videos with absurd requests, which made Santos look silly. The videos were then played on Kimmel’s ABC late-night show. Santos claimed that the videos were purchased under a “personal” use license, which was much less expensive than a commercial use license.

Santos’ argument was that the videos were used “commercially,” which meant the license had been violated and the videos infringed his copyright. In my initial write-up of the case, I pointed out that Kimmel had a very strong fair use claim. Some commentators felt that Santos’ argument was a bit stronger than I made out, but it appears the judge in the case, Denise Cote, who has been involved in a number of high-profile copyright cases, agreed with me that it was fair use.

Notably, she granted Disney and Kimmel’s motion to dismiss on fair use grounds. That’s important because some people believe that fair use shouldn’t be decided so early in a case: either it should wait for the summary judgment stage or (much worse) be left for a jury to decide.

However, Cote says here that Kimmel’s use was pretty obviously fair use. She notes that the Second Circuit has said that in obvious fair use cases, you can find fair use at the motion to dismiss stage:

The Second Circuit has specifically acknowledged “the possibility of fair use being so clearly established by a complaint as to support dismissal of a copyright infringement claim.”

As in most fair use cases, the court went through the basic four factors test required to determine fair use. The court awarded the first factor (purpose of the use) to Disney/Kimmel because it was clearly about commentary on Santos:

In short, a reasonable observer would understand that JKL showed the Videos to comment on the willingness of Santos — a public figure who had recently been expelled from Congress for allegedly fraudulent activity including enriching himself through a fraudulent contribution scheme — to say absurd things for money. Thus, the Videos were used for political commentary and criticism, purposes that do not supersede the “objects” of the original Videos.

The fact that the use of the videos made Santos look bad doesn’t matter:

Santos’s argument that the defendants should not be able to “seek refuge in the fair use concept of transformation that they themselves manufactured through deceit” finds no support in copyright law. Defendants’ conduct may have been deceptive and unkind, but the Supreme Court in Warhol emphasized that whether a work is transformative turns on neither the “subjective intent of the user,” 598 U.S. at 544, nor the “stated or perceived intent of the artist.” Id. at 545 (citation omitted). A court must instead conduct “an objective inquiry into what use was made, i.e., what the user does with the original work.” Id. Here, the purpose of the defendants’ use was clearly for criticism and commentary of the Videos themselves and their author.

While it’s not always the case, the first factor is often the key to winning fair use. The finding that it favored Kimmel here makes the rest of the analysis less important, but even so, the remaining factors favor either no one or Kimmel anyway.

On the second factor, the nature of the work, the court says this is mostly neutral, but perhaps favors Kimmel/Disney slightly. The third factor, how much of the work was used, is also deemed neutral. Kimmel did use the entirety of the videos, but, as other courts have found, that’s fine if using the whole work is necessary for the fair use at hand.

Copying “the entirety of a work is sometimes necessary to make a fair use.” Swatch Group Management Services Ltd. v. Bloomberg L.P., 756 F.3d 73, 90 (2d Cir. 2014). The “ultimate question under this factor is whether the quantity and value of the materials used are reasonable in relation to the purpose of the copying.”….

…. The use of the Videos to criticize and comment on a public figure would have been undermined by showing less than the entirety of the Videos, because the audience would not know whether Santos had indeed said everything in the requests.

Then there’s the fourth factor: the effect on the market. As I had noted in my original piece about the complaint, it would be difficult to argue that Kimmel’s use would harm the market. And, indeed, that’s what the court found as well:

Santos argues that defendants’ use devalued the market for Cameo videos, including Santos’s, by “undermining the integrity” of the Cameo.com platform. Santos does not explain how any impact on the popularity of the Cameo platform — which is entirely speculative — impacts more specifically the public interest in the creative production of new expression. Moreover, the FAC identifies no harm to the potential or existing market for the Videos that Santos created for the defendants, other than the “very use at bar.” Swatch, 756 F.3d at 91 (citation omitted). Thus, this factor weighs in favor of fair use.

Put it all together and you have two mostly neutral factors and two that weigh towards fair use, and thus: fair use.

Taking all four factors into consideration, the defense of fair use is clearly established by the FAC and documents integral to it. The defendants’ use of the Videos was transformative; “transformative uses tend to favor a fair use finding because a transformative use is one that communicates something new and different from the original or expands its utility, thus serving copyright’s overall objective of contributing to public knowledge.”

The court also rejects the breach of contract claims, saying that those are basically arguing the same thing as the copyright claims, and are thus pre-empted.

It wouldn’t surprise me if Santos appeals, but this is a good, clean fair use win. Disney’s copyright lawyers aren’t exactly known for arguing on behalf of fair use, but in this case they were right to, and it’s nice to see the court agree.

Update: And, yup, Santos has already told the court that he’s appealing.

Ad Revenue On ExTwitter Still In Free Fall In Second Year Of Elon’s Reign

Turns out that when you tell advertisers to go fuck themselves, sue the advertisers who did so, and then promise you won’t do anything to stop the worst people in the world from spewing hate and bigotry on your platform, it might not be great for business.

Who knew? Elon, apparently.

Last week we noted that ad execs were saying that Elon’s latest antics were only making them even less interested in advertising on ExTwitter, but there hasn’t been as much talk lately about the financial situation the company is in.

In the first year after Elon took over, there were a number of reports suggesting ad revenue dropped somewhere between 50% and 70%. Elon himself has admitted that the company’s overall valuation is probably down by nearly 60%.

But most of that covered only the first year post-Elon. Since then, there’s been little data on how things have actually been going. Linda Yaccarino has insisted that many of the advertisers who left came back, though when people looked at the details, it appeared that the few who came back had only dipped their toes in the ExTwitter waters rather than fully returning.

And indeed, all we’ve been hearing this year is that Musk and Yaccarino are trying to woo back advertisers. Again. And again. Though, suing them isn’t doing them any favors.

However, buried in a recent Fortune article is the first time I’ve seen any data showing how badly the second year of Elon has gone. While the main focus of the article is on how Elon may have to sell some more of his Tesla stock to fund ExTwitter, it notes that ad revenue has continued to drop and was 53% lower than it was in 2023 (i.e., already after Elon had taken over, and many advertisers had bailed).

And the article says that ad revenue is down an astounding 84% from when Elon took over, based on an analysis by Bradford Ferguson, the chief investment officer at an asset management firm:

Ferguson based his assessment on internal second-quarter figures recently obtained by the New York Times. According to this report, X booked $114 million worth of revenue in the U.S., its largest market by far. This represented a 25% drop over the preceding three months and a 53% drop over the year-ago period.

That already sounds bad. But it gets worse. The last publicly available figures prior to Musk’s acquisition, from Q2 of 2022, had revenue at $661 million. After you account for inflation, revenue has actually collapsed by 84%, in today’s dollars.

Ouch.
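For anyone who wants to sanity-check that 84% figure, here is a minimal back-of-the-envelope sketch in Python using the numbers quoted above. The cumulative inflation factor is my rough assumption; the article doesn’t say exactly what deflator Ferguson used.

# Sanity check of the inflation-adjusted ad revenue drop described above.
# Revenue figures come from the article; the ~7% cumulative inflation
# factor for Q2 2022 -> Q2 2024 is an assumption for illustration.
q2_2022_revenue = 661_000_000   # last public pre-acquisition figure (Q2 2022)
q2_2024_revenue = 114_000_000   # U.S. ad revenue reported for Q2 2024
assumed_inflation = 1.07        # rough cumulative CPI inflation over two years

nominal_drop = 1 - q2_2024_revenue / q2_2022_revenue
real_drop = 1 - q2_2024_revenue / (q2_2022_revenue * assumed_inflation)

print(f"Nominal drop: {nominal_drop:.0%}")           # ~83%
print(f"Inflation-adjusted drop: {real_drop:.0%}")    # ~84%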

A separate report from Quartz (pulling from MediaRadar research) suggests the numbers aren’t quite that dire, but it still shows a 24% decline in 2024 compared to 2023. And when a 24% decline is the better report, you know you’re in serious trouble.

Advertisers apparently spent almost $744 million on X, formerly known as Twitter, during the first six months of 2024. That’s about 24% lower than the more than $982 million advertisers dropped on the platform in the first half of 2023, according to ad-tracking company MediaRadar.

No matter how you look at it, it appears that in the second year of Elon’s control, advertising revenue remains in free fall.

No wonder he’s resorted to suing. Platforming more awful people and undermining each deal that Yaccarino brings in hasn’t magically helped turn things around.

Anyway, for no reason at all, I’ll just remind people that Elon’s pitch to investors to help fund some of the $44 billion takeover of Twitter was that he would increase revenue to $26.4 billion by 2028. And, yes, the plan was to diversify that revenue, but his pitch deck said that ad revenue would generate $12 billion by 2028. This would mean basically doubling the ~$6 billion in ad revenue the company was making at the time Elon purchased it. But now that’s been cut to maybe $1.5 billion and probably less.
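To put that shortfall in perspective, here is a rough sketch using only the figures above; treating them as simple annual run rates is my simplification, not something from the pitch deck.

# Rough comparison of the pitch-deck ad revenue target vs. current estimates.
# All dollar figures (in $B/year) come from the article above.
ad_revenue_at_purchase = 6.0    # ~$6B when Musk bought Twitter
target_2028 = 12.0              # pitch deck's projected ad revenue for 2028
current_estimate = 1.5          # "maybe $1.5 billion and probably less"

print(f"Original plan: {target_2028 / ad_revenue_at_purchase:.0f}x growth by 2028")        # 2x
print(f"Now required: {target_2028 / current_estimate:.0f}x growth to hit the same target")  # 8x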

I’m guessing that Elon and Linda might fall a wee bit short of their target here.

Judge O’Connor Says It’s Crazy To Think Tesla Is Connected To ExTwitter; Forces Media Matters To Pay Elon’s Fees

Apparently, Judge Reed O’Connor doesn’t think that owning a massive amount of Tesla stock constitutes a conflict of interest when it comes to judging Elon Musk’s legal battles.

Last week, we were briefly surprised when infamously partisan Judge Reed O’Connor recused himself from Elon’s nonsense SLAPP suit against GARM and some advertisers.

As we had reported back in June, Media Matters had raised the issue that Judge O’Connor owns a ton of Tesla stock, which arguably is a conflict of interest in ExTwitter’s lawsuit against Media Matters (which Judge O’Connor had refused to dismiss despite its obvious problems). That matter had still been pending last week when O’Connor surprised lots of people (almost certainly including Elon’s lawyers) by recusing himself from the GARM suit.

We had wondered if it was a sign that Judge O’Connor realized how bad it looked for him to hold Tesla stock while repeatedly ruling on behalf of Elon. But, no, it quickly came out that the issue was almost certainly that O’Connor also owned stock in Unilever, one of the firms that Elon was suing in the case.

And then, just days later, it was made clear that Judge O’Connor sees no conflict in owning Tesla stock. He not only rejected Media Matters’ request that Elon be forced to list Tesla as an interested party, but he also made Media Matters pay Elon’s legal fees over this matter.

Judge O’Connor insists it’s just crazy to suggest that Tesla is somehow an interested party:

First, there is no evidence that shows Tesla has a direct financial interest in the outcome of this case. Tesla neither directly nor indirectly holds equity in X, Tesla is not a director or advisor, and it does not participate in the affairs of X. In other words, there is no indication that Tesla has any control over X or any financial ties to X, and Defendants do not claim as much. The question for disclosure is whether Tesla has a “legal or equitable interest” in X. Defendants merely point to news articles that report some blurred lines between Tesla and X that do not rise to the level of financial interest. These articles do not amount to evidence of a financial interest. Tesla is a publicly traded company, with tens of thousands of stockholders, its own board of directors, and external auditors. X is a privately owned company. The mere assertions that Musk owns a constellation of companies, some former Tesla employees now work at X, and that Tesla leased workspace from X do not support a finding that Tesla and X are not separate legal entities or that they share a financial interest.

Later, in response to points about Tesla stock fluctuating in direct connection to Elon doing stupid shit on ExTwitter, O’Connor notes in a footnote: “Musk, who is neither a plaintiff nor defendant in this suit.”

C’mon. None of us were born yesterday.

Elon is totally driving this lawsuit. He was the one who announced that this lawsuit would be coming based on a tweet that he saw. And, obviously, Tesla’s stock is tied to the nonsense going on at ExTwitter. He sold a ton of Tesla stock to do the deal, and there are constant reports that he’s almost certainly going to need to sell more to keep ExTwitter afloat.

None of this is that big of a surprise, though. Talking to lawyers, I didn’t find one who thought that O’Connor would buy this argument (which is partly why his recusal in the GARM case took me by surprise, before it was revealed that that was due to the Unilever stock).

But just to add absolute insult to injury, O’Connor said that Media Matters, a small non-profit, has to cover the legal fees of ExTwitter, a company owned by the world’s richest man, over this motion:

Defendants’ Motion to Compel does not have a basis in law. It lacks a reasonable basis in law because the motion is motivated by an effort to force recusal, as opposed to disclosure of unknown information. Compelling disclosure is proper only when a party lacks necessary information. Carr, 2024 WL 1675185, at *1 (compelling disclosure of unknown LLC members); Steel Erectors, 312 F.R.D. at 677 (compelling disclosure of an unknown parent corporation). The information Defendants seek to compel was not unknown to them.

A Motion to Compel Corrected Certificate of Interested Persons when that information was known appears to be unprecedented. Indeed, Defendants did not cite a single case in which a motion to compel a corrected certificate of interested person was brought under this posture, much less one in which the effort was successful under the Local Rule 3.1 “financially interested” standard. Additionally, Defendants’ motion has no basis in fact. Local Rule 3.1’s clear incorporation of “financial interest” requires “legal or equitable interest.” 28 U.S.C. § 455(d)(4). Defendants failed to show facts that X’s alleged connection to Tesla meets this standard. Instead, it appears Defendants seek to force a backdoor recusal through their Motion to Compel. Gamesmanship of this sort is inappropriate and contrary to the rules of the Northern District of Texas.10 Accordingly, Defendants’ Motion is not substantially justified and attorney’s fees are appropriate.

Already, Judge O’Connor’s ridiculous decision to order discovery in this case, rather than properly dismissing it for lack of jurisdiction (among many other reasons), has resulted in layoffs at the non-profit. The impact of this ruling and the fee shifting is likely to do even more damage.

From the beginning, it has been clear that this was a SLAPP suit by Elon, angry that Media Matters (accurately) called out how ads on ExTwitter were appearing next to literal neo-Nazi content. The complaint from ExTwitter admitted that Media Matters accurately reported what it found. The only purpose of this lawsuit is to try to bleed Media Matters dry and to warn away other critics from doing similar reporting.

There’s a reason that most anti-SLAPP laws include fee shifting going the other way (making the wealthy vexatious plaintiffs pay the legal fees of the weaker defendants). Seeing O’Connor basically flip the script here is yet another reason why anti-SLAPP laws are so important.

Having Judge O’Connor let the case move forward to discovery in the first place was already a travesty. Now shifting fees onto Media Matters for calling out the potential conflict regarding Tesla’s connection to the case just feels like O’Connor, somewhat gleefully, twisting the knife that Elon plunged into the non-profit.

Neil Gorsuch Highlights Aaron Swartz As An Example Of Overreach In Criminal Law

Well, here’s something unexpected. Apparently Supreme Court Justice Neil Gorsuch has a new book coming out this week called “Over Ruled: The Human Toll of Too Much Law.” And, one of the examples in the book is about the ridiculous criminal case against Aaron Swartz and his eventual tragic decision to take his own life while facing the possibility of decades in prison for the “crime” of downloading too many research papers while on a college campus that had an unlimited subscription to those research papers.

At the time, we wrote about the travesty of the case and the tragedy of how it all ended.

But it’s still somewhat surprising to find out that the case has been wedged in Gorsuch’s mind as an example of prosecutorial overreach and over-criminalization.

David French has an interview with Gorsuch about the book in the NY Times, and the Swartz case is the first example Gorsuch brings up:

French: This was an interesting element of the book to me and something that people who are not familiar with your jurisprudence might not know — it’s that you’ve long been a champion of the rights of criminal defendants. It struck me that some of the stories here in the book, of the way in which the complexity of criminal law has impacted people, are among the most potent in making the point. Is there a particular story about the abuse of criminal law that stands out to you as you’re reflecting back on the work?

Gorsuch: I would say Aaron Swartz’s story in the book might be one example. Here’s a young man, a young internet entrepreneur, who has a passion for public access to materials that he thinks should be in the public domain. And he downloads a bunch of old articles from JSTOR.

His lawyer says it included articles from the 1942 edition of the Journal of Botany. Now, he probably shouldn’t have done that, OK?

But JSTOR and he negotiated a solution, and they were happy. And state officials first brought criminal charges but then dropped them. Federal prosecutors nonetheless charged him with several felonies. And when he refused to plea bargain — they offered him four to six months in prison, and he didn’t think that was right — he wanted to go to trial.

What did they do?

They added a whole bunch of additional charges, which exposed him to decades in federal prison. And faced with that, he lost his money, all of his money, paying for lawyers’ fees, as everybody does when they encounter our legal system. And ultimately, he killed himself shortly before trial. And that’s part of what our system has become, that when we now have, I believe, if I remember correctly from the book, more people now serving life sentences in our prison system than we had serving any prison sentence in 1970. And today — one more little item I point out — one out of 47 Americans is subject to some form of correctional supervision (as of 2020).

I disagree with Gorsuch on many, many things. On the two big internet cases from this last term, Gorsuch joined the Lalaland takes of Justices Alito and Thomas (in both the Moody and the Murthy cases, Gorsuch was the third vote, alongside Alito and Thomas, for nonsense). So it seems a bit shocking for Gorsuch to be somewhat on the side of Swartz, who would have eviscerated Gorsuch’s position in both of those cases.

Of course, Gorsuch is also wrong that Swartz “probably shouldn’t have done that.” MIT had a site license that enabled anyone on campus to download as many articles from JSTOR as they wanted. It didn’t say “unless you download too many.”

But at least he recognizes how ridiculous the criminal case Swartz faced a dozen years ago was. For well over a decade, we’ve been highlighting how dangerous the CFAA is as a law. It is so easily abused by prosecutors that it’s been dubbed “the law that sticks.” It sticks because when no other law clearly covers the conduct, prosecutors will often cook up a CFAA violation, as they did with Aaron. And it remains ridiculous that, to this day, nothing has been done to prevent another Aaron Swartz-type scenario from happening again.

Perhaps, with Gorsuch bringing it up again in his book and in this interview, it can renew some of the interest, seen in the months following Aaron’s untimely death, in making real changes to the laws that caused it. Having a Justice like Gorsuch call out the terrible and ridiculous situation the CFAA created seems like a good reason for Congress to revisit that law, rather than cooking up new nonsense like KOSA.

Elon Musk’s SuperPAC Is Misleading (Some) Voters Into Thinking They Registered To Vote; Collecting Tons Of Data

If you’re a swing state voter who thought AmericaPAC was helping you register to vote, think again. The Elon Musk-backed SuperPAC seems more interested in your personal info than your civic participation. It appears to be misleading visitors in order to collect all sorts of data, specifically on swing state voters, according to an incredible CNBC report.

I tend to think that the discourse around all three of the following things is overblown: the impact of SuperPACs on elections, concerns about “dark patterns,” and how much the owner of a social media platform can influence an election. But Elon Musk sure seems to be working overtime to change my mind on all three things.

Remember how Musk took over Twitter because he thought its leadership was managing the company in too political a manner? Indeed, while he was in the process of trying to buy the site, one of the things he said is that the site needed to be “politically neutral” to “deserve public trust.”

[Screenshot of Musk’s tweet]

Since taking over Twitter, renaming it to X, and reinstating the worst people in the world, Elon continues to fall deeper and deeper into MAGA-fueled fantasyland, leading to his official endorsement of Donald Trump and turning ExTwitter into an all-day, every-day promotional campaign for the former President.

Around the same time that he endorsed Trump, it was announced that Musk would be donating to a SuperPAC that was created to support Trump. Some of the details have been disputed, but Musk admits he created a SuperPAC to support Trump.

“Now what I have done is that I have created a Pac or Super Pac or whatever you want to call it,” he said. “It is called the America Pac.”

Now, the whole thing with SuperPACs is that they’re supposed to be independent from the campaign. This is a convenient lie for everyone, so it’s rarely enforced. But, earlier this year, the Federal Election Commission said that the independence is really only around advertising. It said they can coordinate on canvassing.

That’s a very big deal, because these days canvassing and “get out the vote” campaigns appear to be the keys to winning elections.

And that brings us to the CNBC article, which notes that AmericaPAC has been running sketchy ads that then push users to a site where it claims it will help register them to vote. But how it handles users depends on where they live:

The website says it will help the viewer register to vote. But once a user clicks “Register to Vote,” the experience he or she will have can be very different, depending on where they live.

If a user lives in a state that is not considered competitive in the presidential election, like California or Wyoming for example, they’ll be prompted to enter their email addresses and ZIP code and then directed quickly to a voter registration page for their state, or back to the original sign-up section.

But for users who enter a ZIP code that indicates they live in a battleground state, like Pennsylvania or Georgia, the process is very different.

Rather than be directed to their state’s voter registration page, they instead are directed to a highly detailed personal information form, prompted to enter their address, cellphone number and age.

If they agree to submit all that, the system still does not steer them to a voter registration page. Instead, it shows them a “thank you” page.

So that person who wanted help registering to vote? In the end, they got no help at all registering. But they did hand over priceless personal data to a political operation.

This is… not normal. Yes, political campaigns do all sorts of things to collect data on potential voters, but that’s not supposed to involve actively misleading them. And targeting the enhanced data collection in swing states suggests that the PAC could seek to focus on activating likely Trump voters, while decreasing turnout of likely Harris voters.

Now, I’ve pointed out before that people freak out too much over claims that everything is “election interference,” but collecting a ton of personal data on someone, telling them that you’re helping to register them to vote, and then not actually registering them to vote… kinda does seem like a form of fraud, doesn’t it?

The report notes that people who end up on this page are not given any indication that the site they’re on is designed to support Trump. Instead, it’s made to look like a generic form to help you register to vote.

The PAC’s website offers no indication one way or another what the group’s political leaning is. But in its federal filings, the group discloses that all of its work is designed to either help Trump or hurt his opponent.

When you put all of this together, it’s fairly concerning. The PAC is not upfront with visitors, and then is potentially fraudulently suggesting that it’s helping them register to vote, when it’s actually just collecting a ton of valuable information on people in important swing states (while not actually registering them to vote). Combine that with the fact that the SuperPAC has been engaging in canvassing activities (where it can coordinate with the campaign) and the whole thing seems quite sketchy:

“What makes America PAC more unique: it is a billionaire-backed super PAC focused on door-to-door canvassing, which it can conduct in coordination with a presidential campaign,” Fischer said.

No wonder Elon is so quick to insist that others are engaged in attempts at election interference.

Justice Alito Almost Messed Up The Internet; Then He Threw A Temper Tantrum

It turns out the internet was one Sam Alito petulant tantrum away from being a total disaster. In two key First Amendment cases, Alito was given the majority opinion to write. And, in both of them, his insistence on obliterating the old boundaries of the First Amendment caused other Justices to switch sides – and Alito to act like a spoiled brat.

This year, the Supreme Court session ran later than usual. The Justices usually finish up by the end of June, but this year the term extended to July 1st. There were, obviously, a bunch of “big” decisions (Presidential immunity! Chevron deference!) that were held to the very end, including the two big internet cases: the NetChoice cases and the Murthy case.

As people awaited the decisions, there was a fair bit of SCOTUSology as court experts (and non-experts) speculated based on the number of decisions written by each Justice (and which months the cases were heard in) as to which Justice would have the majority decisions in remaining cases. I heard from quite a few such experts who expected that Alito would have the majority decision in the NetChoice cases, given that the other Justices all seemed to have majority opinions from February cases, and Alito’s name seemed to be missing.

Some people were surprised because, in the oral arguments for basically all of the internet cases, Alito seemed quite out of step with the rest of the Court (and reality). When the decision finally came out, saying that the lower courts hadn’t done the proper analysis for a “facial challenge,” it sent the cases back to the lower courts for a redo. But the majority opinion included some very important commentary about how the First Amendment still applies to social media editorial discretion. The overall ruling was technically a unanimous decision, but some noted that Justice Alito’s “concurrence” read like it had been written to be the majority opinion. It delves deeper into the facts of the case than a concurrence normally would (the majority opinion normally handles that).

Oh, and one other weird thing: in that final week of June, people were confused by Justice Alito not showing up to a couple of decision days, and his absence was never explained. Until now.

CNN now has quite an incredible insider’s tale of how Justice Alito had, in fact, been given the job of writing the majority opinion in the NetChoice cases, but lost it because he tried to push the decision too far into saying that states could regulate content moderation.

Alito, while receptive to the 5th Circuit’s opinion minimizing the companies’ speech interests, emphasized the incompleteness of the record and the need to remand the cases. Joining him were fellow conservatives Clarence Thomas and Neil Gorsuch and, to some extent, Barrett and Jackson.

On the other side was Kagan, leaning toward the 11th Circuit’s approach. She wanted to clarify the First Amendment implications when states try to control how platforms filter messages and videos posted by their users. She was generally joined by Chief Justice John Roberts and Justices Sonia Sotomayor and Brett Kavanaugh.

Alito began writing the court’s opinion for the dominant five-member bloc, and Kagan for the remaining four.

It’s also interesting that Justice Jackson was siding with Alito. During oral arguments, Justice Jackson asked some… odd questions, leading some to worry about how she might come down. The CNN report suggests those fears were legitimate.

Either way, Alito pushed his views too far and caused both Barrett and Jackson to bail out.

But when Alito sent his draft opinion around to colleagues several weeks later, his majority began to crumble. He questioned whether any of the platforms’ content-moderation could be considered “expressive” activity under the First Amendment.

Barrett, a crucial vote as the case played out, believed some choices regarding content indeed reflected editorial judgments protected by the First Amendment. She became persuaded by Kagan, but she also wanted to draw lines between the varying types of algorithms platforms use.

“A function qualifies for First Amendment protection only if it is inherently expressive,” Barrett wrote in a concurring statement, asserting that if platform employees create an algorithm that identifies and deletes information, the First Amendment protects that exercise of editorial judgment. That might not be the situation, Barrett said, for algorithms that automatically present content aimed at users’ preferences.

Kagan added a footnote to her majority opinion buttressing that point and reinforcing Barrett’s view. Kagan wrote that the court was not dealing “with feeds whose algorithms respond solely to how users act online – giving them the content they appear to want, without any regard to independent content standards.”

Barrett’s position has worried some, as it suggests that algorithmic recommendations may not be protected by the First Amendment. This would upset a bunch of what people thought was settled law regarding things like search engine recommendations. However, the hope is that if such a case comes before the Court (which it almost certainly will…), a fuller briefing on the record would clarify that algorithmic recommendations are still speech.

As we noted, Alito’s concurrence reads as pretty petulant. It dismisses the majority’s “First Amendment applies to social media” explanation as “nonbinding dicta.” CNN details that this was him being angry that he lost the majority in that case.

But the key reason he lost control over the decision seems to be that he, unlike the eventual majority, would have sided a lot more with the Fifth Circuit’s ruling, which upended a century’s worth of First Amendment law.

Alito had the backing of only two justices in the end, Thomas and Gorsuch. He expressed sympathy for state efforts to restrict what, in an earlier phase of the Texas case Alito called “the power of dominant social media corporations to shape public discussion of the important issues of the day.”

In his separate July 1 opinion for a minority, Alito pointed up why states might want to regulate how platforms filter content: “Deleting the account of an elected official or candidate for public office may seriously impair that individual’s efforts to reach constituents or voters, as well as the ability of voters to make a fully informed electoral choice. And what platforms call ‘content moderation’ of the news or user comments on public affairs can have a substantial effect on popular views.”

Like Oldham, Alito took jabs at the “sophisticated counsel” who challenged the state regulations.

The same article notes that Alito also lost the majority on another “Fifth Circuit misunderstands the First Amendment” case: the one involving Sylvia Gonzalez, who was retaliated against by the mayor for her efforts to shake up the local government. The Fifth Circuit originally said this was totally fine. Eventually, the Supreme Court sent the case back to the Fifth Circuit to try again.

But again, Alito tried to go too far:

When the justices voted on the case in March, the majority agreed that the 5th Circuit erred in the standard it used. Alito was assigned the opinion.

But as he began writing, he went further than the other justices in his review of Gonzalez’s case. Alito and his colleagues realized he couldn’t “hold five,” as the expression goes, for a majority.

A new majority agreed to dispatch the case with a limited rationale in unsigned opinion. Rejecting the 5th Circuit’s reasoning, the Supreme Court said the 5th Circuit had applied an “overly cramped view” of the court’s precedent for when people may sue for First Amendment retaliation claims. The high court noted that Gonzalez could not show evidence of whether officers handled similar situations differently because her situation, involving the alleged removal of a document, was exceedingly rare.

Alito also wrote a concurrence for that case, but here he went on a long rant basically explaining why even if the Fifth Circuit used the wrong standard, there were lots of reasons why Gonzalez should have lost her case. Basically, if he had written the majority opinion, all of this would have qualified as “nonbinding dicta” under Alito’s own standard. Now, at least, it’s just a concurrence.

But, apparently, because Alito was ticked off that he couldn’t “hold five” in either of these cases, it caused him to take his ball and go home (i.e., just not show up at the Court on decision days):

On June 20, when the chief justice announced the opinion in Gonzalez v. Trevino, Alito’s chair at the bench was empty. Alito missed that day, as a total of four opinions were handed down, and the next, June 21, when the justices released five other opinions.

Justices sometimes skip one of these final days of the annual session, but usually there’s an obvious reason for the absence, such as travel to a previously scheduled speech. Court officials declined to provide any explanation.

Alito returned for the final four announcement days of the term, yet sometimes appeared preoccupied. On the last day, when Kagan announced the decision in the NetChoice case, Alito was reading through material he had brought along to the bench.

Poor baby.

In both cases, Alito’s view of the First Amendment seems disconnected from reality and history. And, in both cases, he still had a chance to write the majority opinion (sending both cases back down on what are, effectively, technicalities). But, in both cases, he was unable to write a reasonable opinion, causing his colleagues on the bench to jump ship to more reasonable rulings.

And, in response, he decided to just sulk like a teenager who didn’t get his way. In the end, that left us with a much better, more First Amendment supportive majority decision (in both cases). But it’s truly incredible how close we came to bad decisions in each, and how both of those flipped due to Alito’s insistence on pushing his terrible, unsupported ideas about free speech.

Jim Jordan Demands Advertisers Explain Why They Don’t Advertise On MAGA Media Sites

Remember last month when ExTwitter excitedly “rejoined GARM” (the Global Alliance for Responsible Media, an advertising consortium focused on brand safety)? And then, a week later, after Rep. Jim Jordan released a misleading report about GARM, Elon Musk said he was going to sue GARM and hoped criminal investigations would be opened?

Unsurprisingly, Jordan has now ratcheted things up a notch by sending investigative demands to a long list of top advertisers associated with GARM. The letter effectively accuses these advertisers of antitrust violations for choosing not to advertise on conservative media sites, based on GARM’s recommendations on how to best protect brand safety.

The link there shows all the letters, but we’ll just stick with the first one, to Adidas. The letter doesn’t make any demands specifically about ExTwitter, but does name the GOP’s favorite media sites, and demands to know whether any of these advertisers agreed not to advertise on those properties. In short, this is an elected official demanding to know why a private company chose not to give money to media sites that support that elected official:

Was Adidas Group aware of the coordinated actions taken by GARM toward news outlets and podcasts such as The Joe Rogan Experience, The Daily Wire, Breitbart News, or Fox News, or other conservative media? Does Adidas Group support GARM’s coordinated actions toward these news outlets and podcasts?

Jordan is also demanding all sorts of documents and answers to questions. He is suggesting strongly that GARM’s actions (presenting ways that advertisers might avoid, say, having their brands show up next to neo-Nazi content) were a violation of antitrust law.

This is all nonsense. First of all, choosing not to advertise somewhere is protected by the First Amendment. And there are good fucking reasons not to advertise on media properties most closely associated with nonsense peddling, extremist culture wars, and just general stupidity.

Even more ridiculous is that the letter cites NAACP v. Claiborne Hardware, which is literally the Supreme Court case establishing that group boycotts are protected speech. It’s the case that says refusing to support a business as a form of protest, while economic activity, is still protected speech and can’t be regulated by the government (and it’s arguable whether what GARM does is even a boycott at all).

As the Court noted, in holding that organizing a boycott was protected by the First Amendment:

The First Amendment similarly restricts the ability of the State to impose liability on an individual solely because of his association with another.

But, of course, one person who is quite excited is Elon Musk. He quote tweeted (they’re still tweets, right?) the House Judiciary’s announcement of the demands with a popcorn emoji:

[Screenshot of Musk’s quote tweet with a popcorn emoji]

So, yeah. Mr. “Free Speech Absolutist,” who claims the Twitter files show unfair attempts by governments to influence speech, now supports the government trying to pressure brands into advertising on certain media properties. It’s funny how the “free speech absolutist” keeps throwing the basic, fundamental principles of free speech out the window the second he doesn’t like the results.

That’s not supporting free speech at all. But, then again, for Elon to support free speech, he’d first have to learn what it means, and he’s shown no inclination of ever doing that.

First Mover Advantage Shows How Copyright Isn’t Necessary To Protect Innovative Creativity

One of the arguments sometimes made in defense of copyright is that without it, creators would be unable to compete with the hordes of copycats that would spring up as soon as their works became popular. Copyright is needed, supporters say, to prevent less innovative creators from producing works that are closely based on new, successful ideas.

However, this approach has led to constant arguments and court cases over how close a “closely based” work can be before it infringes on the copyright of others. A good example of this is the 2022 lawsuit involving Ed Sheeran, where it was argued that using just four notes of a scale constituted copyright infringement of someone else’s song employing the same tiny motif. A fascinating new paper looks at things from a different angle. It draws on the idea of “first-mover advantage”, the fact that:

individuals that move to a new market niche early on (“first movers”) obtain advantages that may lead to larger success, compared to those who move to this niche later. First movers enjoy a temporary near-monopoly: since they enter a niche early, they have little to no competition, and so they can charge larger prices and spend more time building a loyal customer base.

The paper explores the idea in detail for the world of music. Here, first-mover advantage means:

The artists and music producers who recognize the hidden potential of a new artistic technique, genre, or style, have bigger chances of reaching success. Having an artistic innovation that your competitors do not have or cannot quickly acquire may become advantageous on the winner-take-all artistic market.

Analyzing nearly 700,000 songs across 110 different musical genres, the researchers found evidence that first-mover advantage was present in 91 of the genres. The authors point out that there is also anecdotal evidence of first-mover advantage in other arts:

For example, Agatha Christie—one of the recognized founders of “classical” detective novel—is also one of the best-selling authors ever. Similarly, William Gibson’s novel Neuromancer—a canonical work in the genre of cyberpunk—is also one of the earliest books in this strand of science fiction. In films, the cult classic The Blair Witch Project is the first recognized member of the highly successful genre of found-footage horror fiction.

Although copyright may be present, first-mover advantage does not require it to operate – it is simply a function of being early with a new idea, which means that competition is scarce or non-existent. If further research confirms the wider presence of first-mover advantage in the creative world – for example, even where sharing-friendly CC licenses are used – it will knock down yet another flimsy defense of copyright’s flawed and outdated intellectual monopoly.

Follow me @glynmoody on Mastodon and on Bluesky. Originally posted to Walled Culture.

Free Speech Absolutist Elon Musk Removes Tweets Revealing Ted Cruz Fundraising Notes

If ExTwitter is the bastion of free speech, you would think that it would allow for the publishing of newsworthy documents revealing a politician’s funding briefings, right? Apparently not when that politician is politically aligned with Elon Musk, whose commitment to open discourse appears to be about as floppy as the Cybertruck’s giant windshield wiper.

It’s been a little while since we’ve had one of these posts, but it remains important: Elon Musk claims to be a “free speech absolutist” and repeatedly insists that he bought Twitter and turned it into ExTwitter to “bring back” free speech. However, over and over again we see him delete speech, often on ideological grounds.

Remember, Musk claims to be a “free speech absolutist.”

[Screenshot of Musk’s “free speech absolutist” tweet]

He also claims, bizarrely, that free speech means “that which matches the law” (which seems to contradict his claims above about disobeying orders from governments to block certain speech).

[Screenshot of Musk’s tweet]

He also repeatedly claims that ExTwitter “will fight for your freedom to speak” and that “Freedom of speech is the bedrock of democracy. Without it, America ends.”

[Screenshot of Musk’s tweets]

But, of course, the second someone semi-powerful whom Elon agrees with is aggrieved, well, down go the tweets. Witness the situation faced by reporter Pablo Manriquez, who ended up with the briefing notes that some poor schlub of a Ted Cruz staffer accidentally left somewhere to be picked up.

Manriquez went to ExTwitter, home of “free speech,” to report on what he got his hands on in a nice thread of posts with images of all the documents. Or maybe not:

[Screenshot of Manriquez’s thread, with the middle posts removed]

Yeah, that image is the current entirety of Pablo’s nine-post thread. Only the first and last tweets are shown, and all seven in the middle — the ones that at one point showed the documents in question — have been removed because, as the screenshot shows, “This Post violated the X Rules.”

I went through “The X Rules” and couldn’t find anything this credibly violated. The closest would have to be the rule against publishing “private information.” But that rule describes private information as things like “home phone number and address.”

Going through the more detailed policy on private information, I still don’t see anything in this document dump that could possibly qualify. The policy also says that the company takes into account what type of info is being shared, who is sharing it, and why. All of those factors suggest this did not violate the policy, as it’s information in the public interest, shared as reporting, in a manner that does not really violate anyone’s privacy or put anyone at risk (except of embarrassment).

Now, it is true that a few of the documents show the phone number of the Cruz staffer who will be tagging along for the meetings. So, arguably, you could say that would trigger a privacy violation as well. But not all of the removed tweets had that. And I just did a quick search on the staffer’s name and “phone number”, and the top Google result shows the exact same phone number. So it’s not exactly “private” information. Some of the docs also show some other phone numbers, or the names of family members, but nothing that seems particularly sensitive. Indeed, much of it appears to be copied from public bios that mention the family members.

But, fine, if Musk/Cruz defenders want to insist that this is obviously still a violation of the policy on private information then… wouldn’t the same be true of the contents of Hunter Biden’s laptop?

I can see no world in which the information from Hunter Biden’s laptop is not more private than some briefing notes telling Ted Cruz to ask Ron Lauder to donate the maximum possible, a combined $119,200, to his various campaign committees and political PACs. It’s a nice way to “legally” donate way more than what the public believes are the official limits on individual campaign donations.

Anyway, Musk’s attempt to block these tweets from being shared didn’t work very well. The ThreadReader app captured them all, and I’ll include them below as well (though I’ll blur out some info to be nice, not because I think it needs to be blurred). Meanwhile, both Newsweek and Business Insider reported on the details of the documents, highlighting how newsworthy they are.

To be clear: there’s nothing nefarious in these docs. I can guarantee that every Senator has similar briefing notes revealing similar requests for money. It is, however, revealing to the public how the fundraising game is played, as the Business Insider piece notes. And that makes it extremely newsworthy.

Publishing these docs may be embarrassing, but it breaks no laws. So Musk’s claim that his definition of free speech matches the law is already shown to be bullshit.

And, of course, as we’ve always said, it’s Musk’s platform. He is absolutely free to have whatever rules in place he wants and to delete whatever content he thinks should be deleted. That’s part of his own free speech rights.

But the same was true of Twitter before Musk took it over. It wasn’t an “attack on free speech” when Twitter removed some content that violated its rules, nor is it one when Musk does it.

It would just be nice if Musk and/or his fans would recognize that he’s no more of a “free speech” warrior than the old Twitter was. Indeed, as we’ve highlighted, the old Twitter was actually willing to stand up to more government demands and push back on real attacks on free speech way more often than Musk’s ExTwitter has.

‘Today We Save Our Children’ Says Governor Hochul, Signing Bill That Will Not Save Anyone

New York Governor Kathy Hochul’s response to the horrifying shootings in Buffalo in 2022 was not to look for ways to limit access to guns or improve mental health care. It was not to look into why law enforcement ignored the threats that the shooter had made, which they were aware of. It was not to figure out why the 911 dispatcher who answered the first call about the shooting hung up on the caller after getting mad at them for whispering.

No, it was to blame the internet.

Blaming the internet is a very convenient scapegoat for politicians who are in over their heads with societal-level problems.

On Thursday, Hochul became the living embodiment of the “won’t someone please think of the children” meme. She gleefully signed a clearly unconstitutional bill that will not protect children, and which will likely do real harm. She signed the SAFE For Kids Act, which bans algorithmic feeds for kids. In signing the bill, she literally said:

“Today, we save our children.”

There are just a few problems with this, all of which Hochul’s office (and the bill’s sponsors) have been told about, only to dismiss them as “talking points from big tech.”

Problem 1: There remains no study showing that algorithmic feeds are somehow “addictive” or even a problem. It’s all based on vibes (and adults who seem unable to put down their own phones).

Problem 2: What actual studies show is that if you force chronological feeds on people, a few things happen, none of which “save our children.” First, users get annoyed because they see less of the stuff they go to social media for. This doesn’t make them use social media less; it just makes them switch to other social media. It also exposes those on the chronological feed to more untrustworthy content and disinformation. I’m not sure why Kathy Hochul thinks that exposing kids to more disinformation is “saving our children,” but someone should ask her.

Problem 3: This bill requires age verification, which has already been ruled to be unconstitutional by multiple courts. It is also a privacy nightmare, as has been described multiple times in the past. Creating a world that puts kids’ private data at risk is not “saving our children.”

Problem 4: The requirement about how websites can order content is just a blatantly obvious First Amendment infringement. I mean, just imagine if the NY legislature told a newspaper that it could no longer prioritize some headlines over others and had to lay out the newspaper in the order the stories were written? Everyone would immediately recognize the First Amendment problems with such a law. But this is no different.

Problem 5: Algorithms are a hugely important tool in keeping kids safe online, by minimizing or hiding more harmful or problematic content. And Hochul and the NY legislature are telling social media companies that such tools must be removed from their arsenal.

Hochul told a reporter, “we’ve checked to make sure, we believe it’s constitutional.” And, that’s just laughable. Checked with whom? Every attempt I saw to call out these concerns was brushed off as “just spewing big tech’s talking points.”

The Constitution is not a “big tech talking point.” What the actual research shows is not a “big tech talking point.”

I’m not against chronological feeds as a general concept. They’re great for those that want them. Lots of services already offer them as an option. But mandating them, and especially mandating them for certain ages (necessitating dangerous age verification), doesn’t solve any legitimate problem and makes it harder for trust & safety teams to actually help protect kids.

I recognize that this signing happened the same day that Hochul’s approval ratings and favorability hit all-time lows. So it’s no surprise that she’s reaching for populist nonsense and embracing moral panics. But perhaps she should try doing things that actually help, rather than things already shown to be harmful?

Elon Sued His Critics, But Reporters Keep Exposing How He’s Monetizing Hate

There’s a type of marginally frustrating reporting where a reporter searches social media for [insert bad thing], finds some examples of said [bad thing], and writes a story about “This Platform Allows [Bad Thing]” followed by lots of public commentary about how the platforms don’t care/don’t do enough, etc. etc.

Let me let you in on a little secret: there are more [bad things] on the internet than you can reasonably think of. If you come up with a big enough list of [bad things] to block, people will just come up with more [bad things] you haven’t thought of. People are creative that way.

These stories are a mixed bag. They are accurate but not particularly enlightening. In our latest Ctrl-Alt-Speech, former Twitter Head of Trust & Safety Yoel Roth and I discussed these kinds of stories a little bit. He noted companies should do more internal red teaming, but solely to prevent such negative PR hits, rather than as an actual trust & safety strategy.

However, I’m reporting on the latest from NBC because it’s about ExTwitter allowing ads on hateful hashtags like #whitepower, #whitepride, and #unitethewhite.

Elon Musk’s social media app X has been placing advertisements in the search results for at least 20 hashtags used to promote racist and antisemitic extremism, including #whitepower, according to a review of the platform. 

NBC News found the advertisements by searching various hashtags used to promote racism and antisemitism, and by browsing X accounts that often post racial or religious hatred. The hashtags vary from obvious slogans such as #whitepride and #unitethewhite to more fringe and coded words such as #groyper (a movement of online white nationalists) and #kalergi (a debunked theory alleging a conspiracy to eliminate white people from Europe).

Elon could make a reasonable response: that while this looks bad, the reality is that it’s simply impossible to anticipate every possible awful hashtag and prevent ads from running against them.

It’s easy to see a few hashtags and say “gosh, that’s awful, how could that happen,” without realizing that millions of hashtags are used every day. Even if ExTwitter came up with a blocklist of “bad” hashtags, some would still get through and eventually some reporter would find it and report on it.
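
To make that concrete, here is a toy sketch of why a static blocklist leaks. The hashtags, the helper name, and the logic are all hypothetical illustrations, not anything ExTwitter actually runs: an exact-match list only catches the strings someone already thought to add, while trivial variants and new coinages sail right past it.

    # Toy illustration only: a static, exact-match hashtag blocklist.
    # The hashtags, names, and logic here are hypothetical and are not
    # how any real platform's ad or moderation systems work.

    BLOCKLIST = {"#whitepower", "#whitepride", "#unitethewhite"}

    def is_blocked(hashtag: str) -> bool:
        """Naive check: exact (case-insensitive) match against a fixed list."""
        return hashtag.lower() in BLOCKLIST

    print(is_blocked("#WhitePower"))   # True: an exact entry is caught

    # Trivial variants and new coinages are not on the list, so they pass.
    for tag in ["#wh1tepower", "#white_power", "#unitethewhyte", "#brandnewslur"]:
        print(tag, is_blocked(tag))    # all False

Real moderation systems layer fuzzy matching, classifiers, and human review on top of lists precisely because exact matching is this easy to evade, and even then, some things slip through.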

But Elon or ExTwitter never gives that response, as it would involve admitting the truth about how content moderation works. Musk and his supporters have long denied this truth as part of their willful misunderstanding of trust & safety work.

In this case, it’s still noteworthy, given that Elon has publicly promised that no “negative/hate tweets” will be monetized.

[Screenshot of Elon Musk’s tweet promising that negative/hate tweets will not be monetized]

Even worse, when organizations like the Center for Countering Digital Hate and Media Matters for America pointed out similar failures to live up to that policy, Musk sued both of those organizations. This now means that whenever anyone else reports on such things, it’s worth calling it out, because the clear intent of Musk suing CCDH and MMfA was to scare off more reporting.

That said, suing small non-profits with limited resources is one thing, but taking on NBC (where ExTwitter’s “official” CEO used to work) is another. NBC had called out similar failings months ago and ExTwitter didn’t sue then. So, either Musk is learning, or someone at the company realizes NBC might be tougher to sue.

Some of this style of reporting is a bit silly and show-offy, but if Elon promises no such ads and sues those who point out it’s still happening, no one should be surprised that more reporters call this out and highlight Musk’s failures.

Judge Experiments With ChatGPT, And It’s Not As Crazy As It Sounds

Would you freak out if you found out a judge was asking ChatGPT a question to help decide a case? Would you think that it was absurd and a problem? Well, one appeals court judge felt the same way… until he dug into the issue in one of the most thoughtful explorations of LLMs I’ve seen (and one of the most amusing concurrences I’ve read).

I recognize that the use of generative AI tools raises plenty of controversy in lots of places, though I think the biggest complaints come from the ridiculously bad and poorly thought out uses of the technology (usually involving over-relying on the tech when it is not at all reliable).

Back in April, I wrote about how I use LLMs at Techdirt, not to replace anyone or to do any writing, but as a brainstorming tool or a sounding board for ideas. I continue to find them useful in that manner, mainly as an additional tool (beyond my existing editors) to push me to really think through the arguments I’m making and how I’m making them.

So I found it somewhat interesting to see Judge Kevin Newsom, of the 11th Circuit, recently issue a concurrence in a case, solely for the point of explaining how he used generative AI tools in thinking about the case, and how courts might want to think (carefully!) about using the tech in the future.

The case itself isn’t all that interesting. It’s a dispute over whether an insurance provider is required under its agreement to cover a trampoline injury case after the landscaper who installed the trampoline was sued. The lower court and the appeals court both say that the insurance agreement doesn’t cover this particular scenario, and therefore, the insurance company has no duty to defend the landscaper.

But Newsom’s concurrence is about his use of generative AI, which he openly admits may be controversial, and begs for people to consider his entire argument:

I concur in the Court’s judgment and join its opinion in full. I write separately (and I’ll confess this is a little unusual) simply to pull back the curtain on the process by which I thought through one of the issues in this case—and using my own experience here as backdrop, to make a modest proposal regarding courts’ interpretations of the words and phrases used in legal instruments.

Here’s the proposal, which I suspect many will reflexively condemn as heresy, but which I promise to unpack if given the chance: Those, like me, who believe that “ordinary meaning” is the foundational rule for the evaluation of legal texts should consider—consider—whether and how AI-powered large language models like OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude might—might—inform the interpretive analysis. There, having thought the unthinkable, I’ve said the unsayable.

Now let me explain myself.

As Judge Newsom notes, a part of the case involved determining what the common understanding of the term “landscaping” meant, as it was not clearly defined in the contract. He also says that, due to a quirk of Alabama law, the final disposition of the case didn’t actually depend on this definitional issue, in part because of the landscaper’s insurance application, where he denied doing any work on recreational equipment.

But that allows Newsom the chance to explore how AI might be useful here, in a case where it wasn’t necessary. And that allows him to be somewhat more informal than you might expect from a judge (though, of course, we all have our favorite examples of judges letting their hair down a bit in opinions).

Importantly, though, that off-ramp wasn’t always obviously available to us—or at least as I saw things, to me. Accordingly, I spent hours and hours (and hours) laboring over the question whether Snell’s trampoline-installation project qualified as “landscaping” as that term is ordinarily understood. And it was midway along that journey that I had the disconcerting thought that underlies this separate writing: Is it absurd to think that ChatGPT might be able to shed some light on what the term “landscaping” means? Initially, I answered my own question in the affirmative: Yes, Kevin, that is positively absurd. But the longer and more deeply I considered it, the less absurd it seemed.

I kind of appreciate the thoroughness with which he admits that there are good reasons to think he’s absurd here — he even thought it himself! — before explaining how he changed his mind.

He admits that he did “the usual” thing when courts try to determine the ordinary meaning of a word, which often involves… looking up what the dictionary or other such reference materials say. So he did a run-through of dictionaries and looked at their definitions of “landscaping.” But he noted that it didn’t really help all that much in determining if the trampoline was landscaping.

Then, he also looked at the pictures associated with the case:

After languishing in definitional purgatory for a while, I decided to look at the case from a different perspective—and I do mean look. The record contains a series of photographs of Snell’s trampoline-related project. Here’s one, which shows his prep work—in particular, the empty sand pit and the below-ground retaining wall that reinforced its borders:

[Photo: the empty sand pit and below-ground retaining wall]

And another, which depicts the finished product, including both the polypropylene mat (the fun part) and the decorative wooden “cap”:

[Photo: the finished in-ground trampoline, with the polypropylene mat and decorative wooden cap]

I’m not particularly proud of it, but I’ll confess that the photos affected the way I thought about the case. Nothing in them really struck me as particularly “landscaping”-y. The problem, of course, was that I couldn’t articulate why. And visceral, gut-instinct decisionmaking has always given me the willies—I definitely didn’t want to be that guy. So in a way, I felt like I was back to square one.

I swear, this is the “bloggiest” Appeals Court concurrence I’ve ever read. And it only gets more bloggy:

And that’s when things got weird. Perhaps in a fit of frustration, and most definitely on what can only be described as a lark, I said to one of my clerks, “I wonder what ChatGPT thinks about all this.” So he ran a query: “What is the ordinary meaning of ‘landscaping’?” Here’s what ChatGPT said in response:

“Landscaping” refers to the process of altering the visible features of an area of land, typically a yard, garden or outdoor space, for aesthetic or practical purposes. This can include activities such as planting trees, shrubs, flowers, or grass, as well as installing paths, fences, water features, and other elements to enhance the appearance and functionality of the outdoor space.

Interesting, for two reasons. First, ChatGPT’s explanation seemed more sensible than I had thought it might—and definitely less nutty than I had feared. Second, it squared with my own impression— informed by my own experience writing, reading, speaking, and listening to American English in the real world—that ordinary people might well use the word “landscaping” (1) to include more than just botanical and other natural improvements and (2) to cover both aesthetic and functional objectives. In fact, several of the examples that ChatGPT flagged—“paths, fences, [and] water features”—jibed with the sorts of things that had sprung to mind when I first started thinking about the case.

But, don’t worry. He didn’t just go with that because it confirmed his priors. He (rightly) recognized that’s not how any of this should work. Again, this reads like a blog post, not a judicial concurrence, but that’s what makes it fun.

Suffice it to say, my interest was piqued. But I definitely didn’t want to fall into the trap of embracing ChatGPT’s definition just because it aligned with my priors. (Bad.) So, in what might have been a mistake—more on that later—we went ahead and asked it the ultimate question: “Is installing an in-ground trampoline ‘landscaping’”? ChatGPT responded as follows:

Yes, installing an in-ground trampoline can be considered a part of landscaping. Landscaping involves altering the visible features of an outdoor area for aesthetic or practical purposes, and adding an in-ground trampoline would modify the appearance and function of the space. It’s a deliberate change to the outdoor environment, often aimed at enhancing the overall landscape and usability of the area.

For good measure, I posed the same questions to Google’s Bard (since replaced by Gemini). The precise details aren’t particularly important, but the upshot is that both models’ answers indicated that the trampoline-related work Snell had performed—the excavation of the pit, the construction of the retaining wall, the installation of the mat, and the addition of the decorative wooden cap—just might be landscaping.

Apparently, it was around this point that he realized the aforementioned “off-ramp” provided by Alabama law meant this question didn’t actually matter to the outcome. But he was intrigued that his experiments had moved him out of the “that’s absurd” category and into the “huh, this might be useful… somehow?” category.

So, he then uses more of the concurrence to explore the pros and cons. I won’t repost all of it, but the strongest argument in favor of considering this is that if the goal is to understand the “common” way in which a word or phrase is used, LLMs trained on the grand corpus of human knowledge might actually provide a better take on the common usage and understanding of such words and phrases.

The ordinary-meaning rule’s foundation in the common speech of common people matters here because LLMs are quite literally “taught” using data that aim to reflect and capture how individuals use language in their everyday lives. Specifically, the models train on a mind-bogglingly enormous amount of raw data taken from the internet—GPT-3.5 Turbo, for example, trained on between 400 and 500 billion words—and at least as I understand LLM design, those data run the gamut from the highest-minded to the lowest, from Hemmingway novels and Ph.D. dissertations to gossip rags and comment threads. Because they cast their nets so widely, LLMs can provide useful statistical predictions about how, in the main, ordinary people ordinarily use words and phrases in ordinary life. So, for instance, and as relevant here, LLMs can be expected to offer meaningful insight into the ordinary meaning of the term “landscaping” because the internet data on which they train contain so many uses of that term, from so many different sources—e.g., professional webpages, DIY sites, news stories, advertisements, government records, blog posts, and general online chatter about the topic.

He’s quick to admit that there are potential problems with this. There are questions about what data LLMs are trained on and how representative that data might be. There are also questions about how usage changes over time, for example. There are plenty of reasons why these results shouldn’t be automatically relied on.

But as I noted in my own explanation of how I’m using LLMs, the key point is to use them as a way to help you think through issues, not to rely on them as some sort of godlike answer machine. And Judge Newsom seems to recognize that. At the very least, it’s possible that an LLM might give you better (or simply different) insight into the “common usage” of a word or phrase than a dictionary editor.

So far as I can tell, researchers powering the AI revolution have created, and are continuing to develop, increasingly sophisticated ways to convert language (and I’m not making this up) into math that computers can “understand.”… The combination of the massive datasets used for training and this cutting-edge “mathematization” of language enables LLMs to absorb and assess the use of terminology in context and empowers them to detect language patterns at a granular level. So, for instance, modern LLMs can easily discern the difference—and distinguish—between the flying-mammal “bat” that uses echolocation and may or may not be living in your attic, on the one hand, and the wooden “bat” that Shohei Otani uses to hit dingers, on the other. See id. And that, as I understand it, is just the tip of the iceberg. LLM predictions about how we use words and phrases have gotten so sophisticated that they can (for better or worse) produce full-blown conversations, write essays and computer code, draft emails to co-workers, etc. And as anyone who has used them can attest, modern LLMs’ results are often sensible—so sensible, in fact, that they can border on the creepy. Now let’s be clear, LLMs aren’t perfect—and again, we’ll discuss their shortcomings in due course. But let’s be equally clear about what they are: high-octane language-prediction machines capable of probabilistically mapping, among other things, how ordinary people use words and phrases in context.

And, he points out, dictionaries may be very good at proffering definitions, but they are still influenced by the team that puts together that dictionary:

First, although we tend to take dictionaries for granted, as if delivered by a prophet, the precise details of their construction aren’t always self-evident. Who exactly compiles them, and by what criteria do the compilers choose and order the definitions within any given entry? To be sure, we’re not totally in the dark; the online version of Merriam-Webster’s, for instance, provides a useful primer explaining “[h]ow . . . a word get[s] into” that dictionary. It describes a process by which human editors spend a couple of hours a day “reading a cross section of published material” and looking for new words, usages, and spellings, which they then mark for inclusion (along with surrounding context) in a “searchable text database” that totals “more than 70 million words drawn from a great variety of sources”—followed, as I understand things, by a step in which a “definer” consults the available evidence and exercises his or her judgment to “decide[] . . . the best course of action by reading through the citations and using the evidence in them to adjust entries or create new ones.”

Such explainers aside, Justice Scalia and Bryan Garner famously warned against “an uncritical approach to dictionaries.” Antonin Scalia & Bryan A. Garner, A Note on the Use of Dictionaries, 16 Green Bag 2d 419, 420 (2013). They highlighted as risks, for instance, that a volume could “have been hastily put together by two editors on short notice, and very much on the cheap,” and that without “consult[ing] the prefatory material” one might not be able “to understand the principles on which the dictionary [was] assembled” or the “ordering of [the] senses” of a particular term.

Judge Newsom wants you to know that he is not trying to slag the dictionaries here (nor to overly praise LLMs). He’s just pointing out some realities about both:

To be clear, I’m neither a nihilist nor a conspiracy theorist, but I do think that we textualists need to acknowledge (and guard against the fact) that dictionary definitions present a few known unknowns…. And while I certainly appreciate that we also lack perfect knowledge about the training data used by cutting-edge LLMs, many of which are proprietary in nature, see supra notes 6 & 8, I think it’s fair to say that we do know both (1) what LLMs are learning from—namely, tons and tons of internet data—and (2) one of the things that makes LLMs so useful—namely, their ability to accurately predict how normal people use language in their everyday lives.

[….]

Anyway, I don’t mean to paint either too grim a picture of our current, dictionary-centric practice—my own opinions are chock full of dictionary definitions, I hope to good effect—or too rosy a picture of the LLMs’ potentiality. My point is simply that I don’t think using LLMs entails any more opacity or involves any more discretion than is already inherent in interpretive practices that we currently take for granted—and in fact, that on both scores it might actually involve less.

And, of course, he has another long section on all the reasons to remain worried about LLMs in this context. He’s not a blind optimist, and he’s not one of those lawyers we’ve written about too often who ChatGPT’d their way into briefs full of totally fake citations. He knows LLMs hallucinate. But, he points out, if “hallucinating” is misrepresenting things, lawyers already do that themselves:

LLMs can “hallucinate.” First, the elephant in the room: What about LLMs’ now-infamous “hallucinations”? Put simply, an LLM “hallucinates” when, in response to a user’s query, it generates facts that, well, just aren’t true—or at least not quite true. See, e.g., Arbel & Hoffman, supra, at 48–50. Remember the lawyer who got caught using ChatGPT to draft a brief when it ad-libbed case citations—which is to say cited precedents that didn’t exist? See, e.g., Benjamin Weiser, Here’s What Happens When Your Lawyer Uses ChatGPT, N.Y. Times (May 29, 2023). To me, this is among the most serious objections to using LLMs in the search for ordinary meaning. Even so, I don’t think it’s a conversation-stopper. For one thing, LLM technology is improving at breakneck speed, and there’s every reason to believe that hallucinations will become fewer and farther between. Moreover, hallucinations would seem to be most worrisome when asking a specific question that has a specific answer—less so, it seems to me, when more generally seeking the “ordinary meaning” of some word or phrase. Finally, let’s shoot straight: Flesh-and-blood lawyers hallucinate too. Sometimes, their hallucinations are good-faith mistakes. But all too often, I’m afraid, they’re quite intentional—in their zeal, attorneys sometimes shade facts, finesse (and even omit altogether) adverse authorities, etc. So at worst, the “hallucination” problem counsels against blind-faith reliance on LLM outputs—in exactly the same way that no conscientious judge would blind-faith rely on a lawyer’s representations.

He also goes deep on some other downsides, including some we already discussed regarding what data the LLMs are trained on. If it’s only online speech, does that leave out speech that is common offline? Does it leave out communities with less access to the internet? Basically, it’s part of the well-known “alignment problem” in generative AI: some level of bias is simply unavoidable. But that doesn’t mean you just shrug and accept things unquestioned.

He even considers that lawyers might try to shop around for different AIs that agree with them the most or, worse, try to “poison” an LLM to get it to agree with a preferred understanding. But, he notes, that seems unlikely to be all that effective.

There’s also this fun bit about the dystopian threat of “robo judges,” which I especially appreciate given that we once created a game, called HAL of Justice, for a legal academic conference, in which everyone who played became a futuristic AI judge handling court cases.

Would the consideration of LLM outputs in interpreting legal texts inevitably put us on some dystopian path toward “robo judges” algorithmically resolving human disputes? I don’t think so. As Chief Justice Roberts recently observed, the law will always require “gray area[]” decisionmaking that entails the “application of human judgment.” Chief Justice John G. Roberts, Jr., 2023 Year-End Report on the Federal Judiciary 6 (Dec. 31, 2023). And I hope it’s clear by this point that I am not—not, not, not—suggesting that any judge should ever query an LLM concerning the ordinary meaning of some word (say, “landscaping”) and then mechanistically apply it to her facts and render judgment. My only proposal—and, again, I think it’s a pretty modest one—is that we consider whether LLMs might provide additional datapoints to be used alongside dictionaries, canons, and syntactical context in the assessment of terms’ ordinary meaning. That’s all; that’s it.

And with that, he closes with an interesting provocation. If you’ve come around to his idea that we should be considering this form of algorithmically-assisted brainstorming, what are the key things we should think about? He highlights that prompt construction will matter a lot. How do you create the “right” prompt? Should you try multiple prompts? Should you use multiple LLMs? Should there be some indication of how “confident” an LLM is in any particular answer? And, as noted earlier, how do you handle words whose meanings have changed over time, if the standard should be the meaning at the time the contract was written?
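
Purely as an illustration of what “multiple prompts, multiple LLMs” might look like in practice, here is a sketch that assumes the OpenAI and Anthropic Python SDKs roughly as I understand them (the model names are placeholders); it is not anything proposed in the concurrence itself. The idea is simply to pose the same ordinary-meaning questions to more than one model and see where the answers agree or diverge.

    # Illustrative sketch: ask multiple LLMs the same ordinary-meaning
    # questions and compare answers side by side. Model names are
    # placeholders; this assumes the OpenAI and Anthropic Python SDKs
    # and that API keys are set in the environment.
    from openai import OpenAI
    import anthropic

    PROMPTS = [
        "What is the ordinary meaning of 'landscaping'?",
        "Is installing an in-ground trampoline 'landscaping'? Answer briefly.",
    ]

    openai_client = OpenAI()                  # reads OPENAI_API_KEY
    anthropic_client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY

    def ask_openai(prompt: str, model: str = "gpt-4o") -> str:
        resp = openai_client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    def ask_anthropic(prompt: str, model: str = "claude-3-5-sonnet-latest") -> str:
        msg = anthropic_client.messages.create(
            model=model,
            max_tokens=400,
            messages=[{"role": "user", "content": prompt}],
        )
        return msg.content[0].text

    for prompt in PROMPTS:
        print("PROMPT:", prompt)
        print("  openai   :", ask_openai(prompt))
        print("  anthropic:", ask_anthropic(prompt))

Per Judge Newsom’s framing, divergence across prompts and models would be a signal to dig further, not a reason to crown any single output the answer.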

And he closes in the most blog-like fashion imaginable.

Just my two cents.

I find this whole discussion fascinating. As I highlighted in my own post about how we use LLMs for brainstorming, I recognize that some people hate the idea outright, while others are too utopian about “AI in everything” without thinking through the potential downsides. It’s nice to see someone recognize that there is a reasonable middle path: these tools have utility in certain, specific scenarios, if used properly and not relied on as a final arbiter of anything.

Also, it’s just kind of fun to read through this quite thoughtful exploration of the topic and how Judge Newsom is considering these issues (fwiw, Newsom has been the author of opinions we’ve agreed with strongly, as well as ones we’ve disagreed with strongly, so it’s not as though I feel one way or the other about this based on his jurisprudence — it’s just a really interesting discussion).

I also appreciate that, unlike so many conversations on tech like generative AI these days, he’s not taking the extremist approach of it being “all good” or “all bad,” and is actually willing to explore the tradeoffs, nuances, and open questions related to the issues. It would be nice if the world saw more of that, just in general.

Clearing Rights For A ‘Non-Infringing’ Collection Of AI Training Media Is Hard

In response to a number of copyright lawsuits about AI training datasets, we are starting to see efforts to build ‘non-infringing’ collections of media for training AI. While I continue to believe that most AI training is covered by fair use in the US and therefore inherently ‘non-infringing’, I think these efforts to build ‘safe’ or ‘clean’ (or whatever other word one might use) data sets are quite interesting. One reason they are interesting is that they can help illustrate why trying to build such a data set at scale is such a challenge.

That’s why I was excited to read about Source.Plus (via a post from Open Future). Source.Plus is a tool from Spawning that purports to aggregate over 37 million “public domain and CC0 images integrated from dozens of libraries and museums.” That’s a lot less than are used to train current generative models, but still a lot of images that could be used for all sorts of useful things.

However, it didn’t take too much poking around on the site to find an illustration of why accurately aggregating nominally openly licensed images at scale can be such a challenge.

The site has plenty of OpenGLAM images that are clearly old enough to be in the public domain. It also has a number of newer images (like photographs) that are said to be licensed under CC0. Curious, I clicked on the first photograph I found on the Source.Plus home page:

[Photograph of a library reading room full of patrons, shot from above]

According to the image page on Source.Plus, the image was from Wikimedia Commons and licensed under a CC0 public domain dedication. It listed the creator as Pixabay and the uploader (to Wikimedia) as Philipslearning.

Clicking through to the wikimedia page reveals that the original source for the image was Pixabay, and that it was uploaded on March 9, 2023 by Philipslearning (an account that appears to no longer exist, for whatever that is worth). The file metadata says that the image itself was taken on May 18, 2016.

Clicking through to the Pixabay page for the image reveals that the image is available under the Pixabay Content License. That license is fairly permissive, but does state:

  • You cannot sell or distribute Content (either in digital or physical form) on a Standalone basis. Standalone means where no creative effort has been applied to the Content and it remains in substantially the same form as it exists on our website.
  • If Content contains any recognisable trademarks, logos or brands, you cannot use that Content for commercial purposes in relation to goods and services. In particular, you cannot print that Content on merchandise or other physical products for sale.
  • You cannot use Content in any immoral or illegal way, especially Content which features recognisable people.
  • You cannot use Content in a misleading or deceptive way.
  • You cannot use any of the Content as part of a trade-mark, design-mark, trade-name, business name or service mark.

Which is to say, not CC0.

However, further investigation into the Pixabay Wikipedia page suggests that images uploaded to Pixabay before January 9, 2019 are actually released under CC0. Section 4 of the Pixabay terms confirms that. The additional information on the image’s Pixabay page confirms that it was uploaded on May 17, 2016 (which matches the metadata added by the unknown Philipslearning on the image’s wikimedia page).

All of which means that this image is, in all likelihood, available under a CC0 public domain dedication. Which is great! Everything was right!

At the same time, the accuracy of that status feels a bit fragile. That kind of fragility is manageable in the context of Wikipedia, or if you are looking for a handful of openly licensed images. But is it likely to hold up at training-set scale, across tens of millions of images? Maybe? What does it mean to be ‘good enough’ in this case? If trainers do require permission from rightsholders to train, and one relied on Source.Plus/Wikimedia for the CC0 status of a work, and that status turned out to be incorrect, should the fact that they thought they were using a CC0 image be relevant to their liability?
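
To see what ‘checking’ even a single image involves, here is a rough sketch of the provenance logic traced above. The record structure and field names are invented for illustration; the only real-world fact baked in is the January 9, 2019 cutoff described in Pixabay’s terms, as discussed in the post.

    # Toy provenance check for a single image record. Field names and the
    # record structure are invented for illustration; the 2019-01-09 cutoff
    # is the Pixabay license change described above.
    from datetime import date

    PIXABAY_CC0_CUTOFF = date(2019, 1, 9)

    def likely_cc0(record: dict) -> bool:
        """Best-effort check based on claimed license and source metadata."""
        if record.get("claimed_license") != "CC0":
            return False
        if record.get("original_source") == "pixabay":
            uploaded = record.get("source_upload_date")
            # Without a verifiable upload date, the CC0 claim stays unproven.
            return uploaded is not None and uploaded < PIXABAY_CC0_CUTOFF
        # Other sources would need their own source-specific rules here.
        return False

    image = {
        "claimed_license": "CC0",
        "original_source": "pixabay",
        "source_upload_date": date(2016, 5, 17),
    }
    print(likely_cc0(image))  # True, but only as reliable as the metadata

Even this toy check is only as good as the metadata it is fed, and as the example above shows, that metadata may pass through two or three intermediaries before it ever reaches someone assembling a training set.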

Michael Weinberg is the Executive Director of NYU’s Engelberg Center for Innovation Law and Policy. This post is republished from his blog under its CC BY-SA 4.0 license. Hero Image: Interieur van de Bodleian Library te Oxford

Automattic’s Turkish Delight: A Rare Win Against Erdogan’s Censorship

The real fight for free speech means more than just doing “that which matches the law.” It means being willing to stand up to extremist authoritarian bullies, even when the odds are stacked against you. Challenging regimes where a single satirical post, a meme, or a critical blog can put someone behind bars requires bravery. But sometimes people have to fight, because it’s the right thing to do.

And every once in a while you win.

The notoriously thin-skinned authoritarian Turkish President Recep Tayyip Erdogan has sued thousands of people for the crime of “insulting” him (or comparing him to Gollum).

He has jailed journalists for criticizing his government and claims that social media (not his own authoritarian rule) is a “threat to democracy” for allowing his critics to speak.

It won’t surprise you to find out that his government is frequently looking to silence people online.

Elon Musk complied, but the maker of WordPress, Automattic (which also hosts Techdirt), fought back. As with ExTwitter, Turkey regularly demands that Automattic remove content critical of Erdogan. After a demand to remove a critical blog in 2015, Automattic went to court. And while it lost initially, basically a decade later it has prevailed:

With the support of the blogger, we swiftly appealed the First Instance Court’s decision on the basis that such a restriction was an undue interference in freedom of expression. Unfortunately (but not surprisingly), this argument was rejected.

At Automattic, we firmly believe in the importance of freedom of expression—and we weren’t about to let this clear attempt at political censorship go by without a fight. Given the nature of the allegations involved, we decided to strike back, and petitioned the Turkish Constitutional Court. While the prospects of success seemed low, we were fully prepared to take the case all the way to the European Court of Human Rights in Strasbourg if necessary.

Eight years after we submitted our original appeal, we finally received word that the Constitutional Court had accepted our arguments, and unanimously concluded that both the user’s freedom of expression (as provided for under Article 26 of the Turkish Constitution) and their right to an effective remedy (as provided for under Article 40) had been violated. 

According to Automattic, this is a rare and surprising outcome. Turkish courts have rejected the company’s similar challenges before, but it hasn’t stopped fighting these fights, and, at least in this case, it succeeded.

Do not underestimate the significance of this outcome. Victories of this kind in Turkey are rare, and prior to this case, we had spent almost $100,000 USD appealing 14 different takedown orders, without any success.

At Tech Policy Press, Burak Haylamaz explores how Turkey’s “Internet Law” has been widely abused:

…the Turkish government has employed various tactics over the last decade, including content or website access blocking and removal, bandwidth restrictions, and internet throttling to censor critical media and quell government criticism. By the end of 2022, a total of 712,558 websites and domain names, access to 150,000 URL addresses, 9,800 Twitter accounts, 55,000 tweets, 16,585 YouTube videos, 12,000 Facebook posts, and 11,150 Instagram posts were blocked in Türkiye. These decisions are imposed by various authorities, most effectively through recourse mechanisms before the criminal judgeships of peace, which are carefully framed within the legal system.

It’s especially notable that the main law Turkey relies on for this broad censorship was directly modeled on similar “internet regulations” in Europe (especially Germany’s NetzDG law, which partially inspired the DSA across the EU).

This ruling in favor of Automattic is significant because it puts at least some guardrails on the government’s abuse of the law. However, there are limits. As Haylamaz explains, the Constitutional Court had called out the censorial problems with the law years ago, but left it up to the Turkish Parliament to address, which it did not do.

Finally, with no progress, the Constitutional Court again stepped up to call out how these laws conflict with free expression and to declare them unconstitutional, though for some reason the law stays in place until October.

As Haylamaz further explains, this ruling on the law hasn’t stopped Turkish officials from issuing more blocking orders:

One might assume that the criminal judgeships of peace would cease issuing access-blocking and/or content removal decisions based on Article 9 of the Internet Law, or at least consider the interests of online platforms and content authors, especially after the article was deemed unconstitutional. However, this is simply not the case in Turkish politics and courtrooms. The criminal judgeships of peace continue to issue access-blocking and/or content removal decisions based on Article 9 of the Internet Law, despite its unconstitutional status. This comes as no surprise to many, especially after President Recep Tayyip Erdoğan expressed his discomfort with the Constitutional Court’s internet-related decisions and announced his intention to closely monitor them.

It’s good to see Automattic taking on the impossible task of fighting censorial, authoritarian governments and winning. It would be nice to see more companies follow suit.

When Humanity Gets Messy, Sometimes the Best Tech Solution Is To Do Nothing

Give people ways to share images and videos with each other, and people will quickly push the limits. It’s what people do.

There’s been a slightly amusing story making the rounds these past few days: a digital “video portal” was set up to allow people in New York and Dublin to communicate with each other. And people… did exactly what you’d expect some people to do when given a spot to, um, express themselves:

However, it has also attracted a lot of unwanted attention. Some people on the Dublin side have been putting up pornographic images to the camera while one person posted video footage of the Twin Towers on fire during 9/11.

The problems have not been confined to the Dublin side. An OnlyFans model showed her breasts to onlookers in Dublin and then posted it on TikTok and Instagram. The New York portal was closed down for a time as a result.

The portal has now been closed so officials can “figure out” what to do about the fact that, sometimes, people will do wacky, crazy, or awful things if given a platform to do them.

I tend to side with Katie Notopoulos, whose take is to suck it up and open the portal back up and just revel in human absurdities.

This is terrible. The portal should reopen! In fact, we should have portals all over the country, all over the world — connecting two random places. We should have a portal between Miami and Tokyo, Florence and Dubai, Delhi and Stockholm. Currently, there’s a portal between cities in Lithuania and Poland, but let’s dream even bigger.

As she notes, at a time when people think tech is just awful, this was just fun, even if some people were perhaps less than elegant in how they used the portal.

But the portal is a case of technology that’s just pure joy.

It’s simple, there’s nothing too deep to think about. It’s not even “new” tech — video streaming between two locations is not exactly novel, although I suppose “it’s really big” differentiates it from, say, FaceTime. The situation is what makes it different — video chatting technology is usually personal, used at home or in your office conference room. Putting it in a public space, with other strangers — that makes it fun and special.

It is pure and human to be curious about strangers in another country, to be excited about the idea of seeing someone else across the screen, knowing they can see you, too. It’s fun. It’s delightful.

I mean, the story does remind me of the simple fact that if you allow people to communicate, you have to consider that some of them are going to do disturbing and awful things. And anyone managing a system that lets people communicate needs to at least consider what to do about that.

The weirdest part of this story is that it appears the people who set up the portal didn’t consider this or think about how they were going to handle these kinds of scenarios. It’s amazing that they seem to have been taken by surprise by all of this.

But sometimes (perhaps even most of the time), the answer on how you deal with the messiness of humanity communicating can simply be: nothing. Do nothing. Recognize that sometimes people are going to be people, and get on with your life.

Sure, there may be the occasional offensive image or inappropriate behavior. But that’s life. People can be weird, wild, and sometimes downright unpleasant. However, the vast majority of interactions are likely to be positive, fun, and enriching. Connecting with strangers across the globe, even briefly, can expand our horizons and remind us of our shared humanity. So let’s embrace the chaos, the silliness, and the serendipity. Open the portals and let people be people.

Yet Another Study Finds That Internet Usage Is Correlated With GREATER Wellbeing, Not Less

You’ve all heard the reports about how the internet, social media, and phones are apparently destroying everyone’s well-being and mental health. Hell, there’s a best-selling book and its author making the rounds basically everywhere, insisting that the internet and phones are literally “rewiring” kids’ minds to be depressed. We’ve pointed out over and over again that the research does not appear to support this finding.

And, really, if the data supported such a finding, you’d think that a new study looking at nearly 2 and a half million people across 168 countries would… maybe… find such an impact?

Instead, the research seems to suggest much more complex relationships, in which, for many people, the ability to connect with others and with information is largely beneficial. For many others, it’s basically neutral. And for a small percentage of people, there does appear to be a negative relationship, which we should take seriously. However, it often appears that the negative relationship is one where those already dealing with mental health or other struggles turn to the internet when they have nowhere else to go, and may do so in less than helpful ways.

The Oxford Internet Institute has just released another new study by Andrew Przybylski and Matti Vuorre, showing that there appears to be a general positive association between internet usage and wellbeing. You can read the full study here, given that it has been published as open access (and under a CC BY 4.0 license). We’ve also embedded it below if you just want to read it there.

As with previous studies done by Vuorre and Przybylski, this one involves looking at pretty massive datasets, rather than very narrow studies of small sample sizes.

We examined whether having (mobile) internet access or actively using the internet predicted eight well-being outcomes from 2006 to 2021 among 2,414,294 individuals across 168 countries. We first queried the extent to which well-being varied as a function of internet connectivity. Then, we examined these associations’ robustness in a multiverse of 33,792 analysis specifications. Of these, 84.9% resulted in positive and statistically significant associations between internet connectivity and well-being. These results indicate that internet access and use predict well-being positively and independently from a set of plausible alternatives.

Now, it’s important to be clear here, as we have been with studies cited for the opposite conclusion: this is a correlational study, and is not suggesting a direct causal relationship between having internet access and wellbeing. But, if (as folks on the other side claim) internet access was truly rewiring brains and making everyone depressed, it’s difficult to see how then we would see these kinds of outcomes.

People like Jonathan Haidt have argued that these kinds of studies obscure the harm done to teens (and especially teenage girls), as a way of dismissing them. However, it’s nice to see the researchers here try to tease out possible explanations, to make sure such things weren’t hidden in the data:

Because of the large number of predictors, outcomes, subgroups to analyze, and potentially important covariates that might theoretically explain observed associations, we sought out a method of analysis to transparently present all the analytical choices we made and the uncertainty in the resulting analyses. Multiverse analysis (Steegen et al., 2016) was initially proposed to examine and transparently present variability in findings across heterogeneous ways of treating data before modeling them (see also Simonsohn et al., 2020). We therefore conducted a series of multiverse analyses where we repeatedly fitted a similar model to potentially different subgroups of the data using potentially different predictors, outcomes, and covariates.
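
For readers who haven’t run into the technique, here is a bare-bones sketch of what a multiverse (or specification-curve) analysis looks like in code. The data frame, variable names, and covariate list are invented for illustration; the real study varied predictors, outcomes, subgroups, and covariates across 33,792 specifications.

    # Bare-bones multiverse / specification-curve sketch. The data frame
    # 'df', the outcome 'wellbeing', the predictor 'internet_use', and the
    # covariates are all invented for illustration.
    from itertools import combinations

    import pandas as pd
    import statsmodels.formula.api as smf

    COVARIATES = ["income", "education", "health", "relationship_status"]

    def run_multiverse(df: pd.DataFrame):
        results = []
        # Fit the same basic model under every combination of covariates.
        for k in range(len(COVARIATES) + 1):
            for combo in combinations(COVARIATES, k):
                formula = "wellbeing ~ internet_use"
                if combo:
                    formula += " + " + " + ".join(combo)
                fit = smf.ols(formula, data=df).fit()
                results.append({
                    "spec": formula,
                    "coef": fit.params["internet_use"],
                    "pvalue": fit.pvalues["internet_use"],
                })
        out = pd.DataFrame(results)
        # Report the share of specifications that are positive and significant,
        # rather than any single cherry-picked estimate.
        share_positive_sig = ((out["coef"] > 0) & (out["pvalue"] < 0.05)).mean()
        return out, share_positive_sig

    # out, share = run_multiverse(df)
    # print(f"{share:.1%} of specifications were positive and significant")

The paper’s 84.9% figure is exactly this kind of summary statistic, just computed over a much larger grid of specifications.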

That allowed them to explore questions regarding different subgroups. And while they did find one “negative association” among young women, it was not in the way you might have heard or would have thought of. There was a “negative association” between “community well-being” and internet access:

We did, however, observe a notable group of negative associations between internet use and community well-being. These negative associations were specific to young (15–24-year-old) women’s reports of community well-being. They occurred across the full spectrum of covariate specifications and were thereby not likely driven by a particular model specification. Although not an identified causal relation, this finding is concordant with previous reports of increased cyberbullying (Przybylski & Bowes, 2017) and more negative associations between social media use and depressive symptoms (Kelly et al., 2018; but see Kreski et al., 2021). Further research should investigate whether low community well-being drives engagement with the internet or vice versa.

This took me a moment to understand, but after reading the details, it’s showing that (1) if you were a 15- to 24-year-old woman, and (2) if you said in the survey that you really liked where you live, then (3) you were less likely to have accessed the internet over the past seven days. That was the only significant finding of that nature. That same cohort did not show a negative correlation for other areas of well-being, around fulfilment and such.

To be even more explicit: the “negative association” was only with young women who answered that they strongly agree with the statement “the city or area where you live is a perfect place for you” and then answered the question “have you used the internet in the past seven days.” There were many other questions regarding well-being that didn’t have such a negative association. This included things like rating how their life was from “best” to “worst” on a 10 point scale, and whether or not respondents “like what you do every day.”

So, what this actually appears to support is the idea that if you are happy with where you live (happy in your community), then you may be less focused on the internet. But for just about every other measure of well-being, internet access is strongly and positively correlated. There are a few possible explanations for this, but at the very least it might support the theory that studies finding both mental health problems and excessive internet usage are picking up problems that stem from outside the internet, with people turning to the internet for lack of anywhere else to turn.

The authors are careful to note the limitations of their findings, and recognize that human beings are complex:

Nevertheless, our conclusions are qualified by a number of factors. First, we compared individuals to each other. There are likely myriad other features of the human condition that are associated with both the uptake of internet technologies and well-being in such a manner that they might cause spurious associations or mask true associations. For example, because a certain level of income is required to access the internet and income itself is associated with well-being, any simple association between internet use and well-being should account for potential differences in income levels. While we attempted to adjust for such features by including various covariates in our models, the data and theory to guide model selection were both limited.

Second, while between-person data such as we studied can inform inferences about average causal effects, longitudinal studies that track individuals and their internet use over time would be more informative in understanding the contexts of how and why an individual might be affected by internet technologies and platforms (Rohrer & Murayama, 2021).

Third, while the constructs that we studied represent the general gamut of well-being outcomes that are typically studied in connection to digital media and technology, they do not capture everything, nor are they standard and methodically validated measures otherwise found in the psychological literature. That is, the GWP data that we used represent a uniquely valuable resource in terms of its scope both over time and space. But the measurement quality of its items and scales might not be sufficient to capture the targeted constructs in the detailed manner that we would hope for. It is therefore possible that there are other features of well-being that are differently affected by internet technologies and that our estimates might be noisier than would be found using psychometrically validated instruments. Future work in this area would do well in adopting a set of common validated measures of well-being (Elson et al., 2023).

On the whole, it’s great to see more research and more data here, suggesting that, yes, there is a very complex relationship between internet access and wellbeing, but also that it should be increasingly difficult to claim internet access is, overall, negative and harmful, no matter what the popular media and politicians tell you.

We Looked At All The Recent Evidence On Mobile Phone Bans In Schools – This Is What We Found

The Conversation

Mobile phones are currently banned in all Australian state schools and many Catholic and independent schools around the country. This is part of a global trend over more than a decade to restrict phone use in schools.

Australian governments say banning mobile phones will reduce distractions in class, allow students to focus on learning, improve student wellbeing and reduce cyberbullying.

But previous research has shown there is little evidence on whether the bans actually achieve these aims.

Many jurisdictions that restricted phones in schools before Australia did so have since reversed their decisions. For example, several school districts in Canada implemented outright bans, then revoked them because they were too hard to maintain. They now allow teachers to make decisions that suit their own classrooms.

A ban was similarly revoked in New York City, partly because bans made it harder for parents to stay in contact with their children.

What does recent research say about phone bans in schools?

Our study

We conducted a “scoping review” of all published and unpublished global evidence for and against banning mobile phones in schools.

Our review, which is pending publication, aims to shed light on whether mobile phones in schools impact academic achievement (including paying attention and distraction), students’ mental health and wellbeing, and the incidence of cyberbullying.

A scoping review is done when researchers know there aren’t many studies on a particular topic. This means researchers cast a very inclusive net, to gather as much evidence as possible.

Our team screened 1,317 articles and reports as well as dissertations from masters and PhD students. We identified 22 studies that examined schools before and after phone bans. There was a mix of study types. Some looked at multiple schools and jurisdictions, some looked at a small number of schools, some collected quantitative data, others sought qualitative views.

In a sign of just how little research there is on this topic, 12 of the studies we identified were done by master’s and doctoral students. This means they are not peer-reviewed, but were done by research students under the supervision of an academic in the field.

But in a sign of how fresh this evidence is, almost half the studies we identified were published or completed since 2020.

The studies looked at schools in Bermuda, China, the Czech Republic, Ghana, Malawi, Norway, South Africa, Spain, Sweden, Thailand, the United Kingdom and the United States. None of them looked at schools in Australia.

Academic achievement

Our research found four studies that identified a slight improvement in academic achievement when phones were banned in schools. However, two of these studies found this improvement only applied to disadvantaged or low-achieving students.

Some studies compared schools where there were partial bans against schools with complete bans. This is a problem because it confuses the issue.

But three studies found no differences in academic achievement, whether there were mobile phone bans or not. Two of these studies used very large samples. This master’s thesis looked at 30% of all schools in Norway. Another study used a nationwide cohort in Sweden. This means we can be reasonably confident in these results.

Mental health and wellbeing

Two studies in our review, including this doctoral thesis, reported mobile phone bans had positive effects on students’ mental health. However, both studies used teachers’ and parents’ perceptions of students’ wellbeing (the students were not asked themselves).

Two other studies showed no differences in psychological wellbeing following mobile phone bans. However, three studies reported more harm to students’ mental health and wellbeing when they were subjected to phone bans.

The students reported they felt more anxious without being able to use their phone. This was especially evident in one doctoral thesis carried out when students were returning to school after the pandemic, having been very reliant on their devices during lockdown.

So the evidence for banning mobile phones for the mental health and wellbeing of students is inconclusive, and based only on anecdotes or perceptions rather than the recorded incidence of mental illness.

Bullying and cyberbullying

Four studies reported a small reduction in bullying in schools following phone bans, especially among older students. However, the studies did not specify whether or not they were talking about cyberbullying.

Teachers in two other studies, including this doctoral thesis, reported they believed having mobile phones in schools increased cyberbullying.

But two other studies showed the number of incidents of online victimisation and harassment was greater in schools with mobile phone bans compared with those without bans. These studies didn’t collect data on whether the online harassment was happening inside or outside school hours.

The authors suggested this might be because students saw the phone bans as punitive, which made the school climate less egalitarian and less positive. Other research has linked a positive school climate with fewer incidents of bullying.

There is no research evidence that students do or don’t use other devices to bully each other if there are phone bans. But it is of course possible for students to use laptops, tablets, smartwatches or library computers to conduct cyberbullying.

Even if phone bans were effective, they would not address the bulk of school bullying. A 2019 Australian study found 99% of students who were cyberbullied were also bullied face-to-face.

What does this tell us?

Overall, our study suggests the evidence for banning mobile phones in schools is weak and inconclusive.

As Australian education academic Neil Selwyn argued in 2021, the impetus for mobile phone bans says more about MPs responding to community concerns than about research evidence.

Politicians should leave this decision to individual schools, which have direct experience of the pros or cons of a ban in their particular community. For example, a community in remote Queensland could have different needs and priorities from a school in central Brisbane.

Mobile phones are an integral part of our lives. We need to be teaching children about appropriate use of phones, rather than simply banning them. This will help students learn how to use their phones safely and responsibly at school, at home and beyond.

Marilyn Campbell, Professor, School of Early Childhood & Inclusive Education, Queensland University of Technology and Elizabeth J Edwards, Associate Professor in Education, The University of Queensland

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Congressional Committee Threatens To Investigate Any Company Helping TikTok Defend Its Rights

“Are you now, or have you ever been, working with TikTok to help defend its rights?”

That McCarthyism-esque question is apparently being asked by members of Congress to organizations that have been working with TikTok to defend its Constitutional rights.

Does anyone think it’s right for Congress to threaten to punish organizations for working with TikTok? Does that sound like a First Amendment violation to you? Because it sure does to me.

Over the last year or so, we’ve been hearing a lot of talk out of Congress on two specific issues: the supposed horrors of government officials suppressing speech and, at the same time, the supposed horrors of a successful social media app that has ties to China.

Would it surprise you to find that there are some hypocrites in Congress about all of this? Shocking, I know.

We already highlighted how a bunch of members of Congress both signed an amicus brief in the Murthy case saying that governments should never, ever, interfere with speech and also voted to ban TikTok. But, would those same members of Congress who are so worried about “jawboning” by government officials to suppress speech also then use the power of Congress to silence voices trying to defend TikTok?

Yeah, you know where this is going.

NetChoice has been the main trade group that has been defending against all the terrible laws being thrust upon the internet over the last few years. Often people dismiss NetChoice as “big tech” or “the tech industry,” but in my experience they’ve been solidly standing up for good and important internet speech policies. NetChoice has been structured to be independent of its members (i.e., NetChoice itself decides what cases it takes on, not its members, which sometimes means members dislike the causes and cases NetChoice takes on).

On Wednesday of this week, NetChoice’s membership roster looked like this:

[Image: NetChoice’s membership roster as of Wednesday, with TikTok highlighted]

I highlighted TikTok in particular, because on Thursday, NetChoice’s membership roster looked like this:

[Image: NetChoice’s membership roster as of Thursday, with TikTok no longer listed]

TikTok is missing.

Why? Well, because members of Congress threatened to investigate NetChoice if it didn’t drop TikTok from its roster. Politico had some of this story last night, claiming that there was pressure from Congress to drop TikTok:

“The Select Committee’s brazen efforts to intimidate private organizations for associating with a company with 170 million American users is a clear abuse of power that smacks of McCarthyism,” TikTok spokesperson Alex Haurek said in a statement, referring to the House China panel. “It’s a sad day when Members of Congress single out individual companies without evidence while trampling on constitutional rights and the democratic process,” Haurek added. A spokesperson for NetChoice didn’t respond to a request for comment.

The two people told Daniel that NetChoice faced pressure from the office of House Majority Leader Steve Scalise (R-La.) to dump TikTok. A third person said that while no threat was made, NetChoice was told that the Select Committee on China would be investigating groups associated with TikTok and decided to sever ties as a result.

I’ve heard that the claim there was “no threat” is not accurate. As the rest of that paragraph makes clear, there was very much an implied threat that Congress would investigate organizations working with TikTok to defend its rights. I’m also hearing that others, like PR agencies and lobbying organizations that work with TikTok, are now facing similar threats from Congress.

Indeed, despite the “denial” of any threat, Politico gets the “House Select Committee on the CCP” to admit that it will launch an investigation into any organization that helps TikTok defend its rights:

“Significant bipartisan majorities in both the House and the Senate deemed TikTok a grave national security threat and the President signed a bill into law requiring them to divest from the CCP,” a Scalise spokesperson told PI. “It should not come as a surprise to those representing TikTok that as long as TikTok remains connected to the CCP, Congress will continue its rigorous oversight efforts to safeguard Americans from foreign threats.”

Guys, that’s not “rigorous oversight” or “safeguarding Americans.” That’s using threats of bogus, costly investigations to force companies to stop working with TikTok and helping it defend its rights under the Constitution. That seems to be a hell of a lot more like “jawboning” and a much bigger First Amendment problem than the Biden administration complaining publicly that it didn’t like how Facebook was handling COVID misinformation.

Remember, this is what the GOP Congressional folks said when they filed their amicus in the Murthy case:

Wielding threats of intervention, the executive branch of the federal government has engaged in a sustained effort to coerce private parties into censoring speech on matters of public concern. On issue after issue, the Biden Administration has distorted the free marketplace of ideas promised by the First Amendment, bringing the weight of federal authority to bear on any speech it dislikes

Isn’t that… exactly what these Congressional committees are now doing themselves? Except, much worse? Because the threats are much more direct, and the punitive nature of not obeying is even clearer and more directly tied to the speech at issue?

This sure seems to be exactly unconstitutional “jawboning.”

Whether or not you believe that there are real risks from China, it seems absolutely ridiculous that Congress is now basically following an authoritarian playbook, threatening organizations merely for associating with a company and/or defending its rights.

It undermines the principles of free speech and association, allowing governmental entities to dictate what organizations can and cannot support. This overreach of power directly chills advocacy efforts and hinders the protection of fundamental rights.

Link Taxes Backfire: Canadian News Outlets Lose Out, Meta Unscathed

As California (and possibly Congress) once again considers instituting link taxes in the US, it’s worth highlighting that our prediction about the Canadian link tax has now been shown to be correct. It didn’t harm Meta one bit to remove news.

The entire premise behind these link taxes/bargaining codes is that social media gets “so much free value” from news orgs that they must pay up. Indeed, a ridiculously bad study that came out last fall, and was widely passed around, argued that Google and Meta had stripped $14 billion worth of value from news orgs and should offer to pay up that amount.

$14 billion. With a “b.”

No one who understands anything believes that’s true. Again, social media is not taking value away from news orgs. It’s giving them free distribution and free circulation, things that, historically, cost media organizations a ton of money.

But now a study in Canada is showing that social media companies get basically zero value from news links. Meta, somewhat famously, blocked links to news in Canada in response to that country’s link tax. This sent many observers into a tizzy, claiming that it was somehow both unfair for Meta to link to news orgs AND to not link to news orgs.

Yes, media organizations are struggling. Yes, the problems facing the news industry are incredibly important to solve to help protect democracy. Yes, we should be thinking and talking about creative solutions for funding.

But, taxing links to force internet companies to pay media companies is simply a terrible solution.

Thanks to Meta not giving in to Canada and instead blocking links to news, we now have some data on what happens when a link tax approach is put in place. New research from McGill University and the University of Toronto has found that Meta didn’t lose much engagement from the lack of news links. But media orgs lost out big time.

Laura Hazard Owen has a good summary at Nieman Lab.

“We expected the disappearance of news on Meta platforms to have caused a major shock to the Canadian information ecosystem,” the paper’s authors — Sara Parker, Saewon Park, Zeynep Pehlivan, Alexei Abrahams, Mika Desblancs, Taylor Owen, Jennie Phillips, and Aengus Bridgman — write. But the shock appears to have been one-sided. While “the ban has significantly impacted Canadian news outlets,” the authors write, “Meta has deprived users of the affordance of news sharing without suffering any loss in engagement of their user base.”

What the researchers found is that users are still using Meta platforms just as much, and still getting news from those platforms. They’re just no longer following links back to the sources. This has done particular harm to smaller local news organizations:

This remarkable stability in Meta platform users’ continued use of the platforms for politics and current affairs anticipates the findings from the detailed investigation into engagement and posting behaviour of Canadians. We find that the ban has significantly impacted Canadian news outlets but had little impact on Canadian user behaviour. Consistent with the ban’s goal, we find a precipitous decline in engagement with Canadian news and consequently the posting of news content by Canadian news outlets. The effect is particularly acute for local news outlets, while some news outlets with national or international scope have been able to make a partial recovery after a few months. Additionally, posting by and engagement with alternative sources of information about Canadian current affairs appears unmoved by the ban. We further find that Groups focused on Canadian politics enjoy the same frequency of posting and diversity of engagement after the ban as before. While link sharing declines, we document a complementary uptick in the sharing of screenshots of Canadian news in a sample of these political Groups, and confirm by reviewing a number of such posts where users deliberately circumvented the link-sharing ban by posting screenshots. Although the screenshots do not compensate for the total loss of link sharing, the screenshots themselves garner the same total amount of engagement as news links previously had.

I feel like I need to keep pointing this out, but: when you tax something, you get less of it. Canada has decided to tax news links, so they get fewer news links. But users still want to talk about news, so they’re replacing the links with screenshots and discussions. And it’s pretty impressive how quickly users switched over:

[Image: chart showing how quickly users switched from sharing news links to sharing screenshots]

Meaning the only ones losing out here are the news publishers themselves, who claimed to have wanted this law so badly.

The impact on Canadian news orgs appears to be quite dramatic:

[Image: chart showing the sharp decline in engagement with Canadian news outlets after the ban]

But the activity on Meta platform groups dedicated to news doesn’t seem to have changed that much:

[Image: chart showing largely unchanged activity in Meta groups dedicated to news]

If “news links” were so valuable to Meta, then, um, wouldn’t that have declined once Meta blocked links?

One somewhat incredible finding in the paper is that “misinformation” links also declined after Meta banned news links:

Surprisingly, the number of misinformation links in political and local community Groups decreased after the ban.

Political Groups:

  • Prior to the ban: 2.8% of links (5612 out of 198,587 links) were misinformation links
  • After the ban: 1.4% of links (5306 out of 379,202 links) were misinformation links

Though the paper admits that this could just be a function of users recognizing they can’t share links.
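
As a quick sanity check on those figures, here is the arithmetic recomputed from the counts the researchers report (this is back-of-the-envelope math, not code from the study itself):

# Back-of-the-envelope check of the misinformation link shares quoted above.
# The counts come straight from the summary; only the percentages are recomputed.
before_misinfo, before_total = 5_612, 198_587
after_misinfo, after_total = 5_306, 379_202

before_share = before_misinfo / before_total * 100   # ~2.8%
after_share = after_misinfo / after_total * 100      # ~1.4%

print(f"Before the ban: {before_share:.1f}% of links were misinformation")
print(f"After the ban:  {after_share:.1f}% of links were misinformation")

# Note the raw count barely moved (5,612 vs 5,306); the share halved mostly
# because the total number of links shared nearly doubled after the ban.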

This is still quite early research, but it is notable, especially given that the US continues to push for this kind of law as well. Maybe, just maybe, we should take a step back and recognize that taxing links is not helpful for news orgs and misunderstands the overall issue.

It’s becoming increasingly clear that the approach taken by Canada and other countries, forcing platforms like Meta to pay for news links, is misguided and counterproductive. Rather than helping news organizations, these laws are reducing their reach and engagement while doing little to address the underlying challenges facing the industry. Policymakers need to take a more nuanced, evidence-based approach that recognizes the complex dynamics of the online news ecosystem.

Bipartisan Group Of Senators Introduce New Terrible ‘Protect The Kids Online’ Bill

Apparently, the world needs even more terrible bills that let ignorant senators grandstand to the media about how they’re “protecting the kids online.” There’s nothing more serious to work on than that. The latest bill comes from Senators Brian Schatz and Ted Cruz (with assists from Senators Chris Murphy, Katie Britt, Peter Welch, Ted Budd, John Fetterman, Angus King, and Mark Warner). This one is called the “Kids Off Social Media Act” (KOSMA) and it’s an unconstitutional mess built on a long list of debunked and faulty premises.

It’s especially disappointing to see this from Schatz. A few years back, I know his staffers would regularly reach out to smart people on tech policy issues in trying to understand the potential pitfalls of the regulations he was pushing. Either he’s no longer doing this, or he is deliberately ignoring their expert advice. I don’t know which one would be worse.

The crux of the bill is pretty straightforward: it would be an outright ban on social media accounts for anyone under the age of 13. As many people will recognize, we kinda already have a “soft” version of that because of COPPA, which puts much stricter rules on sites directed at those under 13. Because most sites don’t want to deal with those stricter rules, they officially limit account creation to those over the age of 13.

In practice, this has been a giant mess. Years and years ago, Danah Boyd pointed this out, talking about how the “age 13” bit is a disaster for kids, parents, and educators. Her research showed that all this generally did was to have parents teach kids that “it’s okay to lie,” as parents wanted kids to use social media tools to communicate with grandparents. Making that “soft” ban a hard ban is going to create a much bigger mess and prevent all sorts of useful and important communications (which, yeah, is a 1st Amendment issue).

The reasons Schatz puts forth for the bill are just… wrong.

No age demographic is more affected by the ongoing mental health crisis in the United States than kids, especially young girls. The Centers for Disease Control and Prevention’s Youth Risk Behavior Survey found that 57 percent of high school girls and 29 percent of high school boys felt persistently sad or hopeless in 2021, with 22 percent of all high school students—and nearly a third of high school girls—reporting they had seriously considered attempting suicide in the preceding year.

Gosh. What was happening in 2021 with kids that might have made them feel hopeless? Did Schatz and crew simply forget about the fact that most kids were under lockdown and physically isolated from friends for much of 2021? And that there were plenty of other stresses, including millions of people, including family members, dying? Noooooo. Must be social media!

Studies have shown a strong relationship between social media use and poor mental health, especially among children.

Note the careful word choice here: “strong relationship.” They won’t say a causal relationship because studies have not shown that. Indeed, as the leading researcher in the space has noted, there continues to be no real evidence of any causal relationship. The relationship appears to work the other way: kids who are dealing with poor mental health and who are desperate for help turn to the internet and social media because they’re not getting help elsewhere.

Maybe offer a bill that helps kids get access to more resources that help them with their mental health, rather than taking away the one place they feel comfortable going? Maybe?

From 2019 to 2021, overall screen use among teens and tweens (ages 8 to 12) increased by 17 percent, with tweens using screens for five hours and 33 minutes per day and teens using screens for eight hours and 39 minutes.

I mean, come on Schatz. Are you trolling everyone? Again, look at those dates. WHY DO YOU THINK that screen time might have increased 17% for kids from 2019 to 2021? COULD IT POSSIBLY BE that most kids had to do school via computers and devices at home, because there was a deadly pandemic making the rounds?

Maybe?

Did Schatz forget that? I recognize that lots of folks would like to forget the pandemic lockdowns, but this seems like a weird way to manifest that.

I mean, what a weird choice of dates to choose. I’m honestly kind of shocked that the increase was only 17%.

Also, note that the data presented here isn’t about an increase in social media use. It could very well be that the 17% increase was Zoom classes.

Based on the clear and growing evidence, the U.S. Surgeon General issued an advisory last year, calling for new policies to set and enforce age minimums and highlighting the importance of limiting the use of features, like algorithms, that attempt to maximize time, attention, and engagement.

Wait. You mean the same Surgeon General’s report that denied any causal link between social media and mental health (which you falsely claim has been proved) and noted just how useful and important social media is to many young people?

From that report, which Schatz misrepresents:

Social media can provide benefits for some youth by providing positive community and connection with others who share identities, abilities, and interests. It can provide access to important information and create a space for self-expression. The ability to form and maintain friendships online and develop social connections are among the positive effects of social media use for youth. These relationships can afford opportunities to have positive interactions with more diverse peer groups than are available to them offline and can provide important social support to youth. The buffering effects against stress that online social support from peers may provide can be especially important for youth who are often marginalized, including racial, ethnic, and sexual and gender minorities. For example, studies have shown that social media may support the mental health and well-being of lesbian, gay, bisexual, asexual, transgender, queer, intersex and other youths by enabling peer connection, identity development and management, and social support. Seven out of ten adolescent girls of color report encountering positive or identity-affirming content related to race across social media platforms. A majority of adolescents report that social media helps them feel more accepted (58%), like they have people who can support them through tough times (67%), like they have a place to show their creative side (71%), and more connected to what’s going on in their friends’ lives (80%). In addition, research suggests that social media-based and other digitally-based mental health interventions may also be helpful for some children and adolescents by promoting help-seeking behaviors and serving as a gateway to initiating mental health care.

Did Schatz’s staffers just, you know, skip over that part of the report or nah?

The bill also says that companies must not allow algorithmic targeting of content to anyone under 17. This is also based on the widely believed myth that algorithmically recommended content is somehow inherently harmful. No studies have legitimately shown that of current algorithms. Indeed, a recent study showed that removing algorithmic targeting leads to people being exposed to more disinformation.

Is this bill designed to force more disinformation on kids? Why would that be a good idea?

Yes, some algorithms can be problematic! About a decade ago, algorithms that tried to optimize solely for “engagement” definitely created some bad outcomes. But most such algorithms haven’t been designed that way for a decade. On most social media platforms, the algorithms now take a variety of different factors into account, because the companies know that optimizing just for engagement leads to bad outcomes.

Then the bill tacks on Cruz’s bill to require schools to block social media. There’s an amusing bit when reading the text of that part of the law. It says that you have to block social media on “federally funded networks and devices” but also notes that it does not prohibit “a teacher from using a social media platform in the classroom for educational purposes.”

But… how are they going to access those if the school is required by law to block access to such sites? Most schools are going to do a blanket ban, and teachers are going to be left to do what? Show kids useful YouTube science videos on their phones? Or maybe some schools will implement a special teacher code that lets them bypass the block. And by the end of the first week of school half the kids in the school will likely know that password.

What are we even doing here?

Schatz has a separate page hyping up the bill, and it’s even dumber than the first one above. It repeats some of the points above, though this time linking to Jonathan Haidt, whose work has been trashed left, right, and center by actual experts in this field. And then it gets even dumber:

Big Tech knows it’s complicit – but refuses to do anything about it…. Moreover, the platforms know about their central role in turbocharging the youth mental health crisis. According to Meta’s own internal study, “thirty-two percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse.” It concluded, “teens blame Instagram for increases in the rate of anxiety and depression.”

This is not just misleading, it’s practically fraudulent misrepresentation. The study Schatz is citing is one that was revealed by Frances Haugen. As we’ve discussed, it was done because Meta was trying to understand how to do better. Indeed, the whole point of that study was to see how teens felt about using social media in 12 different categories. Meta found that most boys felt neutral or better about themselves in all 12 categories. For girls, it was 11 out of 12. It was only in one category, body image, where the split was more pronounced. 32% of girls said that it made them feel worse. Basically the same percentage said it had no impact, or that it made them feel better.

[Image: slide from Meta’s internal research on how Instagram affects teens across different wellbeing categories]

Also, look at that slide’s title. The whole point of this study was to figure out if they were making kids feel worse in order to look into how to stop doing that. And now, because grandstanders like Schatz are falsely claiming that this proves they were “complicit” and “refuse to do anything about it,” no social media company will ever do this kind of research again.

Because, rather than proactively looking to see if they’re creating any problems that they need to try to fix, Schatz and crew are saying “simply researching this is proof that you’re complicit and refuse to act.”

Statements like this basically ensure that social media companies stick their heads in the sand, rather than try to figure out where harm might be caused and take steps to stop that harm.

Why would Schatz want to do that?

That page then also falsely claims that the bill does not require age verification. This is a silly two-step that lying politicians pull every time they push a bill like this. Does it directly mandate age verification? No. But by making the penalties for failing to keep kids off social media so serious and costly, the bill will obviously drive companies to introduce stronger age verification measures that are inherently dangerous and an attack on privacy.

Perhaps Schatz doesn’t understand this, but it’s been widely discussed by many of the experts his staff used to talk to. So, really, he has no excuse.

The FAQ also claims that the bill will pass constitutional muster, while at the same time admitting that they know there will be lawsuits challenging it:

Yes. As, for example, First Amendment expert Neil Richards explains, “[i]nstead of censoring the protected expression present on these platforms, the act takes aim at the procedures and permissions that determine the time, place and manner of speech for underage consumers.” The Supreme Court has long held that the government has the right to regulate products to protect children, including by, for instance, restricting the sale of obscene content to minors. As Richards explains: “[i]n the same way a crowded bar or nightclub is no place for a child on their own”—or in the way every state in the country requires parental consent if it allows a minor to get a tattoo—“this rule would set a reasonable minimum age and maturity limitation for social media customers.” 

While we expect legal challenges to any bill aimed at regulating social media companies, we are confident that this content-neutral bill will pass constitutional muster given the government interests at play.

There are many reasons why this is garbage under the law, but rather than breaking them all down (we’ll wait for judges to explain it in detail), I’ll just point out that the major tell is in the law itself. The definition of what counts as a “social media platform” includes a long list of exceptions the law does not cover. That list includes a few “moral panics of yesteryear” that gullible politicians tried to ban, only to be found to have violated the First Amendment in the process.

It explicitly carves out video games and content that is professionally produced, rather than user-generated:

[Image: excerpt from the bill’s definitions carving out video games and professionally produced content]

Remember the moral panics about video games and TV destroying kids’ minds? Yeah. So this child protection bill hastens to say “but we’re not banning that kind of content!” Because whoever drafted the bill recognized that the Supreme Court has already made it clear that politicians can’t do that for video games or TV.

So, instead, they have to pretend that social media content is somehow on a whole different level.

But it’s not. It’s still the government restricting access to content. They’re going to pretend that there’s something unique and different about social media, and that they’re not banning the “content” but rather the “place” and “manner” of accessing that content. Except that’s laughable on its face.

You can see that in the quote above where Schatz does the fun dance where he first says “it’s okay to ban obscene content to minors” and then pretends that’s the same as restrictions on access to a bar (it’s not). One is about the content, and one is about a physical place. Social media is all about the content, and it’s not obscene content (which is already an exception to the First Amendment).

And the “parental consent” for tattoos… I mean, what the fuck? Literally four questions earlier in the same FAQ, Schatz insists that his bill has nothing to do with parental consent. And then he tries to defend it by claiming it’s no different than parental consent laws?

The FAQ also claims this:

This bill does not prevent LGBTQ+ youth from accessing relevant resources online and we have worked closely with LGBTQ+ groups while crafting this legislation to ensure that this bill will not negatively impact that community.

I mean, it’s good you talked to some experts, but I note that most of the LGBTQ+ groups I’m aware of are not listed on your list of “groups supporting the bill” on the very same page. That absence stands out.

And, again, the Surgeon General’s report that you misleadingly cited elsewhere highlights how helpful social media can be to many LGBTQ+ youth. You can’t just say “nah, it won’t harm them” without explaining why all those benefits that have been shown in multiple studies, including the Surgeon General’s report, somehow don’t get impacted.

There’s a lot more, but this is just a terrible bill that would create a mess. And, I’m already hearing from folks in DC that Schatz is trying to get this bill added to the latest Christmas tree of a bill to reauthorize the FAA.

It would be nice if we had politicians looking to deal with the actual challenges facing kids these days, including the lack of mental health support for those who really need it. Instead, we get unconstitutional grandstanding nonsense bills like this.

Everyone associated with this bill should feel ashamed.

Was There A Trojan Horse Hidden In Section 230 All Along That Could Enable Adversarial Interoperability?

There’s a fascinating new lawsuit against Meta that includes a surprisingly novel interpretation of Section 230. If the court buys it, this interpretation could make the open web a lot more open, while chipping away at the centralized control of the biggest tech companies. And, yes, that could mean that the law (Section 230) that is wrongly called “a gift to big tech” might be a tool that undermines the dominance of some of those companies. But the lawsuit could be tripped up for any number of reasons, including a potentially consequential typo in the law that has been ignored for years.

Buckle in, this is a bit of a wild ride.

You would think with how much attention has been paid to Section 230 over the last few years (there’s an entire excellent book about it!), and how short the law is, that there would be little happening with the existing law that would take me by surprise. But the new Zuckerman v. Meta case filed on behalf of Ethan Zuckerman by the Knight First Amendment Institute has got my attention.

It’s presenting a fairly novel argument about a part of Section 230 that almost never comes up in lawsuits, but could create an interesting opportunity to enable all kinds of adversarial interoperability and middleware to do interesting (and hopefully useful) things that the big platforms have been using legal threats to shut down.

If the argument works, it may reveal a surprising and fascinating trojan horse for a more open internet, hidden in Section 230 for the past 28 years without anyone noticing.

Of course, it could also have much wider ramifications that a bunch of folks need to start thinking through. This is the kind of thing that happens when someone discovers something new in a law that no one really noticed before.

But there’s also a very good chance this lawsuit flops for a variety of other reasons without ever really exploring the nature of this possible trojan horse. There are a wide variety of possible outcomes here.

But first, some background.

For years, we’ve talked about the importance of tools and systems that give end users more control over their own experiences online, rather than leaving it entirely up to the centralized website owners. This has come up in a variety of different contexts in different ways, from “Protocols, not Platforms” to “adversarial interoperability,” to “magic APIs” to “middleware.” These are not all exactly the same thing, but they’re all directionally strongly related, and conceivably could work well together in interesting ways.

But there are always questions about how to get there, and what might stand in the way. One of the biggest obstacles over the last decade or so has been interpretations of various laws that effectively allow social media companies to threaten and/or bring lawsuits against companies trying to provide these kinds of additional services. This can take the form of a DMCA 1201 claim for “circumventing” a technological block. Or, more commonly, it has taken the form of a civil Computer Fraud & Abuse Act (CFAA) claim.

The most representative example of where this goes wrong is when Facebook sued Power Ventures years ago. Power was trying to build a unified dashboard across multiple social media properties. Users could provide Power with their own logins to social media sites. This would allow Power to log in to retrieve and post data, so that someone could interact with their Facebook community without having to personally go into Facebook.
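
To make the Power Ventures setup a bit more concrete, here is a minimal, purely hypothetical sketch of what a credential-based “unified dashboard” could look like. Every name in it (the Account and UnifiedDashboard classes, the placeholder fetch and post methods) is invented for illustration; it is not Power’s actual implementation, and a real tool would have to talk to each network’s login flow or drive a logged-in browser session.

```python
# Hypothetical sketch only: a "unified dashboard" in the Power Ventures mold,
# where the user supplies their own session tokens for each network and the
# dashboard fetches and posts on their behalf. Class names, methods, and the
# placeholder logic are all invented for illustration.

from dataclasses import dataclass


@dataclass
class Account:
    network: str        # e.g. "facebook"
    session_token: str  # supplied voluntarily by the user, not harvested


class UnifiedDashboard:
    def __init__(self, accounts: list[Account]):
        self.accounts = accounts

    def fetch_all_feeds(self) -> dict[str, list[str]]:
        """Use each user-supplied token to pull that user's own feed."""
        return {a.network: self._fetch_feed(a) for a in self.accounts}

    def post_everywhere(self, message: str) -> None:
        """Cross-post a single update to every connected network."""
        for account in self.accounts:
            self._post(account, message)

    def _fetch_feed(self, account: Account) -> list[str]:
        # Placeholder: a real tool would call the network's API or drive a
        # logged-in browser session using account.session_token.
        return [f"[{account.network}] example post"]

    def _post(self, account: Account, message: str) -> None:
        # Placeholder for the network-specific "create post" call.
        print(f"posting to {account.network}: {message}")
```

The mechanics are mundane; as the next paragraphs explain, the legal fight was over whether doing this with a user’s own login counts as “unauthorized access” under the CFAA.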

This was a potentially powerful tool in limiting Facebook’s ability to become a walled-off garden with too much power. And Facebook realized that too. That’s why it sued Power, claiming that it violated the CFAA’s prohibition on “unauthorized access.”

The CFAA was designed (poorly and vaguely) as an “anti-hacking” law. And you can see where “unauthorized access” could happen as a result of hacking. But Facebook (and others) have claimed that “unauthorized access” can also be “because we don’t want you to do that with your own login.”

And the courts have agreed with Facebook’s interpretation, with a few limitations (that don’t make that big of a difference).

I still believe that this ability to block interoperability/middleware with law has been a major (perhaps the most major) reason “big tech” is so big. They’re able to use these laws to block out the kinds of companies who would make the market more competitive and pull down some of the walls of walled gardens.

That brings us to this lawsuit.

Ethan Zuckerman has spent years trying to make the internet a better, more open space (partially, I think, in penance for creating the world’s first pop-up internet ad). He’s been doing some amazing work on reimagining the digital public infrastructure, which I keep meaning to write about, but never quite find the time to get to.

According to the lawsuit, he wants to build a tool called “Unfollow Everything 2.0.” The tool is based on a similar tool, also called Unfollow Everything, that was built by Louis Barclay a few years ago and did what it says on the tin: let you automatically unfollow everything on Facebook. Facebook sent Barclay a legal threat letter and banned him for life from the site.

Zuckerman wants to recreate the tool with some added features enabling users to opt-in to provide some data to researchers about the impact of not following anyone on social media. But he’s concerned that he’d face legal threats from Meta, given what happened with Barclay.

Using Unfollow Everything 2.0, Professor Zuckerman plans to conduct an academic research study of how turning off the newsfeed affects users’ Facebook experience. The study is opt-in—users may use the tool without participating in the study. Those who choose to participate will donate limited and anonymized data about their Facebook usage. The purpose of the study is to generate insights into the impact of the newsfeed on user behavior and well-being: for example, how does accessing Facebook without the newsfeed change users’ experience? Do users experience Facebook as less “addictive”? Do they spend less time on the platform? Do they encounter a greater variety of other users on the platform? Answering these questions will help Professor Zuckerman, his team, and the public better understand user behavior online and the influence that platform design has on that behavior

The tool and study are nearly ready to launch. But Professor Zuckerman has not launched them because of the near certainty that Meta will pursue legal action against him for doing so.
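
For a sense of what a tool like this does mechanically, here is a minimal, entirely hypothetical sketch. The FacebookClient interface and every function below are invented for illustration: Facebook exposes no such API, this is not Zuckerman’s or Barclay’s actual code, and a real version would drive a logged-in browser session the way a browser extension does.

```python
# Hypothetical sketch only: the general shape of an "Unfollow Everything"-style
# tool as described in the complaint: unfollow every friend, group, and page,
# keep a local backup so the action can be reversed, and donate limited,
# anonymized data to researchers only if the user explicitly opts in.

import json
from typing import Optional, Protocol


class FacebookClient(Protocol):
    # Invented interface; stands in for whatever browser automation a real
    # extension would use to act on the logged-in user's own account.
    def list_follows(self) -> list[str]: ...
    def unfollow(self, target_id: str) -> None: ...
    def refollow(self, target_id: str) -> None: ...


def unfollow_everything(client: FacebookClient, backup_path: str) -> None:
    """Unfollow every followed entity, saving the list so it can be restored."""
    follows = client.list_follows()
    with open(backup_path, "w") as f:
        json.dump(follows, f)
    for target in follows:
        client.unfollow(target)


def restore_follows(client: FacebookClient, backup_path: str) -> None:
    """Re-follow everything recorded in the backup file."""
    with open(backup_path) as f:
        for target in json.load(f):
            client.refollow(target)


def donate_study_data(opted_in: bool, anonymized_metrics: dict) -> Optional[dict]:
    """Share data with the research study only if the user explicitly opted in."""
    return anonymized_metrics if opted_in else None
```

The point is just how ordinary the “technical means” at issue are: enumerate what a user follows, turn it off, keep a record so it can be undone, and share research data only on explicit opt-in.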

So he’s suing for declaratory judgment that he’s not violating any laws. If he were just suing for declaratory judgment over the CFAA, that would (maybe?) be somewhat understandable or conventional. But, while that argument is in the lawsuit, the main claim in the case is something very, very different. It’s using a part of Section 230, section (c)(2)(B), that almost never gets mentioned, let alone tested.

Most Section 230 lawsuits involve (c)(1): the famed “26 words” that state “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Some Section 230 cases involve (c)(2)(A) which states that “No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” Many people incorrectly think that Section 230 cases turn on this part of the law, when really, many of those cases are already cut off by (c)(1) because they try to treat a service as a speaker or publisher.

But then there’s (c)(2)(B), which says:

No provider or user of an interactive computer service shall be held liable on account of any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1)

As noted, this basically never comes up in cases. But the argument being made here is that this creates some sort of proactive immunity from lawsuits for middleware creators who are building tools (“technical means”) to “restrict access.” In short: does Section 230 protect “Unfollow Everything” from basically any legal threats from Meta, because it’s building a tool to restrict access to content on Meta platforms?

Or, according to the lawsuit:

This provision would immunize Professor Zuckerman from civil liability for designing, releasing, and operating Unfollow Everything 2.0

First, in operating Unfollow Everything 2.0, Professor Zuckerman would qualify as a “provider . . . of an interactive computer service.” The CDA defines the term “interactive computer service” to include, among other things, an “access software provider that provides or enables computer access by multiple users to a computer server,” id. § 230(f)(2), and it defines the term “access software provider” to include providers of software and tools used to “filter, screen, allow, or disallow content.” Professor Zuckerman would qualify as an “access software provider” because Unfollow Everything 2.0 enables the filtering of Facebook content—namely, posts that would otherwise appear in the feed on a user’s homepage. And he would “provide[] or enable[] computer access by multiple users to a computer server” by allowing users who download Unfollow Everything 2.0 to automatically unfollow and re-follow friends, groups, and pages; by allowing users who opt into the research study to voluntarily donate certain data for research purposes; and by offering online updates to the tool.

Second, Unfollow Everything 2.0 would enable Facebook users who download it to restrict access to material they (and Zuckerman) find “objectionable.” Id. § 230(c)(2)(A). The purpose of the tool is to allow users who find the newsfeed objectionable, or who find the specific sequencing of posts within their newsfeed objectionable, to effectively turn off the feed.

I’ve been talking to a pretty long list of lawyers about this and I’m somewhat amazed at how this seems to have taken everyone by surprise. Normally, when new lawsuits come out, I’ll gut check my take on it with a few lawyers and they’ll all agree with each other whether I’m heading in the right direction or the totally wrong direction. But here… the reactions were all over the map, and not in any discernible pattern. More than one person I spoke to started by suggesting that this was a totally crazy legal theory, only to later come back and say “well, maybe it actually makes some sense.”

It could be a trojan horse that no one noticed in Section 230 that effectively bars websites from taking legal action against middleware providers who are providing technical means for people to filter or screen content on their feed. Now, it’s important to note that it does not bar those companies from putting in place technical measures to block such tools, or just banning accounts or whatever. But that’s very different from threatening or filing civil suits.

If this theory works, it could do a lot to enable these kinds of middleware services and make it significantly harder for big social media companies like Meta to stop them. If you believe in adversarial interoperability, that could be a very big deal. Like, “shift the future of the internet we all use” kind of big.

Now, there are many hurdles before we get to that point. And there are some concerns that if this legal theory succeeds, it could also lead to other problematic results (though I’m less convinced by those).

Let’s start with the legal concerns.

First, as noted, this is a very novel and untested legal theory. Upon reading the case initially, my first reaction was that it felt like one of those slightly wacky academic law journal articles you see law professors write sometimes, with some far-out theory they have that no one’s ever really thought about. This one is in the form of a lawsuit, so at some point we’ll find out how the theory works.

But that alone might make a judge unwilling to go down this path.

Then there are some more practical concerns. Is there even standing here? ¯\_(ツ)_/¯ Zuckerman hasn’t released his tool. Meta hasn’t threatened him. He makes a credible claim that given Meta’s past actions, they’re likely to react unfavorably, but is that enough to get standing?

Then there’s the question of whether or not you can even make use of 230 in an affirmative way like this. 230 is used as a defense to get cases thrown out, not proactively for declaratory judgment.

Also, this is not my area of expertise by any stretch of the imagination, but I remember hearing in the past that outside of IP law, courts (and especially courts in the 9th Circuit) absolutely disfavor lawsuits for declaratory judgment (i.e., a lawsuit before there’s any controversy, where you ask the court “hey, can you just check and make sure I’m on the right side of the law here…”). So I could totally see the judge saying “sorry, this is not a proper use of our time” and tossing it. In fact, that might be the most likely result.

Then there’s this kinda funny but possibly consequential issue: there’s a typo in Section 230 that almost everyone has ignored for years. Because it’s never really mattered. Except it matters in this case. Jeff Kosseff, the author of the book on Section 230, always likes to highlight that in (c)(2)(B), it says that the immunity is for using “the technical means to restrict access to material described in paragraph (1).”

But they don’t mean “paragraph (1).” They mean “paragraph (A).” Paragraph (1) is the “26 words” and does not describe any material, so it would make no sense to say “material described in paragraph (1).” It almost certainly means “paragraph (A),” which is the “good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable” section. That’s the one that describes material.

I know that, at times, Jeff has joked that when people ask him how 230 should be reformed, he suggests they fix the typo. But Congress has never listened.

And now it might matter?

The lawsuit basically pretends that the typo isn’t there. It simply inserts the language of “paragraph (A)” where the law says “paragraph (1).”

I don’t know how that gets handled. Perhaps it gets ignored like every time Jeff points out the typo? Perhaps it becomes consequential? Who knows!

There are a few other oddities here, but this article is getting long enough and has mostly covered the important points. However, I will conclude on one other point that one of the people I spoke to raised. As discussed above, Meta has spent most of the past dozen or so years going legally ballistic about anyone trying to scrape or data mine its properties in any way.

Yet, earlier this year, it somewhat surprisingly abandoned a case in which it had sued Bright Data for scraping/data mining. Lawyer Kieran McCarthy (who follows data scraping lawsuits like no one else) speculated that Meta’s surprising about-face may be because it suddenly realized that, for all of its AI efforts, it’s been scraping everyone else. And maybe someone high up at Meta suddenly realized how it would look in court when it got sued over all that AI training scraping, if the plaintiffs pointed out that, at the very same time, it was suing others for scraping its properties.

For me, I suspect the decision not to appeal might be more about a shift in philosophy by Meta (and perhaps some of the other big platforms) than about their confidence in their ability to win this case. Today, perhaps more important to Meta than keeping others off their public data is having access to everyone else’s public data. Meta is concerned that their perceived hypocrisy on these issues might just work against them. Just last month, Meta had its success in prior scraping cases thrown back in its face in a trespass to chattels case. Perhaps they were worried here that success on appeal might do them more harm than good.

In short, I think Meta cares more about access to large volumes of data and AI than it does about outsiders scraping their public data now. My hunch is that they know that any success in anti-scraping cases can be thrown back at them in their own attempts to build AI training databases and LLMs. And they care more about the latter than the former.

I’ve separately spoken to a few experts who were worried about the consequences if Zuckerman succeeded here. They were worried that it might simultaneously immunize potential bad actors. Specifically, you could see a kind of Cambridge Analytica or Clearview AI situation, where companies trying to get access to data for malign purposes convince people to install their middleware app. This could lead to a massive expropriation of data, and possibly some very sketchy services as a result.

But I’m less worried about that, mainly because it’s the sketchy ways that data might eventually be used that would still (hopefully?) violate certain laws, not the access to the data itself. Still, the questions being raised about how this type of more proactive immunity might end up shielding bad actors are at least worth thinking through.

Either way, this is going to be a case worth following.

Senate Must Follow House’s Lead In Passing Fourth Amendment Is Not For Sale Act

The Fourth Amendment exists for a reason. It’s supposed to protect our private possessions and data from government snooping, unless the government has a warrant. It doesn’t entirely prevent the government from getting access to data; it just needs to show probable cause of a crime.

But, of course, the government doesn’t like to make the effort.

And these days, many government agencies (especially law enforcement) have decided to take the shortcut that money can buy: they’re just buying private data on the open market from data brokers and avoiding the whole issue of a warrant altogether.

This could be solved with a serious, thoughtful, comprehensive privacy bill. I’m hoping to have a post soon on the big APRA data privacy bill that’s getting attention lately (it’s a big bill, and I just haven’t had the time to go through the entire bill yet). In the meantime, though, there was some good news, with the House passing the “Fourth Amendment is Not For Sale Act,” which was originally introduced in the Senate by Ron Wyden and appears to have broad bipartisan support.

We wrote about it when it was first introduced, and again when the House voted it out of committee last year. The bill is not a comprehensive privacy bill, but it would close the loophole discussed above.

The Wyden bill just says that if a government agency would otherwise have needed a warrant to get certain data in the first place, it needs to get a warrant to buy that data on the open market as well.

Anyway, the bill passed 219 to 199 in the House, and it was (thankfully) not a partisan vote at all.

It is a bit disappointing that the vote was so close and that so many Representatives want to allow government agencies, including law enforcement, to be able to purchase private data to get around having to get a warrant. But, at least the majority voted in favor of the bill.

And now, it’s up to the Senate. Senator Wyden posted on Bluesky about how important this bill is, and hopefully the leadership of the Senate understands that as well.

Can confirm. This is a huge and necessary win for Americans' privacy, particularly after the Supreme Court gutted privacy protections under Roe. Now it's time for the Senate to do its job and follow suit.

— Senator Ron Wyden (@wyden.senate.gov) Apr 17, 2024 at 3:30 PM

SCOTUS Needs To Take Up The Texas Age Verification Lawsuit

I think we could witness one of the most important First Amendment legal showdowns ever.

The U.S. Supreme Court is being asked to rule on the constitutionality of mandatory age verification for porn websites. If the high court takes up the case, it would queue up a landmark debate pertaining to the First Amendment and privacy rights of millions of people.

Free Speech Coalition and the parent companies of the largest adult entertainment websites on the web filed suit in the U.S. District Court for the Western District of Texas seeking to block House Bill (HB) 1181.

HB 1181 requires mandatory age verification for porn websites with users from Texas IP addresses. It also requires pseudoscientific health warnings to be posted on adult websites. Counsel representing the coalition and the porn companies argued that it violated the First Amendment rights of consumers and owners of the websites. This prompted the federal district court to initially enjoin the state of Texas from enforcing the law because its text appeared to be unconstitutional.

Acting Texas Attorney General Angela Colmenero appealed the injunction to the Fifth Circuit Court of Appeals. After a clear demonstration of classic Fifth Circuit tap dancing and the return of Ken Paxton to the helm of the Attorney General’s office, Texas was granted permission to enforce the age verification requirements outlined in the law. Luckily, the circuit judges properly applied the Zauderer standard, denying the requirement to post the bogus health warnings.

Soon after this, Paxton announced lawsuits against the parent companies of Pornhub, xHamster, and Stripchat for violations of HB 1181, with penalties under the law totaling millions of dollars in damages. After the lawsuits for HB 1181 violations were announced and filed in circuit courts in Travis County, counsel for the plaintiffs sought to stay enforcement while they petitioned the high court to take up the case for consideration. Justice Samuel Alito, the circuit justice for the Fifth Circuit, has yet to indicate whether the case will be taken up by the Supreme Court. There is no reason why they shouldn’t take it up, given how important this case is going forward and how this issue is showing up in so many other states.

The case, Free Speech Coalition et al. v. Paxton, is so important that the national affiliate of the American Civil Liberties Union announced that they are aiding the plaintiffs and their current counsel, a team from the big law firm Quinn Emanuel. They will support the petition for writ of certiorari, potential oral arguments, and more, with the goal of rendering House Bill 1181 and all age verification laws unconstitutional pipedreams.

Plaintiffs accurately argue that this is settled law, referring to the high court’s landmark decision in Reno v. American Civil Liberties Union. That decision found that segregating the content of the internet by age violates the rights not only of adults but of minors as well. The vast majority of age verification laws as they are structured now do just that.

While the Supreme Court provided for a less restrictive means to filter out minors from viewing age-restricted materials and potentially facing some level of harm, the vehicles of enforcement and some of the options touted in these bills for controlling minors’ web usage are, to the plaintiffs and civil liberties organizations, a violation of the First Amendment. ACLU and Quinn Emanuel attorneys for the plaintiffs present these arguments in their petition for writ of certiorari, which was filed in April 2024. Now, we just need the Supreme Court to take this seriously and not let the Fifth Circuit, the circuit that upheld a ban on drag shows, dictate law for the nation.

Michael McGrady covers the legal and tech side of the online porn business, among other topics.

The US Banning TikTok Would Play Right Into China’s Hands, And Destroy Decades Of US Work On Promoting An Open Internet

Apparently, the TikTok ban bill is back.

Speaker Mike Johnson plans to include TikTok divestiture legislation already passed by the House in a fast-moving aid package for Ukraine and Israel that the chamber is set to clear on Saturday. The Senate is expected to quickly take up the measure, and President Joe Biden promised Wednesday to sign it immediately.

If signed into law, the bill would deliver a significant blow to a video-sharing app used by as many as 170 million Americans. Its enactment would also mark a major setback to the company’s intense lobbying efforts, especially by Chief Executive Officer Shou Chew, who made the rounds on Capitol Hill last month in a bid to get the Senate to squelch the legislation.

I’ve already explained why the TikTok ban is both unconstitutional and would not do anything to fix the “concerns” that people have raised about it. We’ve also discussed how those most vocally pushing for the TikTok ban appear to be financially conflicted.

But, even more important than all that, a TikTok ban would be a disaster for the open web. Yes, other countries have banned apps, but they tend to be authoritarian countries that have never liked and never supported an open web.

Banning an entire app in the US would be a massive attack on the very concept of an open web. And that’s really unfortunate, given that the US used to be the world’s most vocal supporter of the web being kept open and free.

The New York Times recently had a good article calling out what a disaster the ban would be for the open web.

Digital rights groups and others around the world have taken notice — and raised the question of how the moves against TikTok contradict the United States’ arguments in favor of an open internet.

A Russian opposition blogger, Aleksandr Gorbunov, posted on social media last month that Russia could use the move to shut down services like YouTube. And digital rights advocates globally are expressing fears of a ripple effect, with the United States providing cover for authoritarians who want to censor the internet.

In March, the Chinese government, which controls its country’s internet, said America had “one way of saying and doing things about the United States, and another way of saying and doing things about other countries,” citing the TikTok legislation.

Passing the TikTok ban would effectively be telling the world (1) it’s totally okay to ban apps you don’t like, and (2) the U.S.’s long-standing commitment to the open web was always fake and always bullshit, because the second a successful foreign app came along, we tossed out those principles.

“It would diminish the U.S.’s standing in promoting internet freedom,” said Juan Carlos Lara, the executive director of Derechos Digitales, a Latin American digital rights group based in Chile. “It would definitely not bolster its own case for promoting a free and secure, stable and interoperable internet.”

And that signal will be heard loud and clear around the world:

Mishi Choudhary, a lawyer who founded the New Delhi-based Software Freedom Law Center, said the Indian government would also use a U.S. ban to justify further crackdowns. It has already engaged in internet shutdowns, she said, and it banned TikTok in 2020 over border conflicts with China.

“This gives them good reason to find confidence in their past actions, but also emboldens them to take similar future actions,” she said in an interview.

Mr. Lara of Derechos Digitales noted that countries like Venezuela and Nicaragua had already passed laws that gave the government more control over online content. He said increased government control of the internet was a “tempting idea” that “really risks materializing if such a thing is seen in places like the U.S.”

A forced sale or ban of TikTok could also make it harder for the American government to ask other countries to embrace an internet governed by international organizations, digital rights experts said.

And, if the goal here is to hurt China in particular, that may backfire massively:

Patrick Toomey, the deputy director of the National Security Project at the American Civil Liberties Union, said that if the TikTok measure became law, the “hypocrisy would be inescapable and the dividends to China enormous.”

China has long made a big deal whenever the US government is hypocritical like this. This would be a huge PR win for the Chinese government. It would allow it to claim that its Great Firewall approach to the internet is right, and that the US was admitting that openness and an open internet fail. It would allow China to call out US hypocrisy, and that matters a lot at this moment, when China is working hard to build stronger relationships with lots of countries around the globe.

Banning TikTok won’t help the US against China. It will play right into China’s hands. It doesn’t need TikTok to get data on Americans, nor to try to influence Americans. But, destroying decades of US foreign policy promoting an open and free internet serves China’s interests massively.

Congressional Testimony On Section 230 Was So Wrong That It Should Be Struck From The Record

A few months ago, we wondered if Wired had fired its entire fact-checking staff because it published what appeared to be a facts-optional article co-authored by the consistently wrong Jaron Lanier and an academic I’d not come across before, Allison Stanger. The article suggested that getting rid of Section 230 “could save everything.” Yet the article was so far off-base that it was in the “not even wrong” category of wrongness.

I’m not going to review all the reasons it was wrong. You can go back to my original article for that, though I will note that the argument seemed to suggest that getting rid of Section 230 would both lead to better content moderation and, at the same time, only moderation based on the First Amendment. Both of those points are obviously wrong, but the latter one is incoherent.

Given his long track record of wrongness, I had assumed that much of the article likely came from Lanier. However, I’m going to reassess that in light of Stanger’s recent performance before the House Energy & Commerce Committee. Last week, there was this weird hearing about Section 230, in which the Committee invited three academic critics of Section 230, and not a single person who could counter their arguments and falsehoods. We talked about this hearing a bit in this week’s podcast, with Rebecca MacKinnon from the Wikimedia Foundation.

Stanger was one of the three witnesses. The other two, Mary Anne Franks and Mary Graw Leary, presented some misleading and confused nonsense about Section 230. However, their misleading and confused nonsense at least fits into the normal framework of the debate around Section 230: confusion about how (c)(1) and (c)(2) interact, confusion about the purpose of Section 230, and (especially) confusion about CSAM, along with an apparent unawareness that federal criminal law is exempted from Section 230.

But, let’s leave that aside. Because Stanger’s submission was so far off the mark that whoever invited her should be embarrassed. I’ve seen some people testify before Congress without knowing what they’re talking about, but I cannot recall seeing testimony this completely, bafflingly wrong before. Her submitted testimony is wrong in all the ways that the Wired article was wrong and more. There are just blatant factual errors throughout it.

It is impossible to cover all of the nonsense, so we’re just going to pick some gems.

Without Section 230, existing large social media companies would have to adapt. Decentralized Autonomous Organizations, (DAOs) such as BlueSky and Mastodon, would become more attractive. The emergent DAO social media landscape should serve to put further brakes on virality, allowing a more regional social media ecosystem to emerge, thereby creating new demand for local media. In an ideal world, networks of DAOs would comprise a new fediverse (a collection of social networking servers which can communicate with each other, while remaining independently controlled), where users would have greater choice and control over the communities of which they are a part.

So, um. That’s not what DAOs are, professor. You seem to be confusing decentralized social media with decentralized autonomous organizations, which are a wholly different thing. This is kind of like saying “social security benefits” when you mean “social media influencers” because both begin with “social.” They’re not the same thing.

A decentralized social media site is what it says on the tin. It’s a type of social media that isn’t wholly controlled by a single company. Different bits of it can be controlled by others, whether by its users or by alternative third-party providers. A DAO is an organization run via mechanisms like cryptocurrency and tokens that enable a kind of democratic voting, or (possibly) via a set of smart contracts, which determine how the loosely defined organization operates. They are not the same.

In theory, a decentralized social media site could be run by a DAO, but I don’t know of any that currently are.

Also, um, decentralized social media can only really exist because of Section 230. “Without Section 230,” you wouldn’t have Bluesky or Mastodon, because they would face ruinous litigation for hosting content that people would sue over. So, no, you would not have either more decentralized social media (which I think is what you meant) or DAOs (which are wholly unrelated). You’d have a lot less, because hosting third-party speech would come with way more liability risk.

Also, there’s nothing inherent to decentralized social media that means you’d “put the brakes on virality.” Mastodon has developed to date in a manner designed to tamp down virality, but Bluesky hasn’t, nor have other decentralized social media offerings, many of which hope to serve a global conversation in which virality plays a part. And that wouldn’t really change with or without Section 230. Mastodon made that decision because of the types of communities it wanted to foster. And, indeed, its ability to do that is, in part, due to intermediary liability protections like Section 230, which enable the kind of small, more focused community moderation Mastodon already embraces.

It’s really not clear to me that Professor Stanger even knows what Section 230 does.

Non-profits like Wikipedia are concerned that their enterprise could be shut down through gratuitous defamation lawsuits that would bleed them dry until they ceased to exist (such as what happened with Gawker). I am not convinced this is a danger for Wikipedia, since their editing is done by humans who have first amendment rights, and their product is not fodder for virality….

Again, wut? The fact that their editing is “done by humans” has literally no impact on anything here. Why even mention that? Humans get sued for defamation all the time. And, if they’re more likely to get sued for defamation, they’re less likely to even want to edit at all.

And people get mad about their Wikipedia articles all the time, and sometimes they sue over them. Section 230 gets those lawsuits thrown out. Without it, those lawsuits would last longer and be more expensive.

Again, it’s not at all clear if Prof. Stanger even knows what Section 230 is or how it works.

The Facebook Files show that Meta knew that its engagement algorithms had adverse effects on the mental health of teenage girls, yet it has done nothing notable to combat those unintended consequences. Instead, Meta’s lawyers have invoked Section 230 in lawsuits to defend itself against efforts to hold it liable for serious harms

Again, this is just wrong. What the crux of the Facebook Files showed was that Meta was, in fact, doing research to learn about where its algorithms might cause harm in order to try to minimize that harm. However, because of some bad reporting, it now means that companies will be less likely to even do that research, because people like Professor Stanger will misrepresent it, claiming that they did nothing to try to limit the harms. This is just outright false information.

Also, the cases where Meta has invoked Section 230 would be unrelated to the issue being discussed here because 230 is about not being held liable for user content.

The online world brought to life by Section 230 now dehumanizes us by highlighting our own insignificance. Social media and cancel culture make us feel small and vulnerable, where human beings crave feeling large and living lives of meaning, which cannot blossom without a felt sense of personal agency that our laws and institutions are designed to protect. While book publishers today celebrate the creative contributions of their authors, for-profit Internet platforms do not.

I honestly have no idea what’s being said here. “Dehumanizes us by highlighting our own insignificance?” What are you even talking about? People were a lot more “insignificant” pre-internet, when they had no way to speak out. And what does “cancel culture” have to do with literally any of this?

Without Section 230, companies would be liable for the content on their platforms. This would result in an explosion of lawsuits and greater caution in such content moderation, although companies would have to balance such with first amendment rights. Think of all the human jobs that could be generated!

Full employment for tort lawyers! I mean, this is just a modern version of Bastiat’s broken window fallacy. Think of all the economic activity if we just break all the windows in our village!

Again and again, it becomes clear that Stanger has no clue how any of this works. She does not understand Section 230. She does not understand the internet. She does not understand the First Amendment. And she does not understand content moderation. It’s a hell of a thing, considering she is testifying about Section 230 and its impact on social media and the First Amendment.

At a stroke, content moderation for companies would be a vastly simpler proposition. They need only uphold the First Amendment, and the Courts would develop the jurisprudence to help them do that, rather than to put the onus of moderation entirely on companies.

That is… not at all how it would work. They don’t just need to “uphold the First Amendment” (which is not a thing that companies can even do). The First Amendment restricts only the government, not companies; it bars the government from passing laws that infringe on a person’s ability to express themselves.

Instead, as has been detailed repeatedly, companies would face the so-called “moderator’s dilemma.” Because the First Amendment requires distributors to have actual knowledge of content violating the law to be liable, a world without Section 230 would incentivize one of two things, neither of which is “upholding the First Amendment.” They would either let everything go and do as little moderation as possible (so as to avoid the requisite knowledge), or they’d become very aggressive in limiting and removing content to avoid liability (even though this wouldn’t work and they’d still get hit with tons of lawsuits).

We’ve been here before. When government said the American public owned the airwaves, so television broadcasting would be regulated, they put in place regulations that supported the common good. The Internet affects everyone, and our public square is now virtual, so we must put in place measures to ensure that our digital age public dialogue includes everyone. In the television era, the fairness doctrine laid that groundwork. A new lens needs to be developed for the Internet age.

Except, no, that’s just factually wrong. The only reason that the government was able to put regulations on broadcast television was because the government controlled the public spectrum which they licensed to the broadcasters. The Supreme Court made clear in Red Lion that without that, they could not hinder the speech of media companies. So, the idea that you can just apply similar regulations to the internet is just fundamentally clueless. The internet is not publicly owned spectrum licensed to anyone.

While Section 230 perpetuates an illusion that today’s social media companies are common carriers like the phone companies, they are not. Unlike Ma Bell, they curate the content they transmit to users

Again, it appears the Professor is wholly unaware of Section 230 and how it works. The authors of Section 230 made it clear over and over again that they wrote 230 to be the opposite of common carriers. No one who supports Section 230 thinks it makes platforms into common carriers, because it does not. The entire point was to free up companies to choose how to curate content, so as to allow those companies to craft the kinds of communities they wanted. The only people claiming the “illusion” of common carrierness are those who are trying to destroy Section 230.

So there is no “illusion” here, unless you don’t understand what you’re talking about.

The repeal of Section 230 would also be a step in the right direction in addressing what are presently severe power imbalances between government and corporate power in shaping democratic life. It would also shine a spotlight on a globally disturbing fact: the overwhelming majority of global social media is currently in the hands of one man (Mark Zuckerberg), while nearly half the people on earth have a Meta account. How can that be a good thing under any scenario for the free exchange of ideas?

I mean, we agree that it’s bad that Meta is so big. But if you remove Section 230 (as Meta itself has advocated for!), you help Meta get bigger and harm the competition. Meta has a building full of lawyers. They can handle the onslaught of lawsuits that this would bring (as Stanger herself gleefully cheers on). It’s everyone else, the smaller sites, such as the decentralized players (not DAOs) who would get destroyed.

Mastodon admins aren’t going to be able to afford to pay to defend the lawsuits. Bluesky doesn’t have a building full of lawyers. The big winner here would be Meta. The cost to Meta of removing Section 230 is minimal. The cost to everyone trying to eat away at Meta’s marketshare would be massive.

The new speech is governed by the allocation of virality in our virtual public square. People cannot simply speak for themselves, for there is always a mysterious algorithm in the room that has independently set the volume of the speaker’s voice. If one is to be heard, one must speak in part to one’s human audience, in part to the algorithm. It is as if the constitution had required citizens to speak through actors or lawyers who answered to the Dutch East India Company, or some other large remote entity. What power should these intermediaries have? When the very logic of speech must shift in order for people to be heard, is that still free speech? This was not a problem foreseen in the law.

I mean, this is just ahistorical nonsense. Historically, most people had no way to get their message out at all. You could talk to your friends, family, co-workers, and neighbors, and that was about it. If you wanted to reach beyond that small group, you required some large gatekeeper (a publisher, a TV or radio producer, a newspaper) to grant you access, which they refused for the vast majority of people.

The internet flipped all that on its head, allowing anyone to effectively speak to anyone. The reason we have algorithms is not “Section 230” and the algorithms aren’t “setting the volume,” they came in to deal with the simple fact that there’s just too much information, and it was flooding the zone. People wanted to find information that was more relevant to them, and with the amount of content available online, the only way to manage that was with some sort of algorithm.

But, again, the rise of algorithms is not a Section 230 issue, even though Stanger seems to think it is.

Getting rid of the liability shield for all countries operating in the United States would have largely unacknowledged positive implications for national security, as well as the profit margins for US-headquartered companies. Foreign electoral interference is not in the interests of democratic stability, precisely because our enemies benefit from dividing us rather than uniting us. All foreign in origin content could therefore be policed at a higher standard, without violating the first amendment or the privacy rights of US citizens. As the National Security Agency likes to emphasize, the fourth amendment does not apply to foreigners and that has been a driver of surveillance protocols since the birth of the Internet. It is probable that the Supreme Court’s developing first amendment jurisprudence for social media in a post-230 world would embrace the same distinction. At a stroke, the digital fentanyl that TikTok represents in its American version could easily be shut down, and we could through a process of public deliberation leading to new statutory law collectively insist on the same optimization targets for well-being, test scores, and time on the platform that Chinese citizens currently enjoy in the Chinese version of TikTok (Douyin)

Again, this is a word salad that is mostly meaningless.

First of all, none of this has anything to do with Section 230, but rather the First Amendment. And it’s already been noted, clearly, that the First Amendment protects American users of foreign apps.

No one is saying “you can’t ban TikTok because of 230,” they’re saying “you can’t ban TikTok because of the First Amendment.” The Supreme Court isn’t going to magically reinvent long-standing First Amendment doctrine because 230 is repealed. This is nonsense.

And, we were just discussing what utter nonsense it is to claim that TikTok is “digital fentanyl” so I won’t even bother repeating that.

There might also be financial and innovation advantages for American companies with this simple legislative act. Any commercial losses for American companies from additional content moderation burdens would be offset by reputational gains and a rule imposed from without on what constitutes constitutionally acceptable content. Foreign electoral interference through misinformation and manipulation could be shut down as subversive activity directed at the Constitution of the United States, not a particular political party.

This part is particularly frustrating. This is why internet companies already moderate. Stanger’s piece repeatedly seems to complain both about too little moderation (electoral interference! Alex Jones!) and too much moderation (algorithms! dastardly Zuck deciding what I can read!).

She doesn’t even seem to realize that her argument is self-contradictory.

But, here, the supposed “financial and innovation advantages” from American companies being able to get “reputational gains” by stopping “misinformation” already exist. And they only exist because of Section 230, which Professor Stanger is saying we need to remove to get the very thing it enables, and which would be taken away if it were repealed.

This whole thing makes me want to bang my head on my desk repeatedly.

Companies moderate today (1) to make users’ experience better and (2) to make advertisers happier that they’re not facing brand risk from having ads appear next to awful content. The companies that do better already achieve that “reputational benefit,” and they can do that kind of moderation because they know Section 230 prevents costly, wasteful, vexatious litigation from getting too far.

If you remove Section 230, that goes away. As discussed above, companies then are much more limited in the kinds of moderation they can do, which means users have a worse experience and advertisers have a worse experience, leading to reputational harm.

Today, companies already try to remove or diminish the power of electoral interference. That’s a giant part of trust & safety teams’ efforts. But they can really only do it safely because of 230.

The attention-grooming model fostered by Section 230 leads to stupendous quantities of poor-quality data. While an AI model can tolerate a significant amount of poor-quality data, there is a limit. It is unrealistic to imagine a society mediated by mostly terrible communication where that same society enjoys unmolested, high-quality AI. A society must seek quality as a whole, as a shared cultural value, in order to maximize the benefits of AI. Now is the best time for the tech business to mature and develop business models based on quality.

I’ve read this paragraph multiple times, and I still don’t know what it’s saying. Section 230 does not lead to an “attention-grooming model.” That’s just how society works. And, then, when she says society must seek quality as a whole, given how many people are online, the only way to do that is with algorithms trying to make some sort of call on what is, and what is not, quality.

That’s how this works.

Does she imagine that without Section 230, algorithms will go away, but good quality content will magically rise up? Because that’s not how any of this actually works.

Again, there’s much more in her written testimony, and none of it makes any sense at all.

Her spoken testimony was just as bad. Rep. Bob Latta asked her about the national security claims (some of which were quoted above) and we got this word salad, none of which has anything to do with Section 230:

I think it’s important to realize that our internet is precisely unique because it’s so open and that makes it uniquely vulnerable to all sorts of cyber attacks. Just this week, we saw an extraordinarily complicated plot that is most likely done by China, Russia or North Korea that could have blown up the internet as we know it. If you want to look up XZ Utils, Google that and you’ll find all kinds of details. They’re still sorting out what the intention was. It’s extraordinarily sophisticated though, so I think that the idea that we have a Chinese company where data on American children is being stored and potentially utilized in China, can be used to influence our children. It can be used in any number of ways no matter what they tell you. So I very much support and applaud the legislation to repeal, not to repeal, but to end TikToks operations in the United States.

The national security implications are extraordinary. Where the data is stored is so important and how it can be used to manipulate and influence us is so important. And I think the next frontier that I’ll conclude with this, for warfare, is in cyberspace. It’s where weak countries have huge advantages. They can pour resources into hackers who could really blow up our infrastructure, our hospitals, our universities. They’re even trying to get, as you know, into the House. This House right here. So I think repealing Section 230 is connected to addressing a host of potential harms

Nothing mentioned in there — from supply chain attacks like xz utils, to a potential TikTok ban, to hackers breaking into hospitals — has anything whatsoever to do with Section 230. She just throws it in at the end as if they’re connected.

She also claimed that Eric Schmidt has come out in favor of “repealing Section 230,” which was news to me. It also appears to be absolutely false. I went and looked, and the only thing I can find is a Digiday article which claims he called for reforms (not a repeal). The article never actually quotes him saying anything related to Section 230 at all, so it’s unclear what (if anything) he actually said. Literally the only quotes from Schmidt are old man stuff about how the kids these days just need to learn how to put down their phones, and then something weird about the fairness doctrine. Not 230.

Later, in the hearing, she was asked about the impact on smaller companies (some of which I mentioned above) and again demonstrates a near total ignorance of how this all works:

There is some concern, it’s sometimes expressed from small businesses that they are going to be the subject of frivolous lawsuits, defamation lawsuits, and they can be sued out of business even though they’ve defamed no one. I’m less concerned about that because if we were to repeal section (c)(1) of Section 230 of those 26 words, I think the First Amendment would govern and we would develop the jurisprudence to deal with small business in a more refined way. I think if anything, small businesses are in a better position to control and oversee what’s on their platforms than these monolithic large companies we have today. So with a bit of caution, I think that could be addressed.

The First Amendment always governs. But Section 230 is the “more refined way” that we’ve developed to help protect small businesses. The main function of Section 230 is to get cases, that would be long and costly if you had to defend them under the First Amendment, tossed out much earlier at the motion to dismiss stage. Literally that’s Section 230’s main purpose.

If you had to fight it out under the First Amendment, you’re talking about hundreds of thousands of dollars and a much longer case. And that cost is going to lead companies to (1) refuse to host lots of protected content, because it’s not worth the hassle, and (2) be much more open to pulling down any content that anyone complains about.

This is not speculative. There have been studies on this. Weaker intermediary laws always lead to massive overblocking. If Stanger had done her research, or even understood any of this, she would know this.

So why is she the one testifying before Congress?

I’ll just conclude with this banger, which was her final statement to Congress:

I just want to maybe take you back to the first part of your question to explain that, which I thought was a good one, which is that we have a long history of First Amendment jurisprudence in this country that in effect has been stopped by Section 230. In other words, if you review, if you remove (c)(1), that First Amendment jurisprudence will develop to determine when it is crime fire in a crowded theater, whether there’s defamation, whether there’s libel. We believe in free speech in this country, but even the First Amendment has some limits put on it and those could apply to the platforms. We have a strange situation right now if we take that issue of fentanyl that we were discussing earlier, what we have right now is essentially a system where we can go after the users, we can go after the dealers, but we can’t go after the mules. And I think that’s very problematic. We should hold the mules liable. They’re part of the system.

Yeah. So. She actually went to the whole fire in a crowded theater thing. This is the dead-on giveaway that the person speaking has no clue about the First Amendment. That’s dicta from a case over 100 years ago that is no longer considered good law, and hasn’t been in decades. Even worse, that dicta came in a case about jailing war protestors.

She also trots out yet another of the tropes that Ken “Popehat” White (an actual First Amendment expert) finds most annoying among people opining on the First Amendment without understanding it: because the First Amendment has some limits, this new limit must be okay. That’s not how it works. As Ken and others have pointed out, the exceptions to the First Amendment are an established, known, and almost certainly closed set.

The Supreme Court has no interest in expanding that set. It refused to do so for animal crush videos, so it’s not going to magically do it for whatever awful speech you think it should limit.

Anyway, it was a shame that Congress chose to hold a hearing on Section 230 and only bring in witnesses who hate Section 230. Not a single witness who could explain why Section 230 is so important was brought in. But, even worse, they gave one of the three witness spots to someone who was spewing word-salad nonsense that didn’t make any sense at all, was often factually incorrect (in hilariously embarrassing ways), and seemed wholly unaware of how any of the relevant pieces work.

Do better, Congress.

Senate Must Follow House’s Lead In Passing Fourth Amendment Is Not For Sale Act

The Fourth Amendment exists for a reason. It’s supposed to protect our private possessions and data from government snooping, unless they have a warrant. It doesn’t entirely prevent the government from getting access to data, they just need to show probable cause of a crime.

But, of course, the government doesn’t like to make the effort.

And these days, many government agencies (especially law enforcement) have decided to take the shortcut that money can buy: they’re just buying private data on the open market from data brokers and avoiding the whole issue of a warrant altogether.

This could be solved with a serious, thoughtful, comprehensive privacy bill. I’m hoping to have a post soon on the big APRA data privacy bill that’s getting attention lately (it’s a big bill, and I just haven’t had the time to go through the entire bill yet). In the meantime, though, there was some good news, with the House passing the “Fourth Amendment is Not For Sale Act,” which was originally introduced in the Senate by Ron Wyden and appears to have broad bipartisan support.

We wrote about it when it was first introduced, and again when the House voted it out of committee last year. The bill is not a comprehensive privacy bill, but it would close the loophole discussed above.

The Wyden bill just says that if a government agency wants to buy such data, if it would have otherwise needed a warrant to get that data in the first place, it should need to get a warrant to buy it in the market as well.

Anyway, the bill passed 219 to 199 in the House, and it was (thankfully) not a partisan vote at all.

Image

It is a bit disappointing that the vote was so close and that so many Representatives want to allow government agencies, including law enforcement, to be able to purchase private data to get around having to get a warrant. But, at least the majority voted in favor of the bill.

And now, it’s up to the Senate. Senator Wyden posted on Bluesky about how important this bill is, and hopefully the leadership of the Senate understand that as well.

Can confirm. This is a huge and necessary win for Americans' privacy, particularly after the Supreme Court gutted privacy protections under Roe. Now it's time for the Senate to do its job and follow suit.

[image or embed]

— Senator Ron Wyden (@wyden.senate.gov) Apr 17, 2024 at 3:30 PM

SCOTUS Needs To Take Up The Texas Age Verification Lawsuit

I think we could witness one of the most important First Amendment legal showdowns ever.

The U.S. Supreme Court is being asked to rule on the constitutionality of mandatory age verification for porn websites. If the high court takes up the case, it would queue up a landmark debate pertaining to the First Amendment and privacy rights of millions of people.

Free Speech Coalition and the parent companies of the largest adult entertainment websites on the web filed suit in the U.S. District Court for the Western District of Texas with the intention to block House Bill (HB) 1181.

HB 1181 requires mandatory age verification for porn websites with users from Texas IP addresses. It also requires pseudoscientific health warnings to be posted on adult websites. Counsel representing the coalition and the porn companies argued that it violated the First Amendment rights of consumers and owners of the websites. This prompted the federal district court to initially enjoin the state of Texas from enforcing the law because its text appeared to be unconstitutional.

Acting Texas Attorney General Angela Colmenero appealed the injunction to the Fifth Circuit Court of Appeals. After a clear demonstration of classic Fifth Circuit tap dancing and the return of Ken Paxton to helm of the Attorney General’s office, Texas was granted permission to enforce the age verification requirements outlined in the law. Luckily, the circuit judges properly applied the Zauderer standard, denying the requirement to post the bogus health warnings.

Soon after this, Paxton announced lawsuits against the parent companies of Pornhub, xHamster, and Stripchat for violations of HB 1181, with penalties under the law totaling millions of dollars in damages. After the lawsuits were announced and filed in Travis County courts, counsel for the plaintiffs sought to put enforcement on hold while they petitioned the high court to take up the case. Justice Samuel Alito, the circuit justice for the Fifth Circuit, has yet to indicate that the case will be taken up by the Supreme Court. There is no reason why they shouldn’t take it up, given how important this case is going forward and how the issue is showing up in so many other states.

The case, Free Speech Coalition et al. v. Paxton, is so important that the national affiliate of the American Civil Liberties Union announced it is aiding the plaintiffs and their current counsel, a team from the big law firm Quinn Emanuel. They will support the petition for a writ of certiorari, potential oral arguments, and more, aiming to render House Bill 1181 and all age verification laws unconstitutional pipedreams.

Plaintiffs accurately argue that this is settled law, referring to the high court’s landmark decision in Reno v. American Civil Liberties Union, which found that segregating the content of the internet by age violates the rights not only of adults but of minors as well. The vast majority of age verification laws as they are structured now do just that.

While the Supreme Court provided for a less restrictive means to filter out minors from viewing age-restricted materials and potentially facing some level of harm, the vehicles of enforcement and some of the options touted in these bills for controlling minors’ web usage are, to the plaintiffs and civil liberties organizations, a violation of the First Amendment. ACLU and Quinn Emanuel attorneys for the plaintiffs present these arguments in their petition for writ of certiorari, which was filed in April 2024. Now, we just need the Supreme Court to take this seriously and not let the Fifth Circuit, the circuit that upheld a ban on drag shows, dictate law for the nation.

Michael McGrady covers the legal and tech side of the online porn business, among other topics.

The US Banning TikTok Would Play Right Into China’s Hands, And Destroy Decades Of US Work On Promoting An Open Internet

Apparently, the TikTok ban bill is back.

Speaker Mike Johnson plans to include TikTok divestiture legislation already passed by the House in a fast-moving aid package for Ukraine and Israel that the chamber is set to clear on Saturday. The Senate is expected to quickly take up the measure, and President Joe Biden promised Wednesday to sign it immediately.

If signed into law, the bill would deliver a significant blow to a video-sharing app used by as many as 170 million Americans. Its enactment would also mark a major setback to the company’s intense lobbying efforts, especially by Chief Executive Officer Shou Chew, who made the rounds on Capitol Hill last month in a bid to get the Senate to squelch the legislation.

I’ve already explained why the TikTok ban is both unconstitutional and would not do anything to fix the “concerns” that people have raised about it. We’ve also discussed how those most vocally pushing for the TikTok ban appear to be financially conflicted.

But even more important than all of that: a TikTok ban would be a disaster for the open web. Yes, other countries have banned apps, but they tend to be authoritarian countries that have never liked and never supported an open web.

Banning an entire app in the US would be a massive attack on the very concept of an open web. And that’s really unfortunate, given that the US used to be the world’s most vocal supporter of the web being kept open and free.

The New York Times recently had a good article calling out what a disaster the ban would be for the open web.

Digital rights groups and others around the world have taken notice — and raised the question of how the moves against TikTok contradict the United States’ arguments in favor of an open internet.

A Russian opposition blogger, Aleksandr Gorbunov, posted on social media last month that Russia could use the move to shut down services like YouTube. And digital rights advocates globally are expressing fears of a ripple effect, with the United States providing cover for authoritarians who want to censor the internet.

In March, the Chinese government, which controls its country’s internet, said America had “one way of saying and doing things about the United States, and another way of saying and doing things about other countries,” citing the TikTok legislation.

Passing the TikTok ban would effectively be telling the world (1) it’s totally okay to ban apps you don’t like, and (2) the U.S.’s long-standing commitment to the open web was always fake and always bullshit, because the second a successful foreign app came along, we tossed out those principles.

“It would diminish the U.S.’s standing in promoting internet freedom,” said Juan Carlos Lara, the executive director of Derechos Digitales, a Latin American digital rights group based in Chile. “It would definitely not bolster its own case for promoting a free and secure, stable and interoperable internet.”

And that signal will be heard loud and clear around the world:

Mishi Choudhary, a lawyer who founded the New Delhi-based Software Freedom Law Center, said the Indian government would also use a U.S. ban to justify further crackdowns. It has already engaged in internet shutdowns, she said, and it banned TikTok in 2020 over border conflicts with China.

“This gives them good reason to find confidence in their past actions, but also emboldens them to take similar future actions,” she said in an interview.

Mr. Lara of Derechos Digitales noted that countries like Venezuela and Nicaragua had already passed laws that gave the government more control over online content. He said increased government control of the internet was a “tempting idea” that “really risks materializing if such a thing is seen in places like the U.S.”

A forced sale or ban of TikTok could also make it harder for the American government to ask other countries to embrace an internet governed by international organizations, digital rights experts said.

And, if the goal here is to hurt China in particular, that may backfire massively:

Patrick Toomey, the deputy director of the National Security Project at the American Civil Liberties Union, said that if the TikTok measure became law, the “hypocrisy would be inescapable and the dividends to China enormous.”

China has long made a big deal whenever the US government is hypocritical like this. This would be a huge PR win for the Chinese government. It would allow it to claim that its Great Firewall approach to the internet is right, and that the US was admitting that an open internet fails. It would allow China to call out US hypocrisy, and that matters a lot at this moment, when China is working hard to build stronger relationships with lots of countries around the globe.

Banning TikTok won’t help the US against China. It will play right into China’s hands. It doesn’t need TikTok to get data on Americans, nor to try to influence Americans. But, destroying decades of US foreign policy promoting an open and free internet serves China’s interests massively.

Congressional Testimony On Section 230 Was So Wrong That It Should Be Struck From The Record

A few months ago, we wondered if Wired had fired its entire fact-checking staff because it published what appeared to be a facts-optional article co-authored by the consistently wrong Jaron Lanier and an academic I’d not come across before, Allison Stanger. The article suggested that getting rid of Section 230 “could save everything.” Yet the article was so far off-base that it was in the “not even wrong” category of wrongness.

I’m not going to review all the reasons it was wrong. You can go back to my original article for that, though I will note that the argument seemed to suggest that getting rid of Section 230 would both lead to better content moderation and, at the same time, only moderation based on the First Amendment. Both of those points are obviously wrong, but the latter one is incoherent.

Given his long track record of wrongness, I had assumed that much of the article likely came from Lanier. However, I’m going to reassess that in light of Stanger’s recent performance before the House Energy & Commerce Committee. Last week, there was this weird hearing about Section 230, in which the Committee invited three academic critics of Section 230, and not a single person who could counter their arguments and falsehoods. We talked about this hearing a bit in this week’s podcast, with Rebecca MacKinnon from the Wikimedia Foundation.

Stanger was one of the three witnesses. The other two, Mary Anne Franks and Mary Graw Leary, presented some misleading and confused nonsense about Section 230. That nonsense, at least, fits into the normal framework of the debate: confusion about how (c)(1) and (c)(2) interact, confusion about the purpose of Section 230, and (especially) confusion about CSAM, along with an apparent unawareness that federal criminal behavior is exempted from the law.

But, let’s leave that aside. Because Stanger’s submission was so far off the mark that whoever invited her should be embarrassed. I’ve seen some people testify before Congress without knowing what they’re talking about, but I cannot recall seeing testimony this completely, bafflingly wrong before. Her submitted testimony is wrong in all the ways that the Wired article was wrong and more. There are just blatant factual errors throughout it.

It is impossible to cover all of the nonsense, so we’re just going to pick some gems.

Without Section 230, existing large social media companies would have to adapt. Decentralized Autonomous Organizations, (DAOs) such as BlueSky and Mastodon, would become more attractive. The emergent DAO social media landscape should serve to put further brakes on virality, allowing a more regional social media ecosystem to emerge, thereby creating new demand for local media. In an ideal world, networks of DAOs would comprise a new fediverse (a collection of social networking servers which can communicate with each other, while remaining independently controlled), where users would have greater choice and control over the communities of which they are a part.

So, um. That’s not what DAOs are, professor. You seem to be confusing decentralized social media with decentralized autonomous organizations, which are a wholly different thing. This is kind of like saying “social security benefits” when you mean “social media influencers” because both begin with “social.” They’re not the same thing.

A decentralized social media site is what it says on the tin: a type of social media that isn’t wholly controlled by a single company. Different bits of it can be controlled by others, whether by its users or by alternative third-party providers. A DAO is an organization that uses mechanisms like cryptocurrency and tokens to enable a kind of democratic voting, or (possibly) a set of smart contracts, to determine how the loosely defined organization is run. They are not the same.

In theory, a decentralized social media site could be run by a DAO, but I don’t know of any that currently are.

Also, um, decentralized social media can only really exist because of Section 230. “Without Section 230,” you wouldn’t have Bluesky or Mastodon, because they would face ruinous litigation for hosting content that people would sue over. So, no, you would not have either more decentralized social media (which I think is what you meant) or DAOs (which are wholly unrelated). You’d have a lot less, because hosting third-party speech would come with way more liability risk.

Also, there’s nothing inherent to decentralized social media that means you’d “put the brakes on virality.” Mastodon has developed to date in a manner designed to tamp down virality, but Bluesky hasn’t. Nor have other decentralized social media offerings, many of which hope to serve a global conversation where virality is a part of it. And that wouldn’t really change with or without Section 230. Mastodon made that decision because of the types of communities it wanted to foster. And, indeed, its ability to do that is due, in part, to intermediary liability protections like Section 230, which enable the kind of small, more focused community moderation Mastodon already embraces.

It’s really not clear to me that Professor Stanger even knows what Section 230 does.

Non-profits like Wikipedia are concerned that their enterprise could be shut down through gratuitous defamation lawsuits that would bleed them dry until they ceased to exist (such as what happened with Gawker). I am not convinced this is a danger for Wikipedia, since their editing is done by humans who have first amendment rights, and their product is not fodder for virality….

Again, wut? The fact that their editing is “done by humans” has literally no impact on anything here. Why even mention that? Humans get sued for defamation all the time. And, if they’re more likely to get sued for defamation, they’re less likely to even want to edit at all.

And people get mad about their Wikipedia articles all the time, and sometimes they sue over them. Section 230 gets those lawsuits thrown out. Without it, those lawsuits would last longer and be more expensive.

Again, it’s not at all clear if Prof. Stanger even knows what Section 230 is or how it works.

The Facebook Files show that Meta knew that its engagement algorithms had adverse effects on the mental health of teenage girls, yet it has done nothing notable to combat those unintended consequences. Instead, Meta’s lawyers have invoked Section 230 in lawsuits to defend itself against efforts to hold it liable for serious harms

Again, this is just wrong. What the crux of the Facebook Files showed was that Meta was, in fact, doing research to learn about where its algorithms might cause harm in order to try to minimize that harm. However, because of some bad reporting, it now means that companies will be less likely to even do that research, because people like Professor Stanger will misrepresent it, claiming that they did nothing to try to limit the harms. This is just outright false information.

Also, the cases where Meta has invoked Section 230 would be unrelated to the issue being discussed here because 230 is about not being held liable for user content.

The online world brought to life by Section 230 now dehumanizes us by highlighting our own insignificance. Social media and cancel culture make us feel small and vulnerable, where human beings crave feeling large and living lives of meaning, which cannot blossom without a felt sense of personal agency that our laws and institutions are designed to protect. While book publishers today celebrate the creative contributions of their authors, for-profit Internet platforms do not.

I honestly have no idea what’s being said here. “Dehumanizes us by highlighting our own insignificance?” What are you even talking about? People were a lot more “insignificant” pre-internet, when they had no way to speak out. And what does “cancel culture” have to do with literally any of this?

Without Section 230, companies would be liable for the content on their platforms. This would result in an explosion of lawsuits and greater caution in such content moderation, although companies would have to balance such with first amendment rights. Think of all the human jobs that could be generated!

Full employment for tort lawyers! I mean, this is just a modern version of Bastiat’s broken window fallacy. Think of all the economic activity if we just break all the windows in our village!

Again and again, it becomes clear that Stanger has no clue how any of this works. She does not understand Section 230. She does not understand the internet. She does not understand the First Amendment. And she does not understand content moderation. It’s a hell of a thing, considering she is testifying about Section 230 and its impact on social media and the First Amendment.

At a stroke, content moderation for companies would be a vastly simpler proposition. They need only uphold the First Amendment, and the Courts would develop the jurisprudence to help them do that, rather than to put the onus of moderation entirely on companies.

That is… not at all how it would work. They don’t just need to “uphold the First Amendment” (which is not a thing that companies can even do). The First Amendment restricts the government, not companies: it prevents the government from passing laws that infringe on a person’s ability to express themselves.

Instead, as has been detailed repeatedly, companies would face the so-called “moderator’s dilemma.” Because the First Amendment requires distributors to have actual knowledge of content violating the law to be liable, a world without Section 230 would incentivize one of two things, neither of which is “upholding the First Amendment.” They would either let everything go and do as little moderation as possible (so as to avoid the requisite knowledge), or they’d become very aggressive in limiting and removing content to avoid liability (even though this wouldn’t work and they’d still get hit with tons of lawsuits).

We’ve been here before. When government said the American public owned the airwaves, so television broadcasting would be regulated, they put in place regulations that supported the common good. The Internet affects everyone, and our public square is now virtual, so we must put in place measures to ensure that our digital age public dialogue includes everyone. In the television era, the fairness doctrine laid that groundwork. A new lens needs to be developed for the Internet age.

Except, no, that’s just factually wrong. The only reason that the government was able to put regulations on broadcast television was because the government controlled the public spectrum which they licensed to the broadcasters. The Supreme Court made clear in Red Lion that without that, they could not hinder the speech of media companies. So, the idea that you can just apply similar regulations to the internet is just fundamentally clueless. The internet is not publicly owned spectrum licensed to anyone.

While Section 230 perpetuates an illusion that today’s social media companies are common carriers like the phone companies, they are not. Unlike Ma Bell, they curate the content they transmit to users

Again, it appears the Professor is wholly unaware of Section 230 and how it works. The authors of Section 230 made it clear over and over again that they wrote 230 to be the opposite of common carriage. No one who supports Section 230 thinks it makes platforms into common carriers, because it does not. The entire point was to free up companies to choose how to curate content, so as to allow those companies to craft the kinds of communities they wanted. The only people claiming the “illusion” of common carrierness are those who are trying to destroy Section 230.

So there is no “illusion” here, unless you don’t understand what you’re talking about.

The repeal of Section 230 would also be a step in the right direction in addressing what are presently severe power imbalances between government and corporate power in shaping democratic life. It would also shine a spotlight on a globally disturbing fact: the overwhelming majority of global social media is currently in the hands of one man (Mark Zuckerberg), while nearly half the people on earth have a Meta account. How can that be a good thing under any scenario for the free exchange of ideas?

I mean, we agree that it’s bad that Meta is so big. But if you remove Section 230 (as Meta itself has advocated for!), you help Meta get bigger and harm the competition. Meta has a building full of lawyers. They can handle the onslaught of lawsuits that this would bring (as Stanger herself gleefully cheers on). It’s everyone else, the smaller sites, such as the decentralized players (not DAOs) who would get destroyed.

Mastodon admins aren’t going to be able to afford to pay to defend the lawsuits. Bluesky doesn’t have a building full of lawyers. The big winner here would be Meta. The cost to Meta of removing Section 230 is minimal. The cost to everyone trying to eat away at Meta’s marketshare would be massive.

The new speech is governed by the allocation of virality in our virtual public square. People cannot simply speak for themselves, for there is always a mysterious algorithm in the room that has independently set the volume of the speaker’s voice. If one is to be heard, one must speak in part to one’s human audience, in part to the algorithm. It is as if the constitution had required citizens to speak through actors or lawyers who answered to the Dutch East India Company, or some other large remote entity. What power should these intermediaries have? When the very logic of speech must shift in order for people to be heard, is that still free speech? This was not a problem foreseen in the law.

I mean, this is just ahistorical nonsense. Historically, most people had no way to get their message out at all. You could talk to your friends, family, co-workers, and neighbors, and that was about it. If you wanted to reach beyond that small group, you required some large gatekeeper (a publisher, a TV or radio producer, a newspaper) to grant you access, which they refused for the vast majority of people.

The internet flipped all that on its head, allowing anyone to effectively speak to anyone. The reason we have algorithms is not “Section 230,” and the algorithms aren’t “setting the volume.” They came in to deal with the simple fact that there’s just too much information, and it was flooding the zone. People wanted to find information that was more relevant to them, and with the amount of content available online, the only way to manage that was with some sort of algorithm.

But, again, the rise of algorithms is not a Section 230 issue, even though Stanger seems to think it is.

Getting rid of the liability shield for all countries operating in the United States would have largely unacknowledged positive implications for national security, as well as the profit margins for US-headquartered companies. Foreign electoral interference is not in the interests of democratic stability, precisely because our enemies benefit from dividing us rather than uniting us. All foreign in origin content could therefore be policed at a higher standard, without violating the first amendment or the privacy rights of US citizens. As the National Security Agency likes to emphasize, the fourth amendment does not apply to foreigners and that has been a driver of surveillance protocols since the birth of the Internet. It is probable that the Supreme Court’s developing first amendment jurisprudence for social media in a post-230 world would embrace the same distinction. At a stroke, the digital fentanyl that TikTok represents in its American version could easily be shut down, and we could through a process of public deliberation leading to new statutory law collectively insist on the same optimization targets for well-being, test scores, and time on the platform that Chinese citizens currently enjoy in the Chinese version of TikTok (Douyin)

Again, this is a word salad that is mostly meaningless.

First of all, none of this has anything to do with Section 230, but rather the First Amendment. And it’s already been noted, clearly, that the First Amendment protects American users of foreign apps.

No one is saying “you can’t ban TikTok because of 230,” they’re saying “you can’t ban TikTok because of the First Amendment.” The Supreme Court isn’t going to magically reinvent long-standing First Amendment doctrine because 230 is repealed. This is nonsense.

And, we were just discussing what utter nonsense it is to claim that TikTok is “digital fentanyl” so I won’t even bother repeating that.

There might also be financial and innovation advantages for American companies with this simple legislative act. Any commercial losses for American companies from additional content moderation burdens would be offset by reputational gains and a rule imposed from without on what constitutes constitutionally acceptable content. Foreign electoral interference through misinformation and manipulation could be shut down as subversive activity directed at the Constitution of the United States, not a particular political party.

This part is particularly frustrating. This is why internet companies already moderate. Stanger’s piece repeatedly seems to complain both about too little moderation (electoral interference! Alex Jones!) and too much moderation (algorithms! dastardly Zuck deciding what I can read!).

She doesn’t even seem to realize that her argument is self-contradictory.

But, here, the supposed “financial and innovation advantages” from American companies being able to get “reputational gains” by stopping “misinformation” already exist. And they exist only because of Section 230, the very thing Professor Stanger says we need to remove in order to get what it already enables, and what would be taken away if it were repealed.

This whole thing makes me want to bang my head on my desk repeatedly.

Companies moderate today to (1) make users’ experience better and (2) to make advertisers happier that they’re not facing brand risk from having ads appear next to awful content. The companies that do better already achieve that “reputational benefit,” and they can do that kind of moderation because they know Section 230 prevents costly, wasteful, vexatious litigation from getting too far.

If you remove Section 230, that goes away. As discussed above, companies then are much more limited in the kinds of moderation they can do, which means users have a worse experience and advertisers have a worse experience, leading to reputational harm.

Today, companies already try to remove or diminish the power of electoral interference. That’s a giant part of trust & safety teams’ efforts. But they can really only do it safely because of 230.

The attention-grooming model fostered by Section 230 leads to stupendous quantities of poor-quality data. While an AI model can tolerate a significant amount of poor-quality data, there is a limit. It is unrealistic to imagine a society mediated by mostly terrible communication where that same society enjoys unmolested, high-quality AI. A society must seek quality as a whole, as a shared cultural value, in order to maximize the benefits of AI. Now is the best time for the tech business to mature and develop business models based on quality.

I’ve read this paragraph multiple times, and I still don’t know what it’s saying. Section 230 does not lead to an “attention-grooming model.” That’s just how society works. And, then, when she says society must seek quality as a whole, given how many people are online, the only way to do that is with algorithms trying to make some sort of call on what is, and what is not, quality.

That’s how this works.

Does she imagine that without Section 230, algorithms will go away, but good quality content will magically rise up? Because that’s not how any of this actually works.

Again, there’s much more in her written testimony, and none of it makes any sense at all.

Her spoken testimony was just as bad. Rep. Bob Latta asked her about the national security claims (some of which were quoted above) and we got this word salad, none of which has anything to do with Section 230:

I think it’s important to realize that our internet is precisely unique because it’s so open and that makes it uniquely vulnerable to all sorts of cyber attacks. Just this week, we saw an extraordinarily complicated plot that is most likely done by China, Russia or North Korea that could have blown up the internet as we know it. If you want to look up XZ Utils, Google that and you’ll find all kinds of details. They’re still sorting out what the intention was. It’s extraordinarily sophisticated though, so I think that the idea that we have a Chinese company where data on American children is being stored and potentially utilized in China, can be used to influence our children. It can be used in any number of ways no matter what they tell you. So I very much support and applaud the legislation to repeal, not to repeal, but to end TikToks operations in the United States.

The national security implications are extraordinary. Where the data is stored is so important and how it can be used to manipulate and influence us is so important. And I think the next frontier that I’ll conclude with this, for warfare, is in cyberspace. It’s where weak countries have huge advantages. They can pour resources into hackers who could really blow up our infrastructure, our hospitals, our universities. They’re even trying to get, as you know, into the House. This House right here. So I think repealing Section 230 is connected to addressing a host of potential harms

Nothing mentioned in there — from supply chain attacks like xz utils, to a potential TikTok ban, to hackers breaking into hospitals — has anything whatsoever to do with Section 230. She just throws it in at the end as if they’re connected.

She also claimed that Eric Schmidt has come out in favor of “repealing Section 230,” which was news to me. It also appears to be absolutely false. I went and looked, and the only thing I can find is a Digiday article which claims he called for reforms (not a repeal). The article never actually quotes him saying anything related to Section 230 at all, so it’s unclear what (if anything) he actually said. Literally the only quotes from Schmidt are old man stuff about how the kids these days just need to learn how to put down their phones, and then something weird about the fairness doctrine. Not 230.

Later, in the hearing, she was asked about the impact on smaller companies (some of which I mentioned above) and again demonstrates a near total ignorance of how this all works:

There is some concern, it’s sometimes expressed from small businesses that they are going to be the subject of frivolous lawsuits, defamation lawsuits, and they can be sued out of business even though they’ve defamed no one. I’m less concerned about that because if we were to repeal section (c)(1) of Section 230 of those 26 words, I think the First Amendment would govern and we would develop the jurisprudence to deal with small business in a more refined way. I think if anything, small businesses are in a better position to control and oversee what’s on their platforms than these monolithic large companies we have today. So with a bit of caution, I think that could be addressed.

The First Amendment always governs. But Section 230 is the “more refined way” that we’ve developed to help protect small businesses. The main function of Section 230 is to get cases, that would be long and costly if you had to defend them under the First Amendment, tossed out much earlier at the motion to dismiss stage. Literally that’s Section 230’s main purpose.

If you had to fight it out under the First Amendment, you’re talking about hundreds of thousands of dollars and a much longer case. And that cost is going to lead companies to (1) refuse to host lots of protected content, because it’s not worth the hassle, and (2) be much more open to pulling down any content that anyone complains about.

This is not speculative. There have been studies on this. Weaker intermediary laws always lead to massive overblocking. If Stanger had done her research, or even understood any of this, she would know this.

So why is she the one testifying before Congress?

I’ll just conclude with this banger, which was her final statement to Congress:

I just want to maybe take you back to the first part of your question to explain that, which I thought was a good one, which is that we have a long history of First Amendment jurisprudence in this country that in effect has been stopped by Section 230. In other words, if you review, if you remove (c)(1), that First Amendment jurisprudence will develop to determine when it is crime fire in a crowded theater, whether there’s defamation, whether there’s libel. We believe in free speech in this country, but even the First Amendment has some limits put on it and those could apply to the platforms. We have a strange situation right now if we take that issue of fentanyl that we were discussing earlier, what we have right now is essentially a system where we can go after the users, we can go after the dealers, but we can’t go after the mules. And I think that’s very problematic. We should hold the mules liable. They’re part of the system.

Yeah. So. She actually went to the whole fire in a crowded theater thing. This is the dead-on giveaway that the person speaking has no clue about the First Amendment. That’s dicta from a case decided over 100 years ago, one that is no longer considered good law and hasn’t been in decades. Even worse, that dicta came in a case about jailing war protestors.

She also trots out one of the tropes that Ken “Popehat” White (an actual First Amendment expert) finds most annoying among people opining on the First Amendment without understanding it: because the First Amendment has some limits, this new limit must be okay. That’s not how it works. As Ken and others have pointed out, the exceptions to the First Amendment are an established, known, and almost certainly closed set.

The Supreme Court has no interest in expanding that set. It refused to do so for animal crush videos, so it’s not going to magically do it for whatever awful speech you think it should limit.

Anyway, it was a shame that Congress chose to hold a hearing on Section 230 and only bring in witnesses who hate Section 230. Not a single witness who could explain why Section 230 is so important was brought in. But, even worse, they gave one of the three witness spots to someone spewing word-salad-level nonsense that didn’t make any sense at all, was often factually incorrect (in hilariously embarrassing ways), and seemed wholly unaware of how any relevant thing worked.

Do better, Congress.

Texas Court Dismisses Ken Paxton’s Lawsuit Against Yelp For Accurately Describing Crisis Pregnancy Centers

Last fall, we wrote about Yelp going to court in California to try to block Texas’s indicted (and shortly facing trial) Attorney General Ken Paxton from suing the company for using its own speech to accurately warn users that “Crisis Pregnancy Centers” do not generally offer any actual medical care.

As you may know, anti-abortion advocates have opened up so-called “crisis pregnancy centers,” which are designed to look like medical facilities that help pregnant mothers consider their options. Many people searching for abortion providers are pointed to these crisis centers instead. The centers pretend to be neutral advocates, while in reality they are designed to steer expectant mothers away from abortion.

Yelp, quite reasonably, decided to use its own First Amendment rights to provide some more info about those crisis pregnancy centers to better inform potential visitors. It posted notices on crisis pregnancy centers saying: “This is a Crisis Pregnancy Center. Crisis Pregnancy Centers typically provide limited medical services and may not have licensed medical professionals onsite.”

Ken Paxton, who cosplays as a “free speech” supporter on the internet, wrote a letter to Yelp threatening to sue them for this speech. In response, Yelp changed the message to be even more accurate, but Paxton was still upset with their speech, which now said “This is a Crisis Pregnancy Center. Crisis Pregnancy Centers do not offer abortions or referrals to abortion providers.” This is accurate speech, which even Paxton admits is accurate. He just doesn’t like it.

After Texas threatened to sue Yelp once again, Yelp went to court first in a California federal court to get Paxton to shut up and to stop interfering with the company’s free speech rights. Paxton responded by suing Yelp in a Texas state court. Unfortunately, the court in California “reluctantly” rejected the lawsuit due to “Younger abstention,” which basically says a federal court should abstain from hearing a case while a state court is considering the same matter. Yelp has appealed to the 9th Circuit.

In the meantime, though, last week, the Texas state court tossed out Paxton’s lawsuit. There’s not much to go on in the order, as it basically just says “we agree with Yelp’s special appearance” in this case.

Having considered Defendant Yelp Inc.’s (“Yelp”) Verified Special Appearance (“Special Appearance”), the responses, and the replies, if any, the Court finds that the Special Appearance should be GRANTED.

You can look at the “Special Appearance” by Yelp which lays out the main reasons the case should be dismissed, with the big one being the court’s lack of personal jurisdiction over Yelp:

This case involves a misguided lawsuit by the State of Texas (“State”) against Yelp, a California-based website operator with no offices in Texas, for allegedly violating the Texas Deceptive Trade Practices Act (“DTPA”) by posting a truthful consumer notice about crisis pregnancy centers on its nationwide website. In the Petition, the State does not allege that Yelp is “at home” in Texas, that the consumer notice was purposely directed at Texas, or that the DTPA claim arises from or relates to Yelp’s contacts with Texas. Instead, the State admits that Yelp is located in San Francisco, California, alleges that Yelp “targeted pregnancy resources centers nationwide,” and concedes that the DTPA claim arises out of a notice posted, from California, on the Yelp business pages of “every pregnancy resource center across the nation,” not merely those in Texas.

Yelp files this Special Appearance because the Petition should be dismissed for lack of personal jurisdiction for many reasons. Yelp, a nonresident of Texas, lacks sufficient minimum contacts with Texas to demonstrate purposeful availment, the DTPA claim does not arise from or relate to Yelp’s contacts with Texas, and the exercise of jurisdiction over Yelp would offend traditional notions of fair play and substantial justice. Yelp should not be haled across the country into a foreign jurisdiction to respond to a baseless claim premised on conduct that occurred exclusively in California.

So, it sure sounds like the Texas court tossed the case out on jurisdictional grounds, without even needing to get to the ridiculousness of Texas trying to sue a company over its accurate speech. The case in front of the 9th Circuit remains in play, though I’m not sure how this latest situation will play into that. If the Texas case is now dead (though, I guess Texas could appeal), then the Younger abstention issue should be moot?

Either way, it’s yet another example (one of so many) of Texas showing off its unconstitutional, censorial tendencies — in a state with leadership who claims to be supportive of free speech.

When Viral Advocacy Fails: TikTok’s Call Flood To Congress Backfires

Flooding Congress with phone calls can, at times, work wonders to stop bad bills. The SOPA blackout 12 years ago was one of the most effective advocacy campaigns in history. I happened to be at the Capitol that day, and hearing phones ring non-stop as I wandered the halls between meetings was amazing.

However, that process was carefully planned out over weeks, with sites pushing a very clear message of why internet users should call Congress and complain about the terrible copyright laws that were being pushed.

It appears that TikTok may have taken the wrong lesson from all that and assumed that simply flooding Congress with calls is an effective strategy. It can be, but you have to equip callers with a basic understanding of what it is that they’re calling for and why. And maybe it doesn’t make sense to do it on a bill built off the (mostly false) belief that your app is controlling the minds of gullible American voters.

On Thursday, TikTok put up a pop-up on all US users’ screens when they went to get their daily fill of random videos:

“Stop a TikTok shutdown!” it yells, claiming that “Congress is planning a total ban of TikTok. Speak up now — before your government strips 170 million Americans of their Constitutional right to free expression.”

The bill in question is stupid. It’s a fear-mongering (bipartisan) bunch of grandstanding nonsense. It doesn’t technically “ban” TikTok, but would directly require ByteDance to divest its ownership in the company. If ByteDance does not do so, then it is a ban (despite the bill’s sponsors insisting it’s not). It does seem like a pretty clear bill of attainder, targeting a single company, TikTok, out of yet another fear-mongering moral panic that a successful internet company coming out of China must be evil.

As we’ve been saying for years now, if the fear is about the privacy of American users of the platform, Congress could pass a comprehensive privacy bill. They just choose not to do so. Instead, they play up a silly culture war, which will only lead to even more retribution for American apps outside the US. Indeed, expect to see other countries passing similar bills demanding that US companies divest from successful apps in their countries, as a result of this stupid bill.

And, on top of that, the bill is almost certainly a First Amendment violation, as has been found during previous attempts to effectively ban TikTok, none of which have gone well in court.

TikTok’s gambit apparently worked in terms of getting people to call. But it didn’t always effectively get the message out:

TikTok users flooded some congressional offices with dozens of calls. Results were mixed: Some staffers dismissed the callers as uninformed, or as pranksters, or as “teenagers and old people saying they spend their whole day on the app.”

And, look, when you have a bunch of overly anxious politicians who think that TikTok is like Chinese mind control over American brains (it’s not, but that’s what they seem to think), it’s not difficult to see how telling TikTok users to call Congress could drive those politicians to think this is even more evidence of why the bill is needed, especially when there is a flood of calls from unsophisticated constituents talking about how they “spend their whole day on the app.”

And that seems to have been the case.

House Energy and Commerce Chair Cathy McMorris Rodgers (R-Wash.) said if anything, TikTok’s orchestrated calling campaign “only exposed the degree in which TikTok can manipulate and target a message.”

And thus it’s no surprise that the committee voted 50 to 0 to advance the bill:

Lawmakers on the Energy and Commerce Committee, which greenlit the bill Thursday afternoon after months of negotiations, said the intent was not to get rid of TikTok, but to prevent a Chinese company from having access to large troves of American data. The committee voted 50-0 to advance the bill to the full House of Representatives.

Again, it’s a painfully stupid and reactionary bill, but this campaign seemed pretty mistargeted. There was a way in which TikTok could have more effectively leveraged its large user base to talk about the problems and risks of such a bill. But just sending them in to scream at Congress was perhaps not the best approach given the specific animus behind this bill.

Once Again, Ron Wyden Had To Stop Bad “Protect The Children” Internet Bills From Moving Forward

Senator Ron Wyden is a one-man defense against horrible bills moving forward in the Senate. Last month, he stopped Josh Hawley from moving a very problematic STOP CSAM bill forward, and now he’s had to do it again.

A (bipartisan) group of senators traipsed to the Senate floor Wednesday evening. They tried to skip the line and quickly move some bad bills forward by asking for unanimous consent. Unless someone’s there to object, it effectively moves the bill forward, ending committee debate about it. Traditionally, this process is used for moving non-controversial bills, but lately it’s been used to grandstand about stupid bills.

Senator Lindsey Graham announced his intention to pull this kind of stunt on bills that he pretends are about “protecting the children” but which do no such thing in reality. Instead of it being just him, he rounded up a bunch of senators and they all pulled out the usual moral panic lines about two terrible bills: EARN IT and STOP CSAM. Both bills are framed to sound like good ideas about protecting children, but the devil is very much in the details, as both bills undermine end-to-end encryption while assuming that if you just put liability on websites, they’ll magically make child predators disappear.

And while both bills pretend not to attack encryption — and include some language about how they’re not intended to do so — both of them leave open the possibility that the use of end-to-end encryption will be used as evidence against websites for bad things done on those websites.

But, of course, as is the standard for this group of grandstanding senators, they present these bills as (1) perfect and (2) necessary to “protect the children.” The problem is that the bills are actually (1) ridiculously problematic and (2) going to help bad people online by making end-to-end encryption a liability.

The bit of political theater kicked off with Graham having Senators Grassley, Cornyn, Durbin, Klobuchar, and Hawley talk on and on about the poor kids online. Notably, none of them really talked about how their bills worked (because that would reveal how the bills don’t really do what they pretend they do). Durbin whined about Section 230, misleadingly and mistakenly blaming it for the fact that bad people exist. Hawley did the thing that he loves doing, in which he does his mock “I’m a big bad Senator taking on those evil tech companies” schtick, while flat out lying about reality.

But Graham closed it out with the most misleading bit of all:

In 2024, here’s the state of play: the largest companies in America — social media outlets that make hundreds of billions of dollars a year — you can’t sue if they do damage to your family by using their product because of Section 230

This is a lie. It’s a flat out lie and Senator Graham and his staffers know this. All Section 230 says is that if there is content on these sites that violates the law, liability falls on whoever created the content. If the features of the site itself “do damage,” then you can absolutely sue the company. But no one is actually complaining about the features. They’re complaining about content. And liability for the content has to go to whoever created it.

The problem here is that Graham and all the other senators want to hold companies liable for the speech of users. And that is a very, very bad idea.

Now these platforms enrich our lives, but they destroy our lives.

These platforms are being used to bully children to death.

They’re being used to take sexual images and voluntarily and voluntarily obtain and sending them to the entire world. And there’s not a damn thing you can do about it. We had a lady come before the committee, a mother saying that her daughter was on a social media site that had an anti-bullying provisions. They complained three times about what was happening to her daughter. She killed herself. They went to court. They got kicked out by section 230.

I don’t know the details of this particular case, but first off, the platforms didn’t bully anyone. Other people did. Put the blame on the people actually causing the harm. Separately, and importantly, you can’t blame someone’s suicide on someone else when no one knows the real reasons. Otherwise, you actually encourage increased suicides, as it gives people an ultimate way to “get back” at someone.

Senator Wyden got up and, as he did last month, made it quite clear that we need to stop child sexual abuse and predators. He talked about his bill, which would actually help on these issues by giving law enforcement the resources it needs to go after the criminals, rather than the idea of the bills being pushed that simply blame social media companies for not magically making bad people disappear.

We’re talking about criminal issues, and Senator Wyden is looking to handle it by empowering law enforcement to deal with the criminals. Senators Graham, Durbin, Grassley, Cornyn, Klobuchar, and Hawley are looking to sue tech companies for not magically stopping criminals. One of those approaches makes sense for dealing with criminal activity. And yet it’s the other one that a bunch of senators have lined up behind.

And, of course, beyond the dangerous approach of EARN IT, it inherently undermines encryption, which makes kids (and everyone) less safe, as Wyden also pointed out.

Now, the specific reason I oppose EARN It is it will weaken the single strongest technology that protects children and families online. Something known as strong encryption.

It’s going to make it easier to punish sites that use encryption to secure private conversations and personal devices. This bill is designed to pressure communications and technology companies to scan users messages.

I, for one, don’t find that a particularly comforting idea.

Now, the sponsors of the bill have argued — and Senator Graham’s right, we’ve been talking about this a while — that their bills don’t harm encryption. And yet the bills allow courts to punish companies that offer strong encryption.

In fact, while it includes some language about protecting encryption, it explicitly allows encryption to be used as evidence for various forms of liability. Prosecutors are going to be quick to argue that deploying encryption was evidence of a company’s negligence in preventing the distribution of CSAM, for example.

The bill is also designed to encourage scanning of content on users phones or computers before information is sent over the Internet which has the same consequences as breaking encryption. That’s why a hundred civil society groups including the American Library Association — people then I think all of us have worked for — Human Rights Campaign, the list goes… Restore the Fourth. All of them oppose this bill because of its impact on essential security.

Weakening encryption is the single biggest gift you can give to these predators and these god-awful people who want to stalk and spy on kids. Sexual predators are gonna have a far easier time stealing photographs of kids, tracking their phones, and spying on their private messages once encryption is breached. It is very ironic that a bill that’s supposed to make kids safer would have the effect of threatening the privacy and security of all law-abiding Americans.

My alternative — and I want to be clear about this because I think Senator Graham has been sincere about saying that this is a horrible problem involving kids. We have a disagreement on the remedy. That’s what is at issue.

And what I want us to do is to focus our energy on giving law enforcement officials the tools they need to find and prosecute these monstrous criminals responsible for exploiting kids and spreading vile abuse materials online.

That can help prevent kids from becoming victims in the first place. So I have introduced to do this: the Invest in Child Safety Act to direct five billion dollars to do three specific things to deal with this very urgent problem.

Graham then gets up to respond and lies through his teeth:

There’s nothing in this bill about encryption. We say that this is not an encryption bill. The bill as written explicitly prohibits courts from treating encryption as an independent basis for liability.

We’re agnostic about that.

That’s not true. As Wyden said, the bill has some hand-wavey language about not treating encryption as an independent basis for liability, but it does explicitly allow for encryption to be one of the factors that can be used to show negligence by a platform, as long as you combine it with other factors.

Section (7)(A) is the hand-wavey bit saying you can’t use encryption as “an independent basis” to determine liability, but (7)(B) effectively wipes that out by saying nothing in that section about encryption “shall be construed to prohibit a court from considering evidence of actions or circumstances described in that subparagraph.” In other words, you just have to add a bit more, and then can say “and also, look, they use encryption!”

And another author of the bill, Senator Blumenthal, has flat out said that EARN IT is deliberately written to target encryption. He falsely claims that companies would “use encryption… as a ‘get out of jail free’ card.” So, Graham is lying when he says encryption isn’t a target of the bill. One of his co-authors on the bill admits otherwise.

Graham went on:

What we’re trying to do is hold these companies accountable by making sure they engage in best business practices. The EARN IT acts simply says for you to have liability protections, you have to prove that you’ve tried to protect children. You have to earn it. You’re just not given to you. You have to have the best business practices in place that voluntary commissions that lay out what would be the best way to harden these sites against sexually exploitation. If you do those things you get liability, it’s just not given to you forever. So this is not about encryption.

As to your idea. I’d love to talk to you about it. Let’s vote on both, but the bottom line here is there’s always a reason not to do anything that holds these people liable. That’s the bottom line. They’ll never agree to any bill that allows you to get them in court ever. If you’re waiting on these companies to give this body permission for the average person to sue you. It ain’t never going to happen.

So… all of that is wrong. First of all, the very original version of the EARN IT Act did have provisions to make companies “earn” 230 protections by following best practices, but that’s been out of the bill for ages. The current version has no such thing.

The bill does set up a commission to create best practices, but (unlike the earlier versions of the bill) those best practice recommendations have no legal force or requirements. And there’s nothing in the bill that says if you follow them you get 230 protections, and if you don’t, you don’t.

Does Senator Graham even know which version of the bill he’s talking about?

Instead, the bill outright modifies Section 230 (before the Commission even researches best practices) and says that people can sue tech companies for the distribution of CSAM. This includes using the offering of encryption as evidence to support claims that CSAM distribution happened because of “reckless” behavior by a platform.

Either Senator Graham doesn’t know what bill he’s talking about (even though it’s his own bill) or he doesn’t remember that he changed the bill to do something different than it used to try to do.

It’s ridiculous that Senator Wyden remains the only senator who sees this issue clearly and is willing to stand up and say so. He’s the only one who seems willing to block the bad bills while at the same time offering a bill that actually targets the criminals.

Ridiculous: Journalist Held In Contempt For Not Revealing Sources

Going way, way back, we’ve talked about the need for protection of journalistic sources, in particular the need for a federal journalism shield law. I can find stories going back about 15 years of us talking about it here on Techdirt. The issue might not come up that often, but that doesn’t make it any less important.

On Thursday, a judge held former CBS journalist Catherine Herridge in contempt for refusing to reveal her sources regarding stories she wrote about scientist Yanping Chen.

The ruling, from U.S. District Court Judge Christopher R. Cooper, will be stayed for 30 days or until Herridge can appeal the ruling.

Cooper ruled that Herridge violated his Aug. 1 order demanding that Herridge reveal how she learned about a federal probe into Chen, who operated a graduate program in Virginia. Herridge, who was recently laid off from CBS News, wrote the stories in question when she worked for Fox News in 2017.

In his ruling, Judge Cooper claims that he’s at least somewhat reluctant about this result, but he still goes forward with it, arguing (I believe incorrectly) that he needs to balance the rights of Chen against Herridge’s First Amendment rights.

The Court does not reach this result lightly. It recognizes the paramount importance of a free press in our society and the critical role that confidential sources play in the work of investigative journalists like Herridge. Yet the Court also has its own role to play in upholding the law and safeguarding judicial authority. Applying binding precedent in this Circuit, the Court resolved that Chen’s need for the requested information to vindicate her rights under the Privacy Act overcame Herridge’s qualified First Amendment reporter’s privilege in this case. Herridge and many of her colleagues in the journalism community may disagree with that decision and prefer that a different balance be struck, but she is not permitted to flout a federal court’s order with impunity. Civil contempt is the proper and time-tested remedy to ensure that the Court’s order, and the law underpinning it, are not rendered meaningless.

But the First Amendment is not a balancing test. And if subpoenas or other attempts to reveal sources can be used in this manner, the harm to journalism will be vast. Journalism only works properly when journalists can legitimately promise confidentiality to sources. And that’s even more true for whistleblowers.

Admittedly, this case is a bit of a mess. It appears that the FBI falsely believed that Chen was a Chinese spy and investigated her, but let it go when it couldn’t support that claim. However, someone (likely in the FBI) leaked the info to Herridge, who reported on it. Chen sued the FBI, which won’t reveal who leaked the info. She’s now using lawful discovery, as part of the lawsuit, to find out who did. You can understand that Chen has been wronged in this situation, and it’s likely someone in the FBI who did it. And, in theory, there should be a remedy for that.

But, the problem is that this goes beyond just that situation and gets to the heart of what journalism is and why journalists need to be able to protect sources.

If a ruling like this stands, it means that no journalist can promise confidentiality, when a rush to court can force the journalist to cough up the details. And the end result is that fewer whistleblowers will be willing to speak to media, allowing more cover-ups and more corruption. The impact of a ruling like this is immensely problematic.

There’s a reason that, for years, we’ve argued for a federal shield law to make it clear that journalists should never be forced to give up sources. In the past, attempts to pass such laws have often broken down over debates concerning who they should apply to and how to identify “legitimate” journalists vs. those pretending to be journalists to avoid coughing up info.

But there is a simple solution to that: don’t have it protect “journalists,” have the law protect such information if it is obtained in the course of engaging in journalism. That is, if someone wants to make use of the shield law, they need to show that the contact and information obtained from the source was part of a legitimate effort to report a story to the public in some form, and they can present the steps they were taking to do so.

At the very least, the court recognizes that the contempt sanctions should be immediately stayed so that Herridge can appeal the decision:

The Court will stay that contempt sanction, however, to afford Herridge an opportunity to appeal this decision. Courts in this district and beyond have routinely stayed contempt sanctions to provide journalists ample room to litigate their assertions of privilege fully in the court of appeals before being coerced into compliance….

Hopefully, the appeals court recognizes how problematic this is. But, still, Congress can and should act to get a real shield law in place.

Judge Appears Correctly Skeptical Of Elon’s SLAPP Suit Against Critic

We have pointed out just how ridiculous Elon Musk’s SLAPP lawsuit against the Center for Countering Digital Hate is, so much so that I supported the filing of an amicus brief in support of CCDH, even as I find CCDH’s positions and research to be generally problematic and misleading. But, even if their research methods aren’t great, they still deserve the right to speak out, and they should not face ruinous litigation from a petulant CEO who only pretends to support free speech.

On Thursday, there were oral arguments in the case, and to say they did not go well for Elon would be an understatement. The judge appeared to openly mock the company for its terrible legal arguments. And, most importantly, he (correctly) pointed out how “antithetical” to free speech this lawsuit appeared to be:

“You put that in terms of safety, and I’ve got to tell you, I guess you can use that word, but I can’t think of anything basically more antithetical to the First Amendment than this process of silencing people from publicly disseminated information once it’s been published,” Breyer said.

“You’re trying to shoehorn this theory by using these words into a viable breach of contract claim,” the judge added.

This was exactly the point that was raised in the amicus brief (brilliantly put together by Harvard’s Cyberlaw Clinic): that the claims of “breach of contract” were a nonsense attempt to stifle speech, made in the hope that leaving out a defamation claim would somehow avoid First Amendment scrutiny. The judge, Charles Breyer, seemed to have figured out ExTwitter’s sneaky plan pretty easily.

Near the end of the hearing, the judge noted that if something is proven to be true a defamation lawsuit falls apart. Why, he said, didn’t Musk’s X bring a defamation suit if the company believes X’s reputation has been harmed?

“You could’ve brought a defamation case, you didn’t bring a defamation case,” Breyer said. “And that’s significant.”

Yeah, because everyone knows that there was no actual defamation.

The judge appeared also to see through the nonsense of the breach of contract claims directly. ExTwitter claims that CCDH should be liable for the loss of ad revenue of advertisers leaving the platform in response to CCDH’s research report. But, the judge pointed out how tenuous this was, to the point of calling the argument “one of the most vapid extensions of law I’ve ever heard.”

But in order to make this case, X had to show the group knew the financial loss was “foreseeable” when it started its account and began abiding by Twitter’s terms of service, in 2019, before Musk acquired the site.

X lawyer Hawk argued that the platform’s terms of service state that the rules for the site could change at any time, including that suspended users whom the group says spread hate speech could be reinstated.

And so, Hawk said, if changes to the rules were foreseeable, then the financial loss from its reports on users spreading hate should have also been foreseeable.

This logic confused and frustrated the judge.

“That, of course, reduces foreseeability to one of the most vapid extensions of law I’ve ever heard,” Breyer said.

There are times, in a courtroom, where you shouldn’t read very much into things a judge says. And then there are times where it’s pretty clear the judge understands just how wrong one side is. This is one of the latter cases.

According to a friend who attended the hearing (virtually, since it was on Zoom), these quotes don’t even get to how bad the hearing was for Elon. Apparently, at one point the judge asked ExTwitter’s lawyer “are you serious?” which is never a good thing. ExTwitter’s lawyer also had to walk back a few arguments in court, including when the company tried to apply the wrong terms of service to a separate non-profit they had tried to drag into the case. And, finally, towards the end of the hearing, apparently ExTwitter’s lawyer tried to claim that they had pled actual malice (which, you know, is kind of important), only to have CCDH’s lawyer point out that they had not. CCDH is right. You can look at the amended complaint yourself.

None of that is likely to go over well with this judge.

We Can’t Have Serious Discussions About Section 230 If People Keep Misrepresenting It

At the Supreme Court’s oral arguments about Florida and Texas’ social media content moderation laws, there was a fair bit of talk about Section 230. As we noted at the time, a few of the Justices (namely Clarence Thomas and Neil Gorsuch) seemed confused about Section 230 and also about what role (if any) it had regarding these laws.

The reality is that the only role for 230 is in preempting those laws. Section 230 has a preemption clause that basically says no state laws can go into effect that contradict Section 230 (in other words: no state laws that dictate how moderation must work). But that wasn’t what the discussion was about. The discussion was mostly about Thomas and Gorsuch’s confusion over 230 and thinking that the argument for Section 230 (that you’re not held liable for third party speech) contradicts the arguments laid out by NetChoice/CCIA in these cases, where they talked about the platforms’ own speech.

Gorsuch and Thomas were mixing up two separate things, as both the lawyers for the platforms and the US made clear. There are multiple kinds of speech at issue here. Section 230 does not hold platforms liable for third-party speech. But the issue with these laws was whether or not it constricted the platforms’ ability to express themselves in the way in which they moderated. That is, the editorial decisions that were being made expressing “this is what type of community we enable” are a form of public expression that the Florida & Texas laws seek to stifle.

That is separate from who is liable for individual speech.

But, as is the way of the world whenever it comes to discussions on Section 230, lots of people are going to get confused.

Today that person is Steven Brill, one of the founders of NewsGuard, a site that seeks to “rate” news organizations, including for their willingness to push misinformation. Brill publishes stories for NewsGuard on a Substack (!?!?) newsletter titled “Reality Check.” Unfortunately, Brill’s piece is chock full of misinformation regarding Section 230. Let’s do some correcting:

February marks the 28th anniversary of the passage of Section 230 of the Telecommunications Act of 1996. Today, Section 230 is notorious for giving social media platforms exemptions from all liability for pretty much anything their platforms post online. But in February of 1996, this three-paragraph section of a massive telecommunications bill aimed at modernizing regulations related to the nascent cable television and cellular phone industries was an afterthought. Not a word was written about it in mainstream news reports covering the passage of the overall bill.

The article originally claimed it was the 48th anniversary, though it was later corrected (without a correction notice — which is something NewsGuard checks for when rating the trustworthiness of publications). That’s not that big a deal, and I don’t think there’s anything wrong with “stealth” corrections for typos and minor errors like that.

But this sentence is just flat out wrong: “Section 230 is notorious for giving social media platforms exemptions from all liability for pretty much anything their platforms post online.” It’s just not true. Section 230 gives limited exemptions from some forms of liability for third party content that they had no role in creating. That’s quite different than what Brill claims. His formulation suggests they’re not liable for anything they, themselves, put online. That’s false.

Section 230 is all about putting the liability on whichever party created the violation under the law. If a website is just hosting the content, but someone else created the content, the liability should go to the creator of the content, not the host.

Courts have had no problem finding liability on social media platforms for things they themselves post online. We have a string of such cases, covering Roommates, Amazon, HomeAway, InternetBrands, Snap and more. In every one of those cases (contrary to Brill’s claims), the courts have found that Section 230 does not protect things these platforms post online.

Brill gets a lot more wrong. He discusses the Prodigy and CompuServe cases and then says this (though he gives too much credit to CompuServe’s lack of moderation being the reason why the court ruled that way):

That’s why those who introduced Section 230 called it the “Protection for Good Samaritans” Act. However, nothing in Section 230 required screening for harmful content, only that those who did screen and, importantly, those who did not screen would be equally immune. And, as we now know, when social media replaced these dial-up services and opened its platforms to billions of people who did not have to pay to post anything, their executives and engineers became anything but good Samaritans. Instead of using the protection of Section 230 to exercise editorial discretion, they used it to be immune from liability when their algorithms deliberately steered people to inflammatory conspiracy theories, misinformation, state-sponsored disinformation, and other harmful content. As then-Federal Communications Commission Chairman Reed Hundt told me 25 years later, “We saw the internet as a way to break up the dominance of the big networks, newspapers, and magazines who we thought had the capacity to manipulate public opinion. We never dreamed that Section 230 would be a protection mechanism for a new group of manipulators — the social media companies with their algorithms. Those companies didn’t exist then.”

This is both wrong and misleading. First of all, nothing in Section 230 could “require” screening for harmful content, because both the First and Fourth Amendments would forbid that. So the complaint that it did not require such screening is not just misplaced, it’s silly.

We’ve gone over this multiple times. Pre-230, the understanding was that, under the First Amendment, liability of a distributor was dependent on whether or not the distributor had clear knowledge of the violative nature of the content. As the court in Smith v. California made clear, it would make no sense to hold someone liable without knowledge:

For if the bookseller is criminally liable without knowledge of the contents, and the ordinance fulfills its purpose, he will tend to restrict the books he sells to those he has inspected; and thus the State will have imposed a restriction upon the distribution of constitutionally protected as well as obscene literature.

That’s the First Amendment problem. But, we can take that a step further as well. If the state now requires scanning, you have a Fourth Amendment problem. Specifically, as soon as the government makes scanning mandatory, none of the content found during such scanning can ever be admissible in court, because no warrant was issued upon probable cause. As we again described a couple years ago:

The Fourth Amendment prohibits unreasonable searches and seizures by the government. Like the rest of the Bill of Rights, the Fourth Amendment doesn’t apply to private entities—except where the private entity gets treated like a government actor in certain circumstances. Here’s how that happens: The government may not make a private actor do a search the government could not lawfully do itself. (Otherwise, the Fourth Amendment wouldn’t mean much, because the government could just do an end-run around it by dragooning private citizens.) When a private entity conducts a search because the government wants it to, not primarily on its own initiative, then the otherwise-private entity becomes an agent of the government with respect to the search. (This is a simplistic summary of “government agent” jurisprudence; for details, see the Kosseff paper.) And government searches typically require a warrant to be reasonable. Without one, whatever evidence the search turns up can be suppressed in court under the so-called exclusionary rule because it was obtained unconstitutionally. If that evidence led to additional evidence, that’ll be excluded too, because it’s “the fruit of the poisonous tree.”

All of that seems kinda important?

Yet Brill rushes headlong on the assumption that 230 could have and should have required mandatory scanning for “harmful” content.

Also, most harmful content remains entirely protected by the First Amendment, making this idea even more ridiculous. There would be no liability for it.

Brill seems especially confused about how 230 and the First Amendment work together, suggesting (incorrectly) that 230 gives them some sort of extra editorial benefit that it does not convey:

With Section 230 in place, the platforms will not only have a First Amendment right to edit, but also have the right to do the kind of slipshod editing — or even the deliberate algorithmic promotion of harmful content — that has done so much to destabilize the world.  

Again, this is incorrect on multiple levels. The First Amendment gives them the right to edit. It also gives them the right to slipshod editing. And the right to promote harmful content via algorithms. That has nothing to do with Section 230.

The idea that “algorithmic promotion of harmful content… has done so much to destabilize the world” is a myth that has mostly been debunked. Some early algorithms weren’t great, but most have gotten much better over time. There’s little to no supporting evidence that “algorithms” have been particularly harmful over the long run.

Indeed, what we’ve seen is that while there were some bad algorithms a decade or so ago, pressure from the market has pushed the companies to improve. Users, advertisers, and the media have all pressured the companies to improve their algorithms, and it seems to work.

Either way, those algorithms still have nothing to do with Section 230. The First Amendment lets companies use algorithms to recommend things, because algorithms are, themselves, expressions of opinion (“we think you would like this thing more than the next thing”) and nothing in there would trigger legal liability even if you dropped Section 230 altogether.

It’s a best (or worst) of both worlds, enjoyed by no other media companies.

This is simply false. Outright false. EVERY company that has a website that allows third-party content is protected by Section 230 for that third-party content. No company is protected for first-party content, online or off.

For example, last year, Fox News was held liable to the tune of $787 million for defaming Dominion Voting Systems by putting on guests meant to pander to its audience by claiming voter fraud in the 2020 election. The social media platforms’ algorithms performed the same audience-pleasing editing with the same or worse defamatory claims. But their executives and shareholders were protected by Section 230. 

Except… that’s not how any of this works, even without Section 230. Fox News was held liable because the content was produced by Fox News. All of the depositions and transcripts were… Fox News executives and staff. Because they created the defamatory content.

The social media apps didn’t create the content.

This is the right outcome. The blame should always go to the party who violated the law in creating the content.

And Fox News is equally protected by Section 230 if there is defamation created by someone else but posted in a comment on a Fox News story (something that seems likely to happen frequently).

This whole column is misleading in the extreme, and simply wrong at other points. NewsGuard shouldn’t be publishing misinformation itself given that the company claims it’s promoting accuracy in news and pushing back against misinformation.

In SCOTUS NetChoice Cases, Texas’s And Florida’s Worst Enemy Is (Checks Notes) Elon Musk.

Next week, the Supreme Court will hear oral argument in NetChoice v. Paxton and Moody v. NetChoice. The cases are about a pair of laws, enacted by Texas and Florida, that attempt to force large social media platforms such as YouTube, Instagram, and X to host large amounts of speech against their will. (Think neo-Nazi rants, anti-vax conspiracies, and depictions of self-harm.) The states’ effort to co-opt social media companies’ editorial policies blatantly violates the First Amendment.

Since the laws are constitutional trainwrecks, it’s no surprise that Texas’s and Florida’s legal theories are weak. They rely heavily on the notion that what social media companies do is not really editing — and thus is not expressive. Editors, Texas says in a brief, are “reputationally responsible” for the content they reproduce. And yet, the state continues, “no reasonable observer associates” social media companies with the speech they disseminate.

This claim is absurd on its face. Everyone holds social media companies “reputationally responsible” for their content moderation. Users do, because most of them don’t like using a product full of hate speech and harassment. Advertisers do, out of a concern for their “brand safety.” Journalists do. Civil rights groups do. Even the Republican politicians who enacted this pair of bad laws do — that’s why they yell about how “Big Tech oligarchs” engage in so-called censorship.

That the Texas and Florida GOP are openly contemptuous of the First Amendment, and incompetent to boot, isn’t exactly news. So let’s turn instead to some delicious ironies. 

Consider that the right’s favorite social media addict, robber baron, and troll Elon Musk has single-handedly destroyed Texas’s and Florida’s case.

After the two states’ laws were enacted, Elon Musk conducted something of a natural experiment in content moderation—one that has wrecked those laws’ underlying premise. Musk purchased Twitter, transformed it into X, and greatly reduced content moderation on the service. As tech reporter Alex Kantrowitz remarks, the new approach “privileges” extreme content from “edgelords.”

This, in turn, forces users to work harder to find quality content, and to tolerate being exposed to noxious content. But users don’t have to put up with this — and they haven’t. “Since Musk bought Twitter in October 2022,” Kantrowitz finds, “it’s lost approximately 13 percent of its app’s daily active users.” Clearly, users “associate” social-media companies with the speech they host!

It gets better. Last November, Media Matters announced that, searching X, it had found several iconic brands’ advertisements displayed next to neo-Nazi posts. Did Musk say, “Whatever, dudes, racist content being placed next to advertisements on our site doesn’t affect X’s reputation”? No. He had X sue Media Matters.

In its complaint, X asserts that it “invests heavily” in efforts to keep “fringe content” away from advertisers’ posts. The company also alleges that Media Matters gave the world a “false impression” about what content tends to get “pair[ed]” on the platform. These statements make sense only if people care — and X cares that people care — about how X arranges content on X.

X even states that Media Matters has tried to “tarnish X’s reputation by associating [X] with racist content.” It would be hard to admit more explicitly that social-media companies are “reputationally responsible” for, because they are “associated” with, the content they disseminate.

Consider also that Texas ran to Musk’s defense. Oblivious to how Musk’s vendetta hurts Texas’s case at the Supreme Court, Ken Paxton, the state’s attorney general, opened a fraud investigation against Media Matters (the basic truth of whose report Musk’s lawsuit does not dispute).

Consider finally how Texas’s last-ditch defense gets mowed down by the right’s favorite Supreme Court justice. According to Texas, social-media companies can scrub the reputational harm from spreading abhorrent content simply by “disavowing” that content. But none other than Justice Clarence Thomas has blown this argument apart. If, Thomas writes, a state could force speech on an entity merely by letting that entity “disassociate” from the speech with a “disclaimer,” that “would justify any law compelling speech.”

Only the government can “censor” speech. Texas and Florida are the true censors here, as they seek to restrict the expressive editorial judgment of social-media companies. That conduct is expressive. Just ask Elon Musk. And that expressiveness is fatal to Texas’s and Florida’s laws. Just ask Clarence Thomas. Texas’s and Florida’s social-media speech codes aren’t just unconstitutional, they can’t even be defended coherently.

Corbin Barthold is internet policy counsel at TechFreedom.

Prominent MAGA Supporter Is Worried New KOSA Won’t Suppress Enough LGBTQ Speech

By now you know that Senator Richard Blumenthal has released a new version of KOSA, the misleadingly named Kids Online Safety Act, that he pretends fixes all the problems. It doesn’t. It still represents a real threat to speech online, and in particular speech from LGBTQ users. This is why Blumenthal, a prominent Democrat, is putting out press releases including supportive quotes from infamous anti-LGBTQ groups like the Institute for Family Studies and the “American Principles Project” (one of the leading forces behind anti-trans bills across the US). Incredibly, it also has an approving quote from NCOSE, formerly known as “Morality in Media,” a bunch of prudish busybodies who believe all pornography should be banned, and who began life trying to get “salacious” magazines banned.

When a bill is getting supportive quotes from NCOSE, an organization whose entire formation story is based around an attempt to ban books, you know that bill is not good for speech.

Why is a Democratic Senator like Blumenthal lining up with such regressive, censorial, far right nonsense peddlers? Well, because he doesn’t give a shit that KOSA is going to do real harm to LGBTQ kids or violate the Constitution he swore an oath to protect: he just wants to get a headline or two claiming he’s protecting children, with not a single care about how much damage it will actually do.

Of course, as we noted, the latest bill does make it marginally more difficult to directly suppress LGBTQ content. It removed the ability of state Attorneys General to enforce one provision, the duty of care provision, though it still allows them to enforce other provisions and to sue social media companies if those state AGs feel the companies aren’t complying with the law.

Still, at least some of the MAGA crowd feel that this move, making it marginally more difficult for state AGs to try to force LGBTQ content offline, means the bill is no longer worth supporting. Here’s Charlie Kirk, a leading MAGA nonsense peddler who founded and runs Turning Point USA, whining that the bill is no longer okay, since it won’t be used to silence LGBTQ folks as easily:

[Image: screenshot of Charlie Kirk’s post]

If you can’t read that, it’s Charlie saying:

The Senate is considering the Kids Online Safety Act (KOSA), a bill that looks to protect underage children from groomers, pornographers, and other predators online.

But the bill ran into trouble because LGBT groups were worried it would make it too easy for red state AGs to target predators who try to groom children into mutilating themselves or destroying themselves with hormones and puberty blockers.

So now, the bill has been overhauled to take away power from state AGs (since some of them might be conservatives who care about children) and instead give almost all power to the FTC, currently read by ultra-left ideologue Lina Khan. Sure enough, LGBT groups have dropped all their concerns.

We’ve seen this pattern before. What are the odds that this bill does zero to protect children but a lot to vaguely enhance the power of Washington bureaucrats to destroy whoever they want, for any reason?

If you can get past his ridiculous language, you can see that he’s (once again, like the Heritage Foundation and KOSA co-sponsor Senator Marsha Blackburn before him) admitting that the reason the MAGA crowd supports KOSA is to silence LGBTQ voices, which he falsely attacks as “groomers, pornographers, and other predators.”

He’s wrong that the bill can’t still be used for this, but he’s correct that the bill now gives tremendous power to whoever is in charge of the FTC, whether it’s Lina Khan… or whatever MAGA incel could be put in place if Trump wins.

Meanwhile, if Kirk is so concerned about child predators and groomers, it’s odd that you never see him call out the Catholic church. Or his former employee who was recently sentenced to years in jail for his “collection” of child sexual abuse videos. Or the organization that teamed up with Turning Point USA to sponsor an event, even though its CEO was convicted of “coercing and enticing” a minor. It’s quite interesting that Kirk is so quick to accuse LGBTQ folks of “grooming” and “predation” when actual predators keep turning up around him, and he never says a word.

Either way, I’m curious if watching groups like TPUSA freak out about this bill not being censorial enough of LGBTQ content will lead Republicans to get cold feet on supporting this bill.

At the very least, though, it’s a confirmation that Republican support for this bill is based on their strong belief that it will censor and suppress LGBTQ content.

How Allowing Copyright On AI-Generated Works Could Destroy Creative Industries

Generative AI continues to be the hot topic in the digital world – and beyond. A previous blog post noted that this has led to people finally asking the important question whether copyright is fit for the digital world. As far as AI is concerned, there are two sides to the question. The first is whether generative AI systems can be trained on copyright materials without the need for licensing. That has naturally dominated discussions, because many see an opportunity to impose what is effectively a copyright tax on generative AI. The other question is whether the output of generative AI systems can be copyrighted. As another Walled Post explained, the current situation is unclear. In the US, purely AI-generated art cannot currently be copyrighted and forms part of the public domain, but it may be possible to copyright works that include significant human input.

Given the current interest in generative AI, it’s no surprise that there are lots of pundits out there pontificating on what it all means. I find Christopher S. Penn’s thoughts on the subject to be consistently insightful and worth reading, unlike those of many other commentators. Even better, his newsletter and blog are free. His most recent newsletter will be of particular interest to Walled Culture readers, and has a bold statement concerning AI and copyright:

We should unequivocally ensure machine-made content can never be protected under intellectual property laws, or else we’re going to destroy the entire creative economy.

His newsletter includes a short harmonized tune generated using AI. Penn points out that it is trivially easy to automate the process of varying that tune and its harmony using AI, in a way that scales to billions of harmonized tunes covering a large proportion of all possible songs:

If my billion songs are now copyrighted, then every musician who composes a song from today forward has to check that their composition isn’t in my catalog of a billion variations – and if it is (which, mathematically, it probably will be), they have to pay me.
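
To get a sense of why Penn says a collision is mathematically near-certain, here is a minimal back-of-envelope sketch in Python. The pitch, note, and chord counts are my own illustrative assumptions, not figures from Penn’s newsletter; the point is only that mechanical variation of a single short phrase already yields on the order of a hundred billion nominally distinct “songs”:

    # Rough illustration (assumed numbers, not Penn's): how fast mechanical
    # variation of one short phrase multiplies into billions of "works".
    PITCHES = 12           # possible pitches for each melody note
    MELODY_NOTES = 8       # an eight-note phrase
    CHORD_OPTIONS = 4      # plausible chord choices per harmony slot
    HARMONY_SLOTS = 4      # the phrase is harmonized in four slots

    melody_variants = PITCHES ** MELODY_NOTES           # 12**8 = 429,981,696
    harmony_variants = CHORD_OPTIONS ** HARMONY_SLOTS   # 4**4  = 256
    total = melody_variants * harmony_variants          # ~1.1 * 10**11

    print(f"{total:,} possible harmonized variations of one 8-note phrase")

Longer phrases, rhythmic variation, and transposition push the count far higher, which is why “mathematically, it probably will be” is not an exaggeration.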

Moreover, allowing copyright in this way would result in a computing arms race. Those with the deepest pockets could use more powerful hardware and software to produce more AI tunes faster than anyone else, allowing them to copyright them first:

That wipes out the music industry. That wipes out musical creativity, because suddenly there is no incentive to create and publish original music for commercial purposes, including making a living as a musician. You know you’ll just end up in a copyright lawsuit sooner or later with a company that had better technology than you.

That’s one good reason for not allowing music – or images, videos or text – generated by AI to be granted copyright. As Penn writes, doing so would just create a huge industry whose only purpose is generating a library of works that is used for suing human creators for alleged copyright infringement. The bullying and waste already caused by the similar patent troll industry shows why this is not something we would want. Here’s another reason why copyright for AI creations is a bad idea according to Penn:

If machine works remain non-copyrightable, there’s a strong disincentive for companies like Disney to use machine-made works. They won’t be able to enforce copyright on them, which makes those works less valuable than human-led works that they can fully protect. If machine works suddenly have the same copyright status as human-led works, then a corporation like Disney has much greater incentive to replace human creators as quickly as possible with machines, because the machines will be able to scale their created works to levels only limited by compute power.

This chimes with something that I have argued before: that generative AI could help to make human-generated art more valuable. The value of human creativity will be further enhanced if companies are unable to claim copyright in AI-generated works. It’s an important line of thinking, because it emphasizes that it is not in the interest of artists to allow copyright on AI-generated works, whatever Big Copyright might have them believe.

Follow me @glynmoody on Mastodon and on Bluesky. Originally published to Walled Culture.

Elon Only Started Buying Up Twitter Shares After Twitter Refused To Ban Plane Tracking Account

Ever since he first started to make moves to purchase Twitter, Elon Musk has framed his interest as being about “rigorously adhering to” principles of free speech. As we’ve noted, you have to be ridiculously gullible to believe that’s true, given Elon’s long history of suppressing speech, but a new book about the purchase suggests that, from the very start, a major motivation was to silence accounts he disliked.

According to an excerpt of a new book by reporter Kurt Wagner about the purchase (and called out by the SF Chronicle), Elon had reached out to then-Twitter CEO Parag Agrawal to ask him to remove student Jack Sweeney’s ElonJet account (which publicly tracked the location of Elon’s private plane). It was only when Agrawal refused that Elon started buying up shares in the site.

The excerpt slips in that point in a discussion about how Jack Dorsey arranged what turned out to be a disastrous meeting between Agrawal and Musk early in the process:

The day after, Dorsey sent Musk a private message in hopes of setting up a call with Parag Agrawal, whom Dorsey had hand-picked as his own replacement as CEO a few months earlier. “I want to make sure Parag is doing everything possible to build towards your goals until close,” Dorsey wrote to Musk. “He is really great at getting things done when tasked with specific direction.”

Dorsey drew up an agenda that included problems Twitter was working on, short-term action items and long-term priorities. He sent it to Musk for review, along with a Google Meet link. “Getting this nailed will increase velocity,” Dorsey wrote. He was clearly hoping his new pick for owner would like his old pick for CEO.

This was probably wishful thinking. Musk was already peeved with Agrawal, with whom he’d had a terse text exchange weeks earlier after Agrawal chastised Musk for some of his tweets. Musk had also unsuccessfully petitioned Agrawal to remove a Twitter account that was tracking his private plane; the billionaire started buying Twitter shares shortly after Agrawal denied his request.

In other words, for all his posturing about the need to purchase the site to support free speech, it appears that at least one major catalyzing moment was Twitter’s refusal to shut down an account Elon hated.

As we’ve pointed out again and again, historically, Twitter was pretty committed to setting rules and trying to enforce them with its moderation policies, and refusing to take down accounts unless they violated the rules. Sometimes this created somewhat ridiculous scenarios, but at least there were principles behind it. Nowadays, the principles seem to revolve entirely around Elon’s whims.

The case study of Sweeney’s ElonJet account seems to perfectly encapsulate all that. It was widely known that Elon had offered Sweeney $5k to take the account down. Sweeney had counter-offered $50k. That was in the fall of 2021. Given the timing of this latest report, it appears that Elon’s next move was to try to pressure Agrawal to take down the account. Agrawal rightly refused, because it did not violate the rules.

It was at that point he started to buy up shares, and to present himself (originally) as an activist investor. Eventually that shifted into his plan to buy the entire site outright, which he claimed was to support free speech, even though now it appears he was focused on removing ElonJet.

At one point, Elon had claimed that he would keep the ElonJet account up:

[Image: screenshot of Musk’s tweet saying he would keep the ElonJet account up]

But, also, as we now know, three weeks after that tweet, he had his brand new trust & safety boss, Ella Irwin, tell the trust & safety team to filter ElonJet heavily using the company’s “Visibility Filter” (VF) tool (which many people claim is “shadowbanning”):

[Image: screenshot of the internal “Visibility Filter” directive]

Less than two weeks later, he banned the account outright, claiming (ridiculously) that the account was “doxxing” him and publishing “assassination coordinates.”

[Image: screenshot of Musk’s tweet claiming the account published “assassination coordinates”]

He then also banned Sweeney’s personal account, even though it wasn’t publishing such info, followed by banning journalists who merely mentioned that @ElonJet had been banned.

At this point it should have been abundantly clear that Musk was never interested in free speech on Twitter (now ExTwitter), but it’s fascinating to learn that one of the motivating factors in buying the site originally — even as he pretended it was about free speech — was really to silence a teenager’s account.

Don’t Fall For The Latest Changes To The Dangerous Kids Online Safety Act 

The authors of the dangerous Kids Online Safety Act (KOSA) unveiled an amended version last week, but it’s still an unconstitutional censorship bill that continues to empower state officials to target services and online content they do not like. We are asking everyone reading this to oppose this latest version, and to demand that their representatives oppose it—even if you have already done so. 

KOSA remains a dangerous bill that would allow the government to decide what types of information can be shared and read online by everyone. It would still require an enormous number of websites, apps, and online platforms to filter and block legal, and important, speech. It would almost certainly still result in age verification requirements. Some of its provisions have changed over time, and its latest changes are detailed below. But those improvements do not cure KOSA’s core First Amendment problems. Moreover, a close review shows that state attorneys general still have a great deal of power to target online services and speech they do not like, which we think will harm children seeking access to basic health information and a variety of other content that officials deem harmful to minors.  

We’ll dive into the details of KOSA’s latest changes, but first we want to remind everyone of the stakes. KOSA is still a censorship bill and it will still harm a large number of minors who have First Amendment rights to access lawful speech online. It will endanger young people and impede the rights of everyone who uses the platforms, services, and websites affected by the bill. Based on our previous analyses, statements by its authors and various interest groups, as well as the overall politicization of youth education and online activity, we believe the following groups—to name just a few—will be endangered:  

  • LGBTQ+ Youth will be at risk of having content, educational material, and their own online identities erased.  
  • Young people searching for sexual health and reproductive rights information will find their search results stymied. 
  • Teens and children in historically oppressed and marginalized groups will be unable to locate information about their history and shared experiences. 
  • Activist youth on either side of the aisle, such as those fighting for changes to climate laws, gun laws, or religious rights, will be siloed, and unable to advocate and connect on platforms.  
  • Young people seeking mental health help and information will be blocked from finding it, because even discussions of suicide, depression, anxiety, and eating disorders will be hidden from them. 
  • Teens hoping to combat the problem of addiction—either their own, or that of their friends, families, and neighbors, will not have the resources they need to do so.  
  • Any young person seeking truthful news or information that could be considered depressing will find it harder to educate themselves and engage in current events and honest discussion. 
  • Adults in any of these groups who are unwilling to share their identities will find themselves shunted onto a second-class internet alongside the young people who have been denied access to this information. 

What’s Changed in the Latest (2024) Version of KOSA 

In its impact, the latest version of KOSA is not meaningfully different from those previous versions. The “duty of care” censorship section remains in the bill, though modified as we will explain below. The latest version removes the authority of state attorneys general to sue or prosecute people for not complying with the “duty of care.” But KOSA still permits these state officials to enforce other parts of the bill based on their political whims, and we expect those officials to use this new law to the same censorious ends as they would have under previous versions. And the legal requirements of KOSA are still only possible for sites to safely follow if they restrict access to content based on age, effectively mandating age verification.

KOSA is still a censorship bill and it will still harm a large number of minors

Duty of Care is Still a Duty of Censorship 

Previously, KOSA outlined a wide collection of harms to minors that platforms had a duty to prevent and mitigate through “the design and operation” of their product. This includes self-harm, suicide, eating disorders, substance abuse, and bullying, among others. This seemingly anodyne requirement—that apps and websites must take measures to prevent some truly awful things from happening—would have led to overbroad censorship on otherwise legal, important topics for everyone as we’ve explained before.  

The updated duty of care says that a platform shall “exercise reasonable care in the creation and implementation of any design feature” to prevent and mitigate those harms. The difference is subtle, and ultimately, unimportant. There is no case law defining what is “reasonable care” in this context. This language still means increased liability merely for hosting and distributing otherwise legal content that the government—in this case the FTC—claims is harmful.  

Design Feature Liability 

The bigger textual change is that the bill now includes a definition of a “design feature,” which the bill requires platforms to limit for minors. The “design feature” of products that could lead to liability is defined as: 

any feature or component of a covered platform that will encourage or increase the frequency, time spent, or activity of minors on the covered platform.

Design features include but are not limited to 

(A) infinite scrolling or auto play; 

(B) rewards for time spent on the platform; 

(C) notifications; 

(D) personalized recommendation systems; 

(E) in-game purchases; or 

(F) appearance altering filters. 

These design features are a mix of basic elements and those that may be used to keep visitors on a site or platform. There are several problems with this provision. First, it’s not clear when offering basic features that many users rely on, such as notifications, by itself creates a harm. But that points to the fundamental problem of this provision. KOSA is essentially trying to use features of a service as a proxy to create liability for speech online that the bill’s authors do not like. But the list of harmful designs shows that the legislators backing KOSA want to regulate online content, not just design.   

For example, if an online service presented an endless scroll of math problems for children to complete, or rewarded children with virtual stickers and other prizes for reading digital children’s books, would lawmakers consider those design features harmful? Of course not. Infinite scroll and autoplay are generally not a concern for legislators. It’s that these lawmakers do not like some lawful content that is accessible via an online service’s features.

What KOSA tries to do here then is to launder restrictions on content that lawmakers do not like through liability for supposedly harmful “design features.” But the First Amendment still prohibits Congress from indirectly trying to censor lawful speech it disfavors.  

We shouldn’t kid ourselves that the latest version of KOSA will stop state officials from targeting vulnerable communities.

Allowing the government to ban content designs is a dangerous idea. If the FTC decided that direct messages, or encrypted messages, were leading to harm for minors—under this language they could bring an enforcement action against a platform that allowed users to send such messages. 

Regardless of whether we like infinite scroll or auto-play on platforms, these design features are protected by the First Amendment, just like the design features we do like. If the government tried to limit an online newspaper from using an infinite scroll feature or auto-playing videos, that case would be struck down. KOSA’s latest variant is no different.

Attorneys General Can Still Use KOSA to Enact Political Agendas 

As we mentioned above, the enforcement available to attorneys general has been narrowed to no longer include the duty of care. But due to the rule of construction and the fact that attorneys general can still enforce other portions of KOSA, this is cold comfort. 

For example, it is true enough that the amendments to KOSA prohibit a state from targeting an online service based on claims that, in hosting LGBTQ content, it violated KOSA’s duty of care. Yet that same official could use another provision of KOSA—which allows them to file suits based on failures in a platform’s design—to target the same content. The state attorney general could simply claim that they are not targeting the LGBTQ content, but rather the fact that the content was made available to minors via notifications, recommendations, or other features of a service.

We shouldn’t kid ourselves that the latest version of KOSA will stop state officials from targeting vulnerable communities. And KOSA leaves all of the bill’s censorial powers with the FTC, a five-person commission nominated by the president. This still allows a small group of federal officials appointed by the President to decide what content is dangerous for young people. Placing this enforcement power with the FTC is still a First Amendment problem: no government official, state or federal, has the power to dictate by law what people can read online.  

The Long Fight Against KOSA Continues in 2024 

For two years now, EFF has laid out the clear arguments against this bill. KOSA creates liability if an online service fails to perfectly police a variety of content that the bill deems harmful to minors. Services have little room to make any mistakes if some content is later deemed harmful to minors and, as a result, are likely to restrict access to a broad spectrum of lawful speech, including information about health issues like eating disorders, drug addiction, and anxiety.  

The fight against KOSA has amassed an enormous coalition of people of all ages and all walks of life who know that censorship is not the right approach to protecting people online, and that the promise of the internet is one that must apply equally to everyone, regardless of age. Some of the people who have advocated against KOSA from day one have now graduated high school or college. But every time this bill returns, more people learn why we must stop it from becoming law.   

We cannot afford to allow the government to decide what information is available online. Please contact your representatives today to tell them to stop the Kids Online Safety Act from moving forward. 

Republished from the EFF’s Deeplinks blog.

George Santos Files Very Silly Copyright Lawsuit Against Jimmy Kimmel Over His Cameo Videos

Former Rep. George Santos, kicked out of Congress last year for being an irredeemable liar, has spent his time since expulsion pulling in the big bucks making videos on Cameo for anywhere between $350 and $500 a pop.

Last year, Senator John Fetterman made news when he got Santos to record a Cameo video trolling disgraced, indicted colleague Senator Bob Menendez, who refuses to resign. That video had Santos urging “Bobby” to “hang in there.” Earlier this month, Santos admitted that he’d surpassed 1,200 videos in the last few months, bringing in a few hundred thousand dollars.

Apparently, a little over a dozen of those came from talk show host Jimmy Kimmel, who started a segment in December called “Will Santos Say It.” Kimmel submitted wacky Cameo requests and played some on the show. Back in December, Santos complained about this — mainly that he wasn’t getting paid enough for the videos.

Over the weekend, Santos actually sued Kimmel, along with ABC/Disney, claiming copyright infringement. Because, I’m sure, Disney doesn’t employ any copyright lawyers who will eat Santos and his lawyer for lunch and spit out the remains into bowls made out of Mickey Mouse.

The lawsuit is not good. The crux is that Kimmel (1) misrepresented himself and (2) purchased videos under a “personal” license instead of a “commercial” one, and therefore this is both fraud and copyright infringement.

It is likely neither.

On the copyright side, Kimmel has a strong fair use claim. He used the videos for commentary and criticism without harming the market for Santos’ Cameos (in fact, they likely increased it). The fraud part is just nonsense: Santos didn’t lose money on this, he made money.

The lawsuit undermines its copyright claims by inserting Kimmel’s commentary, which helps to show how this is fair use (and amusing):

KIMMEL: Yeah so now this Cameo thing, according to George, is really paying off. He claims he’s made more money in seven days than he did in Congress for a year. And part of that money came from me. I sent him a bunch of crazy video requests because I wanted to see what he would read and what he wouldn’t read, and I showed some of them on the air on Thursday, um, and now he’s demanding […] to be paid a commercial rate. Could you imagine if I get sued by George Santos for a fraud? I mean how good would that be? It would be like a dream come true. So since I started buying his videos his rates went way up to $500 a piece. He should be thanking me for buying these videos. But I have a big stockpile you want to see one? Again George had no idea these requests were from me, I just wrote them and sent them in. So “Will Santos say it?” Here we go […] [CAMEOS #4 and #5 were then published]

The lawsuit also includes the five prompts that Kimmel (under made-up names) submitted to Santos that were later aired. Kimmel says he submitted more, and it’s unclear what happened with the others — whether Santos’ legal threat made them go away, or whether Santos ever made the videos at all.

Still, for your enjoyment, here are the prompts:

a. On or about December 6, 2023, at approximately 4:46 p.m. Kimmel, misrepresenting himself as “Chris Cates” made the following fraudulent representation to Santos: “George please congratulate my friend Gary Fortuna for winning the Clearwater Florida Beef Eating Contest. He ate almost 6 pounds of loose ground beef in under 30 minutes – which was a new record! He’s not feeling great right now but the doctor thinks he will be released from the hospital soon. Please wish him a speedy recovery!” (“Fake Request 1”)

b. On or about December 6, 2023 at approximately 4:55 p.m. Kimmel, misrepresenting himself as “Jane” made the following fraudulent representation to Santos: “George please congratulate my mom Brenda on the successful cloning of her beloved schnauzer Adolf. She and Doctor Haunschnaffer went through a lot of dogs in the trial runs but they finally got it to stick. Tell her to give Adolf a big belly rub for me!” (“Fake Request 2”)

c. On or about December 7, 2023, at approximately 12:18 p.m. Kimmel, misrepresenting himself as “Ron” made the following fraudulent representation to Santos: “My name is Ron. Please tell my wife to call me George. Not George my name is Ron. You are George. Just tell her to call me George. But again Ron. I haven’t seen Swoosie or the kids since my disco birthday and it’s not fair. She says I burned down the shed shooting off fireworks but I was trying to scare a bear away. It isn’t fair. I love my Swoosie and I just want our family together on Christmas or if not that Valentimes Day or Flag. Watch out for bears.” (“Fake Request 3”)

d. On or about December 7, 2023, at approximately 12:32 p.m. Kimmel, misrepresenting himself as “Uncle Joe” made the following fraudulent representation to Santos: “George can you please congratulate my legally blind niece Julia on passing her driving test. They said she couldn’t do it – even shouldn’t, but she’s taught herself to be able to drive safely using her other sense. She’s not a quitter! That said, the day after she got her license, she got in a really bad car accident so if you could also wish her a speedy recovery that would be amazing. She’s in a bodycast and is a very bummed out – but with help from Jesus and President Trump, soon she will be back on the road!” (“Fake Request 4”)

e. On or about December 7, 2023, at approximately 12:26 p.m. Kimmel, misrepresenting himself as “Christian” made the following fraudulent representation to Santos: “Hey George. My friend Heath just came out as a Furry and I’d love for you to tell him that his friends and family all accept him. His “fursona” is a platypus mixed with a beaver. He calls it a Beav-apus. Can you say we all love you Beav-a-pus? He also just got the go ahead from Arby’s corporate to go to work in the outfit so we’re all so happy for him to be himself at work and at home. Could you also do a loud “Yiff yiff yiff!”? That’s the sound Beav-a-pus makes as Beav-a-pus. Thank you so much.” (“Fake Request 5”)

The presence of a recently disgraced Congressman makes some of those videos seem newsworthy on its own, adding to the fair use argument.

As noted above, Disney has a few lawyers who understand copyright. It seems likely that Santos is going to get ripped to shreds in court.
