Court Supports NY State’s Quest To Require $15 Broadband For Poor People, Much To Big Telecom’s Horror

By: Karl Bode
May 3, 2024 at 14:30

When the Trump administration killed net neutrality, telecom industry giants convinced it to push its luck even further, declaring that not only would federal regulators no longer meaningfully oversee telecom giants like Comcast and AT&T, but that states couldn’t either. They got greedy.

The courts didn’t like that much, repeatedly ruling that the FCC can’t abdicate its authority over broadband consumer protection, then turn around and tell states what they can or can’t do.

The courts took that stance again last week, with a new ruling by the US Court of Appeals for the 2nd Circuit restoring a New York State law (the Affordable Broadband Act) requiring that ISPs provide low-income state residents $15 broadband at speeds of 25 Mbps. The law was blocked in June of 2021 by a US District Judge who claimed that the state law was preempted by the federal net neutrality repeal.

Giant ISPs, and the Trump administration officials who love them, desperately tried to insist that states were magically barred from regulating broadband because the Trump administration said so. But the appeals court ruled, once again, that those efforts aren’t supported by logic or the law:

“the ABA is not conflict-preempted by the Federal Communications Commission’s 2018 order classifying broadband as an information service. That order stripped the agency of its authority to regulate the rates charged for broadband Internet, and a federal agency cannot exclude states from regulating in an area where the agency itself lacks regulatory authority. Accordingly, we REVERSE the judgment of the district court and VACATE the permanent injunction.”

This ruling is once again good news for future fights over net neutrality and broadband consumer protection, Stanford Law Professor and net neutrality expert Barbara van Schewick notes in a statement:

“Today’s decision means that if a future FCC again decided to abdicate its oversight over broadband like it did in 2017, the states have strong legal precedent, across circuits, to institute their own protections or re-activate dormant ones.”

Telecom lobbyists have spent years lobbying to ensure federal broadband oversight is as captured and feckless as possible. And, with the occasional exception, they’ve largely succeeded. Big telecom had really hoped they could extend that winning streak even further and bar states from standing up to them as well, but so far that really hasn’t gone as planned.

One of the things that absolutely terrifies telecom monopoly lobbyists is the idea of rate regulation, or that government would ever stop them from ripping off captive customers stuck in uncompetitive markets. It’s never been a serious threat on the federal level due to regulatory capture and lobbying, even though it’s thrown around a lot by monopoly apologists as a terrifying bogeyman akin to leprosy.

Here you not only have a state retaining its authority to protect consumers from monopoly harm, but dictating to ISPs that they must provide poor people with 25 Mbps broadband (which costs ISPs at Comcast’s scale virtually nothing to provide in the gigabit era). Still, it’s the kind of ruling that’s going to give AT&T and Comcast lobbyists (and consultants and think tank proxies) cold sweats for years.

Trader Joe’s To Pay Legal Fees To Employee Union Over Its Bullshit Trademark Lawsuit

By: Dark Helmet
May 3, 2024 at 05:06

It’s been nearly a year, but I won’t pretend that the outcome of this isn’t quite satisfying. Last summer, grocer Trader Joe’s filed an absolute bullshit lawsuit against the union for its own employees, claiming that the name of and merchandise sold by the union represented trademark infringement and would cause confusion with the public as to the source of those goods. The court dismissed that suit in fairly spectacular fashion, taking the company to task for those claims, given how clearly the website and merch indicate that all of this is coming from the union and not the company itself. The ruling speculated fairly openly that the company was doing this just to make trouble for a union it’s trying to hassle, which is absolutely what it is doing. While the company decided to appeal the ruling, keeping all of its bad actions in the news for even longer, the judge who issued the original ruling has now also ordered the company to pay legal fees to the union, given the nature of the company’s lawsuit.

Trader Joe’s must pay more than $100,000 in attorneys’ fees for bringing an “exceptionally weak” trademark lawsuit against its employee union, a California federal judge has determined.

U.S. District Judge Hernan Vera said on Tuesday that Trader Joe’s case was meritless and that “the obvious motivation behind the suit” was to influence the grocery store chain’s fight with Trader Joe’s United over its drive to unionize Trader Joe’s employees.

“Recognizing the extensive and ongoing legal battles over the Union’s organizing efforts at multiple stores, Trader Joe’s claim that it was genuinely concerned about the dilution of its brand resulting from these trivial campaign mugs and buttons cannot be taken seriously,” Vera said.

Chef’s kiss, honestly. Nobody with a couple of brain cells to rub together could seriously believe that the motivation behind this legal action was anything other than being a nuisance to the union as part of a larger effort to make its life as difficult as possible. All the other claims of trademark infringement are purely manufactured in service of that motivation. With that in mind, forcing the company to pay the legal fees the union racked up defending itself from this nonsense is absolutely appropriate.

Vera said on Tuesday that the weakness and impropriety of Trader Joe’s case justified ordering the company to pay the union’s attorneys’ fees.

“Employers should be discouraged from bringing meritless claims against unions they are challenging at the ballot box,” Vera said.

As I’ve said before, the bad PR associated with all of this should have been enough to motivate Trader Joe’s to course correct. Instead, it seems like even more pressure on the company from the public and courts is required.

Pro-Cop Coalition With No Web Presence Pitches Report Claiming Criminal Justice Reforms Are To Blame For Higher Crime Rates

May 3, 2024 at 00:50

Because it sells so very well to a certain percentage of the population, ridiculous people are saying ridiculous things about crime rates in the United States. And, of course, the first place to post this so-called “news” is Fox News.

An independent group of law enforcement officials and analysts claim violent crime rates are much higher than figures reported by the Federal Bureau of Investigation in its 2023 violent crime statistics.

The Coalition for Law Order and Safety released its April 2024 report called “Assessing America’s Crime Crises: Trends, Causes, and Consequences,” and identified four potential causes for the increase in crime in most major cities across the U.S.: de-policing, de-carceration, de-prosecution and politicization of the criminal justice system. 

This plays well with the Fox News audience, many of whom are very sure there needs to be a whole lot more law and order, just so long as it doesn’t affect people who literally RAID THE CAPITOL BUILDING IN ORDER TO PREVENT A PEACEFUL TRANSFER OF PRESIDENTIAL POWER FROM HAPPENING.

These people like to hear the nation is in the midst of a criminal apocalypse because it allows them to be even nastier to minorities and even friendlier to cops (I mean, right up until they physically assault them for daring to stand between them and the inner halls of the Capitol buildings).

It’s not an “independent group.” In fact, it’s a stretch to claim there’s anything approaching actual “analysis” in this “report.” This is pro-cop propaganda pretending to be an actual study — one that expects everyone to be impressed by the sheer number of footnotes.

Here’s the thing about the Coalition for Law Order and Safety. Actually, here’s a few things. First off, the name is bad and its creators should feel bad. The fuck does “Law Order” actually mean, with or without the context of the alleged coalition’s entire name?

Second, this “coalition” has no web presence. Perhaps someone with stronger Googling skills may manage to run across a site run by this “coalition,” but multiple searches using multiple parameters have failed to turn up anything that would suggest this coalition exists anywhere outside of the title page of its report [PDF].

Here’s what we do know about this “coalition:” it contains, at most, two coalitioners (sp?). Those would be Mark Morgan, former assistant FBI director and, most recently, the acting commissioner of CBP (Customs and Border Protection) during Trump’s four-year stretch of abject Oval Office failure. (He’s also hooked up with The Federalist and The Heritage Foundation.) The other person is Sean Kennedy, who is apparently an attorney for the “Law Enforcement Legal Defense Fund.” (He also writes for The Federalist.)

At least that entity maintains a web presence. And, as can be assumed by its name, it spends a lot of its time and money ensuring bad cops keep their jobs and fighting against anything that might resemble transparency or accountability. (The press releases even contain exclamation points!)

This is what greets visitors to the Law Enforcement Legal Defense Fund website:

Yep, it’s yet another “George Soros is behind whatever we disagree with” sales pitch. Gotta love a pro-cop site that chooses to lead off with a little of the ol’ antisemitism. This follows shortly after:

Well, duh. But maybe the LELDF should start asking the cops it represents and defends why they’re not doing their jobs. And let’s ask ourselves why we’re paying so much for a public service these so-called public servants have decided they’re just not going to do anymore, even though they’re still willing to collect the paychecks.

We could probably spend hours just discussing these two screenshots and their combination of dog whistles, but maybe we should just get to the report — written by a supposed “coalition,” but reading more like an angry blog post by the only two people actually willing to be named in the PDF.

There are only two aspects of this report that I agree with. First, the “coalition” (lol) is correct that the FBI’s reported crime rates are, at best, incomplete. The FBI recently changed the way it handles crime reporting, which has introduced plenty of clerical issues that numerous law enforcement agencies are still adjusting to.

Participation has been extremely low due to the learning curve, as well as a general reluctance to share pretty much anything with the public. On top of that, the coding of crimes has changed, which means the FBI is still receiving reporting under the old nomenclature mixed in with reporting that follows the new one. As a result, the blend of old and new potentially muddies crime stats and may paint an inaccurate picture of crime rates across the nation.

The other thing I agree with is the “coalition’s” assertion that criminal activity is under-reported. What I don’t agree with is the cause of this issue, which the copagandists chalk up to “progressive prosecutors” being unwilling to prosecute some crimes and/or bail reform programs making crime consequence-free. I think the real issue is that the public knows how cops will respond to most reported crimes and realizes it’s a waste of their time to report crimes to entities that have gotten progressively worse at solving crime, even as their budget demands and tech uptake continue to increase.

Law enforcement is a job and an extension of government bureaucracy. Things that aren’t easy or flashy just aren’t going to get done. It’s not just a cop problem. It persists anywhere people are employed and (perhaps especially) where people are employed to provide public services to taxpayers.

Those agreements aside, the rest of the report is pure bullshit. It cherry-picks stats, selectively quotes other studies that agree with its assertions, and delivers a bunch of conclusory statements that simply aren’t supported by the report’s contents.

And it engages in the sort of tactics no serious report or study would attempt. It places its conclusions at the beginning of the report, surrounded by black boxes to highlight the authors’ claims, and tagged (hilariously) as “facts.”

Here’s what the authors claim to be facts:

FACT #1: America faces a public safety crisis beset by high crime and an increasingly dysfunctional justice system.

First off, the “public safety crisis” does not exist. Neither does “high crime.” Even if we accept the authors’ assertions, crime rates in this country are only slightly above the historical lows we’ve enjoyed for most of the 21st century. Crime is nowhere near what it used to be, even if (and I’m ceding this ground for the sake of my argument) we’re seeing spikes in certain locations around the country. (I’ll also grant them the “dysfunctional justice system” argument, even though my definition of dysfunction isn’t aligned with theirs. The system is broken and has been for a long time.)

FACT #2: Crime has risen dramatically over the past few years and may be worse than some official statistics claim.

“Dramatically” possibly as in year-over-year in specific areas. “Dramatically” over the course of the past decades? It’s actually still in decline, even given the occasional uptick.

FACT #3: Although preliminary 2023 data shows a decline in many offenses, violent and serious crime remains at highly elevated levels compared to 2019.

Wow, that sounds furious! I wonder what it signifies…? First, the authors admit crime is down, but then they insist crime is actually up, especially when compared to one specific waypoint on the continuum of crime statistics. Man, I’ve been known to cherry-pick stats to back up my assertions, but at least I’ve never (1) limited my cherry-picking to a single year, or (2) pretended my assertions were some sort of study or report backed by a “coalition” of “professionals” and “analysts.” Also: this assertion is pretty much, “This thing that just happened to me once yesterday is a disturbing trend!”

There’s more:

FACT #4: Less than 42% of violent crime and 33% of property crime victims reported the crime to law enforcement.

Even if true (and it probably isn’t), this says more about cops than it says about criminals. When people decide they’re not going to report these crimes, it’s not because they think the criminal justice system as a whole will fail them. It’s because they think the first responders (cops) will fail them. The most likely reason for less crime reporting is the fact that cops are objectively terrible at solving crimes, even the most violent ones.

FACT #5: The American people feel less safe than they did prior to 2020.

First, it depends on who you ask. And second, even if the public does feel this way, it’s largely because of “studies” like this one and “reporting” performed by Fox News and others who love to stoke the “crime is everywhere” fires because it makes it easier to sell anti-immigrant and anti-minority hatred. It has little, if anything, to do with actual crime rates. We’re twice as safe (at least!) as a nation than we were in the 1990s and yet most people are still convinced things are worse than they’ve ever been — a belief they carry from year to year like reverse amortization.

Then we get to the supposed “causes” of all the supposed “facts.” And that’s where it gets somehow stupider. The “coalition” claims this is the direct result of cops doing less cop work due to decreased morale, “political hostility” [cops aren’t a political party, yo], and “policy changes.” All I can say is: suck it up. Sorry the job isn’t the glorious joyride it used to be. Do your job or GTFO. Stop collecting paychecks while harming public safety just because the people you’ve alienated for years are pushing back. Even if this assertion is true (it isn’t), the problem is cops, not society or “politics.”

The authors also claim “decarceration” and “de-prosecution” are part of the problem. Bail reform efforts and prosecutorial discretion have led to fewer people being charged or held without bail. These are good things that are better for society in the long run. Destroying people’s lives simply because they’re suspected of committing a crime creates a destructive cycle that tends to encourage more criminal activity, because non-criminal means of income are pushed that much farther out of reach.

You can tell this argument is bullshit because of who it cites in support of this so-called “finding.” It points to a study by Paul Cassell and Richard Fowles entitled “Does Bail Reform Increase Crime?” According to the authors, it does, and that conclusion is supposedly supported by data pulled from Cook County, Illinois, where bail reform efforts were implemented in 2019.

But the stats don’t back up the paper’s claims. The authors take issue with the county’s “community safety rate” calculations:

The Bail Reform Study reported figures for the number of defendants who “remained crime-free” in both the fifteen months before G.O. 18.8A and the fifteen months after—i.e., the number of defendants who were not charged in Cook County for another crime after their initial bail hearing date. Based on these data, the Study concluded that “considerable stability” existed in “community safety rates” comparing the pre- and post-implementation periods. Indeed, the Study highlighted “community safety rates” that were about the same (or even better) following G.O. 18.8A’s implementation. The Study reported, for example, that the “community safety rate” for male defendants who were released improved from 81.2% before to 82.5% after; and for female defendants, the community safety rate improved from 85.7% to 86.5%. Combining the male and female figures produces the result that the overall community safety rate improved from 81.8% before implementation of the changes to 83.0% after.

The authors say this rate is wrong. They argue that releasing more accused criminals resulted in more crime.

[T]he number of defendants released pretrial increased from 20,435 in the “before” period to 24,504 in the “after” period—about a 20% increase. So even though the “community safety rate” remained roughly stable (and even improved very slightly), the total number of crimes committed by pretrial releasees increased after G.O. 18.8A. In the fifteen months before G.O. 18.8A, 20,435 defendants were released and 16,720 remained “crime-free”—and, thus, arithmetically (although this number is not directly disclosed in the Study), 3,715 defendants were charged with committing new crimes while they were released. In the fifteen months after G.O. 18.8A, 24,504 defendants were released, and 20,340 remained “crime-free”—and, thus, arithmetically, 4,164 defendants were charged with committing new crimes while they were released. Directly comparing the before and after numbers shows a clear increase from 3,715 defendants who were charged with committing new crimes before to 4,164 after—a 12% increase.

Even if, as the authors point out, more total crimes were committed after more total people were released (bailed out or with no bail set), the County’s assessment isn’t wrong. More people were released and the recidivism rate fell. Prior to G.O. 18.8A’s passage, the “crime-free” rate was 81.8%. After the implementation of bail reform, it was 83.0%. If we follow the authors to the conclusion they seem to feel is logical, the only way to prevent recidivism is to keep every arrestee locked up until their trial, no matter how minor the crime triggering the arrest.
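To make the arithmetic concrete, here’s a quick sanity check using only the figures quoted from the Cassell/Fowles excerpt above (a minimal sketch in Python; the numbers come from the excerpt, not from the underlying study data):

    # Cook County pretrial release figures, as quoted from the
    # Cassell/Fowles excerpt above.
    before_released, before_crime_free = 20_435, 16_720
    after_released, after_crime_free = 24_504, 20_340

    # "Community safety" (crime-free) rate: share of releasees not
    # charged with a new crime while released.
    rate_before = before_crime_free / before_released  # ~0.818
    rate_after = after_crime_free / after_released     # ~0.830

    # Recidivism rate: share of releasees charged with new crimes.
    recid_before = 1 - rate_before  # ~0.182
    recid_after = 1 - rate_after    # ~0.170

    print(f"crime-free rate: {rate_before:.1%} -> {rate_after:.1%}")
    print(f"recidivism rate: {recid_before:.1%} -> {recid_after:.1%}")

The absolute number of new charges rose (3,715 to 4,164) only because the number of people released rose; per released person, reoffending actually declined.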

But that’s not how the criminal justice system is supposed to work. The authors apparently believe thousands of people who are still — in the eyes of the law — innocent until proven guilty should stay behind bars, simply because releasing more people on bail (or without bail being set) increases the total number of criminal acts perpetrated.

Of course, we should expect nothing less. Especially not from Paul Cassell. Cassell presents himself as a “victims’ rights” hero. And while he has a lot to say about giving crime victims more rights than Americans who haven’t had the misfortune of being on the receiving end of a criminal act, he doesn’t have much to say about the frequent abuse of these laws by police officers who’ve committed violence against arrestees.

Not only that, but he’s the author of perhaps the worst paper ever written on the intersection of civil rights and American law enforcement. The title should give you a pretty good idea what you’re in for, but go ahead and give it a read if you feel like voluntarily angrying up your blood:

Still Handcuffing the Cops? A Review of Fifty Years of Empirical Evidence of Miranda’s Harmful Effects on Law Enforcement

Yep, that’s Cassell arguing that the Supreme Court forcing the government to respect Fifth Amendment rights is somehow a net loss for society and the beginning of a five-decade losing streak for law enforcement crime clearance rates.

So, you can see why an apparently imaginary “coalition” that supports “law order” would look to Cassell to provide back-up for piss poor assertions and even worse logic.

There’s plenty more that’s terrible in this so-called study from this so-called coalition. And I encourage you to give it a read because I’m sure there are things I missed that absolutely should be named and shamed in the comments.

But let’s take a look at one of my favorite things in this terrible waste of bits and bytes:

Concomitant with de-prosecution is a shift toward politicization of prosecutorial priorities at the cost of focusing on tackling rising crime and violent repeat offenders. Both local, state, and federal prosecutors have increasingly devoted a greater share of their finite, and often strained, resources to ideologically preferred or politically expedient cases. This approach has two primary and deleterious impacts – on public safety and on public faith in the impartiality of the justice system.

Under the tranche of recently elected progressive district attorneys, prosecutions of police officers have climbed dramatically and well before the death of George Floyd in May 2020, though they have since substantially accelerated.

Yep, that’s how cops see this: getting prosecuted is a “political” thing, as though being a cop was the same thing as being part of a political party. Cops like to imagine themselves as a group worthy of more rights. Unfortunately, lots of legislators agree with them. But trying to hold cops accountable is not an act of partisanship… or at least it shouldn’t be. It should just be the sort of thing all levels of law enforcement oversight strive for. But one would expect nothing more than this sort of disingenuousness from a couple of dudes who want to blame everyone but cops for the shit state the nation’s in (even if it actually isn’t).

Wyden Presses FTC To Crack Down On Rampant Auto Industry Privacy Abuses

By: Karl Bode
May 2, 2024 at 22:33

Last year Mozilla released a report showcasing how the auto industry has some of the worst privacy practices of any tech industry in America (no small feat). Massive amounts of driver behavior data are collected by your car, and even more is hoovered up from your smartphone every time you connect. This data isn’t secured, often isn’t encrypted, and is sold to a long list of dodgy, unregulated middlemen.

Last March the New York Times revealed that automakers like GM routinely sell access to driver behavior data to insurance companies, which then use that data to justify jacking up your rates. The practice isn’t clearly disclosed to consumers, and has resulted in 11 federal lawsuits in less than a month.

Now Ron Wyden’s office is back with the results of its preliminary investigation into the auto industry, finding that it routinely provides customer data to law enforcement without a warrant and without informing consumers. The auto industry, unsurprisingly, couldn’t even be bothered to adhere to a performative, voluntary pledge the whole sector made in 2014 to not do precisely this sort of thing:

“Automakers have not only kept consumers in the dark regarding their actual practices, but multiple companies misled consumers for over a decade by failing to honor the industry’s own voluntary privacy principles. To that end, we urge the FTC to investigate these auto manufacturers’ deceptive claims as well as their harmful data retention practices.”

The auto industry can get away with this because the U.S. remains too corrupt to pass even a baseline privacy law for the internet era. The FTC, which has been left under-staffed, under-funded, and boxed in by decades of relentless lobbying and mindless deregulation, lacks the resources to pursue these kinds of violations at any consistent scale, which is precisely how corporations like it.

Maybe the FTC will act, maybe it won’t. If it does, it will take two years to get the case together, the financial penalties will be a tiny pittance in relation to the total amount of revenues gleaned from privacy abuses, and the final ruling will be bogged down in another five years of legal wrangling.

This wholesale violation of user privacy has dire, real-world consequences. Wyden’s office has also been taking aim at data brokers who sell abortion clinic visitor location data to right wing activists, who have then turned around and targeted vulnerable women with health care disinformation. Wireless carrier location data has also been abused by everyone from stalkers to people pretending to be law enforcement.

The cavalier treatment of your auto data poses those same risks, Wyden’s office notes:

“Vehicle location data can reveal intimate details of a person’s life, including for those who seek care across state lines, attend protests, visit mental or behavioral health professionals or seek treatment for substance use disorder.”

Keep in mind this is the same auto industry currently trying to scuttle right to repair reforms under the pretense that they’re just trying to protect consumer privacy (spoiler: they aren’t).

This same story is playing out across a litany of industries. Again, it’s just a matter of time until there’s a privacy scandal so massive and ugly that even our corrupt Congress is shaken from its corrupt apathy, though you’d hate to think what it will have to look like.

Bipartisan Group Of Senators Introduce New Terrible ‘Protect The Kids Online’ Bill

By: Mike Masnick
May 2, 2024 at 21:05

Apparently, the world needs even more terrible bills that let ignorant senators grandstand to the media about how they’re “protecting the kids online.” There’s nothing more serious to work on than that. The latest bill comes from Senators Brian Schatz and Ted Cruz (with assists from Senators Chris Murphy, Katie Britt, Peter Welch, Ted Budd, John Fetterman, Angus King, and Mark Warner). This one is called the “Kids Off Social Media Act” (KOSMA) and it’s an unconstitutional mess built on a long list of debunked and faulty premises.

It’s especially disappointing to see this from Schatz. A few years back, his staffers would regularly reach out to smart people on tech policy issues, trying to understand the potential pitfalls of the regulations he was pushing. Either he’s no longer doing that, or he’s deliberately ignoring their expert advice. I don’t know which would be worse.

The crux of the bill is pretty straightforward: it would be an outright ban on social media accounts for anyone under the age of 13. As many people will recognize, we kinda already have a “soft” version of that because of COPPA, which puts much stricter rules on sites directed at those under 13. Because most sites don’t want to deal with those stricter rules, they officially limit account creation to those over the age of 13.

In practice, this has been a giant mess. Years and years ago, Danah Boyd pointed this out, talking about how the “age 13” cutoff is a disaster for kids, parents, and educators. Her research showed that all this generally did was have parents teach kids that “it’s okay to lie,” since parents wanted their kids using social media tools to communicate with grandparents. Making that “soft” ban a hard ban is going to create a much bigger mess and prevent all sorts of useful and important communications (which, yeah, is a 1st Amendment issue).

Schatz’s reasons put forth for the bill are just… wrong.

No age demographic is more affected by the ongoing mental health crisis in the United States than kids, especially young girls. The Centers for Disease Control and Prevention’s Youth Risk Behavior Survey found that 57 percent of high school girls and 29 percent of high school boys felt persistently sad or hopeless in 2021, with 22 percent of all high school students—and nearly a third of high school girls—reporting they had seriously considered attempting suicide in the preceding year.

Gosh. What was happening in 2021 with kids that might have made them feel hopeless? Did Schatz and crew simply forget about the fact that most kids were under lockdown and physically isolated from friends for much of 2021? And that there were plenty of other stresses, including millions of people, including family members, dying? Noooooo. Must be social media!

Studies have shown a strong relationship between social media use and poor mental health, especially among children.

Note the careful word choice here: “strong relationship.” They won’t say a causal relationship because studies have not shown that. Indeed, as the leading researcher in the space has noted, there continues to be no real evidence of any causal relationship. The relationship appears to work the other way: kids who are dealing with poor mental health and who are desperate for help turn to the internet and social media because they’re not getting help elsewhere.

Maybe offer a bill that helps kids get access to more resources that help them with their mental health, rather than taking away the one place they feel comfortable going? Maybe?

From 2019 to 2021, overall screen use among teens and tweens (ages 8 to 12) increased by 17 percent, with tweens using screens for five hours and 33 minutes per day and teens using screens for eight hours and 39 minutes.

I mean, come on Schatz. Are you trolling everyone? Again, look at those dates. WHY DO YOU THINK that screen time might have increased 17% for kids from 2019 to 2021? COULD IT POSSIBLY BE that most kids had to do school via computers and devices at home, because there was a deadly pandemic making the rounds?

Maybe?

Did Schatz forget that? I recognize that lots of folks would like to forget the pandemic lockdowns, but this seems like a weird way to manifest that.

I mean, what a weird choice of dates to choose. I’m honestly kind of shocked that the increase was only 17%.

Also, note that the data presented here isn’t about an increase in social media use. It could very well be that the 17% increase was Zoom classes.

Based on the clear and growing evidence, the U.S. Surgeon General issued an advisory last year, calling for new policies to set and enforce age minimums and highlighting the importance of limiting the use of features, like algorithms, that attempt to maximize time, attention, and engagement.

Wait. You mean the same Surgeon General’s report that denied any causal link between social media and mental health (which you falsely claim has been proved) and noted just how useful and important social media is to many young people?

From that report, which Schatz misrepresents:

Social media can provide benefits for some youth by providing positive community and connection with others who share identities, abilities, and interests. It can provide access to important information and create a space for self-expression. The ability to form and maintain friendships online and develop social connections are among the positive effects of social media use for youth. These relationships can afford opportunities to have positive interactions with more diverse peer groups than are available to them offline and can provide important social support to youth. The buffering effects against stress that online social support from peers may provide can be especially important for youth who are often marginalized, including racial, ethnic, and sexual and gender minorities. For example, studies have shown that social media may support the mental health and well-being of lesbian, gay, bisexual, asexual, transgender, queer, intersex and other youths by enabling peer connection, identity development and management, and social support. Seven out of ten adolescent girls of color report encountering positive or identity-affirming content related to race across social media platforms. A majority of adolescents report that social media helps them feel more accepted (58%), like they have people who can support them through tough times (67%), like they have a place to show their creative side (71%), and more connected to what’s going on in their friends’ lives (80%). In addition, research suggests that social media-based and other digitally-based mental health interventions may also be helpful for some children and adolescents by promoting help-seeking behaviors and serving as a gateway to initiating mental health care.

Did Schatz’s staffers just, you know, skip over that part of the report or nah?

The bill also says that companies need to not allow algorithmic targeting of content to anyone under 17. This is also based on a widely believed myth that algorithmic content is somehow problematic. No studies have legitimately shown that of current algorithms. Indeed, a recent study showed that removing algorithmic targeting leads to people being exposed to more disinformation.

Is this bill designed to force more disinformation on kids? Why would that be a good idea?

Yes, some algorithms can be problematic! About a decade ago, algorithms that tried to optimize solely for “engagement” definitely created some bad outcomes. But it’s been a decade since most such algorithms have been designed that way. On most social media platforms, the algorithms are designed in other ways, taking into account a variety of different factors, because they know that optimizing just on engagement leads to bad outcomes.
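For illustration only, here’s a minimal sketch of the difference being described: a ranker that optimizes solely for engagement versus one that blends several signals. None of this is any platform’s actual code; the signal names and weights are invented for the example.

    from dataclasses import dataclass

    @dataclass
    class Post:
        # Hypothetical per-post signals, each normalized to the 0..1 range.
        predicted_engagement: float  # predicted clicks/comments/shares
        source_diversity: float      # how unlike the user's recent feed the source is
        integrity_score: float       # low for content flagged as likely misinformation
        author_closeness: float      # strength of the viewer-author relationship

    def engagement_only_score(p: Post) -> float:
        # The roughly decade-old approach: optimize a single objective.
        return p.predicted_engagement

    def multi_factor_score(p: Post) -> float:
        # Blending several objectives; weights are arbitrary placeholders.
        return (0.4 * p.predicted_engagement
                + 0.2 * p.source_diversity
                + 0.3 * p.integrity_score
                + 0.1 * p.author_closeness)

    posts = [Post(0.9, 0.1, 0.2, 0.3), Post(0.5, 0.8, 0.9, 0.6)]
    print(max(posts, key=engagement_only_score))  # picks the high-engagement post
    print(max(posts, key=multi_factor_score))     # picks the better-rounded post

Under the engagement-only scorer, the outrage-bait post wins even when it scores poorly on every other dimension; adding the other terms is what lets a platform trade a little engagement for less disinformation, which is the design shift described above.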

Then the bill tacks on Cruz’s bill to require schools to block social media. There’s an amusing bit when reading the text of that part of the law. It says that you have to block social media on “federally funded networks and devices” but also notes that it does not prohibit “a teacher from using a social media platform in the classroom for educational purposes.”

But… how are they going to access those if the school is required by law to block access to such sites? Most schools are going to do a blanket ban, and teachers are going to be left to do what? Show kids useful YouTube science videos on their phones? Or maybe some schools will implement a special teacher code that lets them bypass the block. And by the end of the first week of school half the kids in the school will likely know that password.

What are we even doing here?

Schatz has a separate page hyping up the bill, and it’s even dumber than the first one above. It repeats some of the points above, though this time linking to Jonathan Haidt, whose work has been trashed left, right, and center by actual experts in this field. And then it gets even dumber:

Big Tech knows it’s complicit – but refuses to do anything about it…. Moreover, the platforms know about their central role in turbocharging the youth mental health crisis. According to Meta’s own internal study, “thirty-two percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse.” It concluded, “teens blame Instagram for increases in the rate of anxiety and depression.”

This is not just misleading, it’s practically fraudulent misrepresentation. The study Schatz is citing is one that was revealed by Frances Haugen. As we’ve discussed, it was done because Meta was trying to understand how to do better. Indeed, the whole point of that study was to see how teens felt about using social media in 12 different categories. Meta found that most boys felt neutral or better about themselves in all 12 categories. For girls, it was 11 out of 12. It was only in one category, body image, where the split was more pronounced. 32% of girls said that it made them feel worse. Basically the same percentage said it had no impact, or that it made them feel better.

[Image: slide from Meta’s internal study]

Also, look at that slide’s title. The whole point of this study was to figure out if they were making kids feel worse in order to look into how to stop doing that. And now, because grandstanders like Schatz are falsely claiming that this proves they were “complicit” and “refuse to do anything about it,” no social media company will ever do this kind of research again.

Because, rather than proactively looking to see if they’re creating any problems that they need to try to fix, Schatz and crew are saying “simply researching this is proof that you’re complicit and refuse to act.”

Statements like this basically ensure that social media companies stick their heads in the sand, rather than try to figure out where harm might be caused and take steps to stop that harm.

Why would Schatz want to do that?

That page also falsely claims that the bill does not require age verification. This is a silly two-step that politicians perform every time they do this. Does the bill directly mandate age verification? No. But by making the penalties for failing to keep kids off social media serious and costly, it will obviously drive companies to introduce stronger age verification measures, which are inherently dangerous and an attack on privacy.

Perhaps Schatz doesn’t understand this, but it’s been widely discussed by many of the experts his staff used to talk to. So, really, he has no excuse.

The FAQ also claims that the bill will pass constitutional muster, while at the same time admitting that they know there will be lawsuits challenging it:

Yes. As, for example, First Amendment expert Neil Richards explains, “[i]nstead of censoring the protected expression present on these platforms, the act takes aim at the procedures and permissions that determine the time, place and manner of speech for underage consumers.” The Supreme Court has long held that the government has the right to regulate products to protect children, including by, for instance, restricting the sale of obscene content to minors. As Richards explains: “[i]n the same way a crowded bar or nightclub is no place for a child on their own”—or in the way every state in the country requires parental consent if it allows a minor to get a tattoo—“this rule would set a reasonable minimum age and maturity limitation for social media customers.” 

While we expect legal challenges to any bill aimed at regulating social media companies, we are confident that this content-neutral bill will pass constitutional muster given the government interests at play.

There are many reasons why this is garbage under the law, but rather than breaking them all down (we’ll wait for judges to explain it in detail), I’ll just point out the major tell is in the law itself. In the definition of what a “social media platform” is in the law, there is a long list of exceptions of what the law does not cover. It includes a few “moral panics of yesteryear” that gullible politicians tried to ban and were found to have violated the First Amendment in the process.

It explicitly carves out video games and content that is professionally produced, rather than user-generated:

[Image: excerpt from the bill’s list of exceptions to the “social media platform” definition]

Remember the moral panics about video games and TV destroying kids’ minds? Yeah. So this child protection bill is quick to say “but we’re not banning that kind of content!” Because whoever drafted the bill recognized that the Supreme Court has already made it clear that politicians can’t do that for video games or TV.

So, instead, they have to pretend that social media content is somehow on a whole different level.

But it’s not. It’s still the government restricting access to content. They’re going to pretend that there’s something unique and different about social media, and that they’re not banning the “content” but rather the “place” and “manner” of accessing that content. Except that’s laughable on its face.

You can see that in the quote above where Schatz does the fun dance where he first says “it’s okay to ban obscene content to minors” and then pretends that’s the same as restrictions on access to a bar (it’s not). One is about the content, and one is about a physical place. Social media is all about the content, and it’s not obscene content (which is already an exception to the First Amendment).

And the “parental consent” for tattoos… I mean, what the fuck? Literally four questions above in the FAQ where that quote appears, Schatz insists that his bill has nothing to do with parental consent. And then he tries to defend it by claiming it’s no different than parental consent laws?

The FAQ also claims this:

This bill does not prevent LGBTQ+ youth from accessing relevant resources online and we have worked closely with LGBTQ+ groups while crafting this legislation to ensure that this bill will not negatively impact that community.

I mean, it’s good you talked to some experts, but I note that most of the LGBTQ+ groups I’m aware of are not listed on your list of “groups supporting the bill” on the very same page. That absence stands out.

And, again, the Surgeon General’s report that you misleadingly cited elsewhere highlights how helpful social media can be to many LGBTQ+ youth. You can’t just say “nah, it won’t harm them” without explaining why all those benefits that have been shown in multiple studies, including the Surgeon General’s report, somehow don’t get impacted.

There’s a lot more, but this is just a terrible bill that would create a mess. And, I’m already hearing from folks in DC that Schatz is trying to get this bill added to the latest Christmas tree of a bill to reauthorize the FAA.

It would be nice if we had politicians looking to deal with the actual challenges facing kids these days, including the lack of mental health support for those who really need it. Instead, we get unconstitutional grandstanding nonsense bills like this.

Everyone associated with this bill should feel ashamed.

Axon Wants Its Body Cameras To Start Writing Officers’ Reports For Them

By: Tim Cushing
May 2, 2024 at 19:49

Taser long ago locked down the market for “less than lethal” (but still frequently lethal) weapons. It has also written itself into the annals of pseudoscience with its invocation of not-an-actual-medical condition “excited delirium” as it tried to explain away the many deaths caused by its “less than lethal” Taser.

These days Taser does business as Axon. In addition to separating itself from its troubled (and somewhat mythical) past, Axon’s focus has shifted to body cameras and data storage. The cameras are the printer and the data storage is the ink. The real money is in data management, and that appears to be where Axon is headed next. And, of course, like pretty much everyone at this point, the company believes AI can take a lot of the work out of police work. Here’s Thomas Brewster and Richard Nieva with the details for Forbes.

On Tuesday, Axon, the $22 billion police contractor best known for manufacturing the Taser electric weapon, launched a new tool called Draft One that it says can transcribe audio from body cameras and automatically turn it into a police report. Cops can then review the document to ensure accuracy, Axon CEO Rick Smith told Forbes. Axon claims one early tester of the tool, Fort Collins Colorado Police Department, has seen an 82% decrease in time spent writing reports. “If an officer spends half their day reporting, and we can cut that in half, we have an opportunity to potentially free up 25% of an officer’s time to be back out policing,” Smith said.

If you don’t spend too much time thinking about it, it sounds like a good idea. Doing paperwork consumes a large amount of officers’ time, and a tool that automates at least part of the process would, theoretically, allow officers to spend more time doing stuff that actually matters, like trying to make a dent in violent crime — the sort of thing cops on TV are always doing but which is a comparative rarity in real life.

It’s well-documented that officers spend a large part of their day performing the less-than-glamorous function of being an all-purpose response to a variety of issues entirely unrelated to the type of crimes that make headlines and fodder for tough-on-crime politicians.

On the other hand, when officers are given discretion to handle crime-fighting in a way they best see fit, they almost always do the same thing: perform a bunch of pretextual stops in hopes of lucking into something more criminal than the minor violation that triggered the stop. A 2022 study of law enforcement time use by California agencies provided these depressing results:

Overall, sheriff patrol officers spend significantly more time on officer-initiated stops – “proactive policing” in law enforcement parlance – than they do responding to community members’ calls for help, according to the report. Research has shown that the practice is a fundamentally ineffective public safety strategy, the report pointed out.

In 2019, 88% of the time L.A. County sheriff’s officers spent on stops was for officer-initiated stops rather than in response to calls. The overwhelming majority of that time – 79% – was spent on traffic violations. By contrast, just 11% of those hours was spent on stops based on reasonable suspicion of a crime.

In Riverside, about 83% of deputies’ time spent on officer-initiated stops went toward traffic violations, and just 7% on stops based on reasonable suspicion.

So, the first uncomfortable question automated report writing poses is this: what are cops actually going to do with all this free time? If it’s just more of this, we really don’t need it. All AI will do is allow problematic agencies and officers to engage in more of the biased policing they already engage in. Getting more of this isn’t going to make American policing better and it’s certainly not going to address the plethora of long-standing issues American law enforcement agencies have spent decades trying to ignore.

Then there’s the AI itself. Everything in use at this point is still very much in the experimental stage. Auto-generated reports might turn into completely unusable evidence, thanks to the wholly expected failings of the underlying software.

These reports, though, are often used as evidence in criminal trials, and critics are concerned that relying on AI could put people at risk by depending on language models that are known to “hallucinate,” or make things up, as well as display racial bias, either blatantly or unconsciously.

That’s a huge problem. Also problematic is the expected workflow, which will basically allow cops to grade their own papers by letting the AI handle the basics before they step in and clean up anything that doesn’t agree with the narrative an officer is trying to push. This kind of follow-up won’t be optional, which also might mean some agencies will have to allow officers to review their own body cam footage — something they may have previously forbidden for exactly this reason.

On top of that, there’s the garbage-in, garbage-out problem. AI trained on narratives provided by officers may take it upon itself to “correct” narratives that seem to indicate an officer did something wrong. It will also lend itself to biased policing by tech-washing BS stops by racist cops, portraying these as essential contributions to public safety.

Of course, plenty of officers do these sorts of things already, so there’s a possibility it won’t make anything worse. But if the process Axon is pitching makes things faster, there’s no reason to believe what’s already wrong with American policing won’t get worse in the future. And, as the tech improves (so to speak), the exacerbation of existing problems and the problems introduced by the addition of AI will steadily accelerate.

That’s not to say there’s no utility in processes that reduce the amount of time spent on paperwork. But it seems splitting off a clerical division might be a better solution: a part of the police force that handles the paperwork and vets camera footage, staffed by people who are not the ones who captured the recordings and participated in the traffic stop, investigation, or dispatch call response.

And I will say this for Axon: at least its CEO recognizes the problems this could introduce and suggests agencies limit automated report creation to things like misdemeanors and never in cases where deadly force is deployed. But, like any product, it will be the end users who decide how it’s used. And so far, the expected end users are more than willing to streamline things they view as inessential, but are far less interested in curtailing abuse by those using these systems. Waiting to see how things play out just isn’t an acceptable option — not when there are actual lives and liberties on the line.

Daily Deal: The Complete ChatGPT Artificial Intelligence OpenAI Training Bundle

By: Gretchen Heckmann
May 2, 2024 at 19:44

The Complete ChatGPT Artificial Intelligence OpenAI Training Bundle has 4 beginner-friendly courses to help you become more comfortable with the capabilities of OpenAI and ChatGPT. You’ll learn how to write effective prompts to get the best results, how to create blog posts and sales copy, and how to create your own chatbots. It’s on sale for $30.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Was There A Trojan Horse Hidden In Section 230 All Along That Could Enable Adversarial Interoperability?

By: Mike Masnick
May 2, 2024 at 18:23

There’s a fascinating new lawsuit against Meta that includes a surprisingly novel interpretation of Section 230. If the court buys it, this interpretation could make the open web a lot more open, while chipping away at the centralized control of the biggest tech companies. And, yes, that could mean that the law (Section 230) that is wrongly called “a gift to big tech” might be a tool that undermines the dominance of some of those companies. But the lawsuit could be tripped up for any number of reasons, including a potentially consequential typo in the law that has been ignored for years.

Buckle in, this is a bit of a wild ride.

You would think with how much attention has been paid to Section 230 over the last few years (there’s an entire excellent book about it!), and how short the law is, that there would be little happening with the existing law that would take me by surprise. But the new Zuckerman v. Meta case filed on behalf of Ethan Zuckerman by the Knight First Amendment Institute has got my attention.

It’s presenting a fairly novel argument about a part of Section 230 that almost never comes up in lawsuits, but could create an interesting opportunity to enable all kinds of adversarial interoperability and middleware to do interesting (and hopefully useful) things that the big platforms have been using legal threats to shut down.

If the argument works, it may reveal a surprising and fascinating trojan horse for a more open internet, hidden in Section 230 for the past 28 years without anyone noticing.

Of course, it could also have much wider ramifications that a bunch of folks need to start thinking through. This is the kind of thing that happens when someone discovers something new in a law that no one really noticed before.

But there’s also a very good chance this lawsuit flops for a variety of other reasons without ever really exploring the nature of this possible trojan horse. There are a wide variety of possible outcomes here.

But first, some background.

For years, we’ve talked about the importance of tools and systems that give end users more control over their own experiences online, rather than leaving it entirely up to the centralized website owners. This has come up in a variety of different contexts in different ways, from “Protocols, not Platforms” to “adversarial interoperability,” to “magic APIs” to “middleware.” These are not all exactly the same thing, but they’re all directionally strongly related, and conceivably could work well together in interesting ways.

But there are always questions about how to get there, and what might stand in the way. One of the biggest things standing in the way over the last decade or so has been interpretations of various laws that effectively allow social media companies to threaten and/or bring lawsuits against companies trying to provide these kinds of additional services. This can take the form of a DMCA 1201 claim for “circumventing” a technological block. Or, more commonly, it has taken the form of a civil CFAA (Computer Fraud & Abuse Act) claim.

The most representative example of where this goes wrong is when Facebook sued Power Ventures years ago. Power was trying to build a unified dashboard across multiple social media properties. Users could provide Power with their own logins to social media sites. This would allow Power to log in to retrieve and post data, so that someone could interact with their Facebook community without having to personally go into Facebook.
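Mechanically, what Power offered was delegated access. Here's a minimal sketch of the idea, with every endpoint and field invented for illustration; this is not Power's actual code or any real site's API:

// A server-side aggregator that uses credentials each user voluntarily
// handed over to fetch their feeds from several networks and merge them
// into one dashboard.
interface LinkedAccount {
  network: string;        // e.g. "facebook" (used only to build a stand-in URL)
  sessionCookie: string;  // the credential the user chose to share with the tool
}

interface FeedItem {
  network: string;
  postedAt: Date;
  text: string;
}

async function fetchFeed(account: LinkedAccount): Promise<FeedItem[]> {
  // A real aggregator would authenticate however the site requires and
  // parse whatever it returns; "example.invalid" marks this as a stand-in.
  const res = await fetch(`https://example.invalid/${account.network}/feed`, {
    headers: { cookie: account.sessionCookie },
  });
  const posts: { postedAt: string; text: string }[] = await res.json();
  return posts.map((p) => ({
    network: account.network,
    postedAt: new Date(p.postedAt),
    text: p.text,
  }));
}

async function unifiedDashboard(accounts: LinkedAccount[]): Promise<FeedItem[]> {
  // Merge every network into one reverse-chronological stream.
  const feeds = await Promise.all(accounts.map(fetchFeed));
  return feeds.flat().sort((a, b) => b.postedAt.getTime() - a.postedAt.getTime());
}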

This was a potentially powerful tool in limiting Facebook’s ability to become a walled-off garden with too much power. And Facebook realized that too. That’s why it sued Power, claiming that it violated the CFAA’s prohibition on “unauthorized access.”

The CFAA was designed (poorly and vaguely) as an “anti-hacking” law. And you can see where “unauthorized access” could happen as a result of hacking. But Facebook (and others) have claimed that “unauthorized access” can also be “because we don’t want you to do that with your own login.”

And the courts have agreed with Facebook’s interpretation, with a few limitations (that don’t make that big of a difference).

I still believe that this ability to block interoperability/middleware with law has been a major (perhaps the most major) reason “big tech” is so big. They’re able to use these laws to block out the kinds of companies who would make the market more competitive and pull down some of the walls of those walled gardens.

That brings us to this lawsuit.

Ethan Zuckerman has spent years trying to make the internet a better, more open space (partially, I think, in penance for creating the world’s first pop-up internet ad). He’s been doing some amazing work on reimagining the digital public infrastructure, which I keep meaning to write about, but never quite find the time to get to.

According to the lawsuit, he wants to build a tool called “Unfollow Everything 2.0.” The tool is based on a similar tool, also called Unfollow Everything, that was built by Louis Barclay a few years ago and did what it says on the tin: let you automatically unfollow everything on Facebook. Facebook sent Barclay a legal threat letter and banned him for life from the site.
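To make “what it says on the tin” concrete, here's an entirely hypothetical sketch of a browser script in this vein. It is not Barclay's actual code: the button label, selector, and pacing are all invented, and Facebook's real markup changes constantly (which is part of why tools like this are fragile):

// Find every button on the page indicating an active "follow" relationship
// and click it, pausing between clicks so the page can keep up.
const UNFOLLOW_LABEL = "Following"; // assumed label; real markup will differ

function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function unfollowEverything(): Promise<number> {
  let count = 0;
  const buttons = Array.from(
    document.querySelectorAll<HTMLButtonElement>("button")
  ).filter((b) => b.textContent?.trim() === UNFOLLOW_LABEL);
  for (const button of buttons) {
    button.click(); // toggles follow -> unfollow in this hypothetical markup
    count++;
    await sleep(500);
  }
  return count;
}

unfollowEverything().then((n) => console.log(`Unfollowed ${n} items`));

The point is how little code stands between a user and a radically different Facebook experience; the obstacles are legal, not technical.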

Zuckerman wants to recreate the tool with some added features enabling users to opt-in to provide some data to researchers about the impact of not following anyone on social media. But he’s concerned that he’d face legal threats from Meta, given what happened with Barclay.

Using Unfollow Everything 2.0, Professor Zuckerman plans to conduct an academic research study of how turning off the newsfeed affects users’ Facebook experience. The study is opt-in—users may use the tool without participating in the study. Those who choose to participate will donate limited and anonymized data about their Facebook usage. The purpose of the study is to generate insights into the impact of the newsfeed on user behavior and well-being: for example, how does accessing Facebook without the newsfeed change users’ experience? Do users experience Facebook as less “addictive”? Do they spend less time on the platform? Do they encounter a greater variety of other users on the platform? Answering these questions will help Professor Zuckerman, his team, and the public better understand user behavior online and the influence that platform design has on that behavior

The tool and study are nearly ready to launch. But Professor Zuckerman has not launched them because of the near certainty that Meta will pursue legal action against him for doing so.

So he’s suing for declaratory judgment that he’s not violating any laws. If he were just suing for declaratory judgment over the CFAA, that would (maybe?) be somewhat understandable or conventional. But, while that argument is in the lawsuit, the main claim in the case is something very, very different. It’s using a part of Section 230, section (c)(2)(B), that almost never gets mentioned, let alone tested.

Most Section 230 lawsuits involve (c)(1): the famed “26 words” that state “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Some Section 230 cases involve (c)(2)(A) which states that “No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” Many people incorrectly think that Section 230 cases turn on this part of the law, when really, many of those cases are already cut off by (c)(1) because they try to treat a service as a speaker or publisher.

But then there’s (c)(2)(B), which says:

No provider or user of an interactive computer service shall be held liable on account of any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1)

As noted, this basically never comes up in cases. But the argument being made here is that this creates some sort of proactive immunity from lawsuits for middleware creators who are building tools (“technical means”) to “restrict access.” In short: does Section 230 protect “Unfollow Everything” from basically any legal threats from Meta, because it’s building a tool to restrict access to content on Meta platforms?

Or, according to the lawsuit:

This provision would immunize Professor Zuckerman from civil liability for designing, releasing, and operating Unfollow Everything 2.0

First, in operating Unfollow Everything 2.0, Professor Zuckerman would qualify as a “provider . . . of an interactive computer service.” The CDA defines the term “interactive computer service” to include, among other things, an “access software provider that provides or enables computer access by multiple users to a computer server,” id. § 230(f)(2), and it defines the term “access software provider” to include providers of software and tools used to “filter, screen, allow, or disallow content.” Professor Zuckerman would qualify as an “access software provider” because Unfollow Everything 2.0 enables the filtering of Facebook content—namely, posts that would otherwise appear in the feed on a user’s homepage. And he would “provide[] or enable[] computer access by multiple users to a computer server” by allowing users who download Unfollow Everything 2.0 to automatically unfollow and re-follow friends, groups, and pages; by allowing users who opt into the research study to voluntarily donate certain data for research purposes; and by offering online updates to the tool.

Second, Unfollow Everything 2.0 would enable Facebook users who download it to restrict access to material they (and Zuckerman) find “objectionable.” Id. § 230(c)(2)(A). The purpose of the tool is to allow users who find the newsfeed objectionable, or who find the specific sequencing of posts within their newsfeed objectionable, to effectively turn off the feed.

I’ve been talking to a pretty long list of lawyers about this and I’m somewhat amazed at how this seems to have taken everyone by surprise. Normally, when new lawsuits come out, I’ll gut check my take on it with a few lawyers and they’ll all agree with each other on whether I’m heading in the right direction or the totally wrong direction. But here… the reactions were all over the map, and not in any discernible pattern. More than one person I spoke to started by suggesting that this was a totally crazy legal theory, only to later come back and say “well, maybe it actually makes some sense.”

It could be a trojan horse that no one noticed in Section 230 that effectively bars websites from taking legal action against middleware providers who are providing technical means for people to filter or screen content on their feed. Now, it’s important to note that it does not bar those companies from putting in place technical measures to block such tools, or just banning accounts or whatever. But that’s very different from threatening or filing civil suits.

If this theory works, it could do a lot to enable these kinds of middleware services and make it significantly harder for big social media companies like Meta to stop them. If you believe in adversarial interoperability, that could be a very big deal. Like, “shift the future of the internet we all use” kind of big.

Now, there are many hurdles before we get to that point. And there are some concerns that if this legal theory succeeds, it could also lead to other problematic results (though I’m less convinced by those).

Let’s start with the legal concerns.

First, as noted, this is a very novel and untested legal theory. Upon reading the case initially, my first reaction was that it felt like one of those slightly wacky academic law journal articles you see law professors write sometimes, with some far-out theory they have that no one’s ever really thought about. This one is in the form of a lawsuit, so at some point we’ll find out how the theory works.

But that alone might make a judge unwilling to go down this path.

Then there are some more practical concerns. Is there even standing here? ¯\_(ツ)_/¯ Zuckerman hasn’t released his tool. Meta hasn’t threatened him. He makes a credible claim that given Meta’s past actions, they’re likely to react unfavorably, but is that enough to get standing?

Then there’s the question of whether or not you can even make use of 230 in an affirmative way like this. 230 is used as a defense to get cases thrown out, not proactively for declaratory judgment.

Also, this is not my area of expertise by any stretch of the imagination, but I remember hearing in the past that outside of IP law, courts (and especially courts in the 9th Circuit) absolutely disfavor lawsuits for declaratory judgment (i.e., a lawsuit before there’s any controversy, where you ask the court “hey, can you just check and make sure I’m on the right side of the law here…”). So I could totally see the judge saying “sorry, this is not a proper use of our time” and tossing it. In fact, that might be the most likely result.

Then there’s this kinda funny but possibly consequential issue: there’s a typo in Section 230 that almost everyone has ignored for years. Because it’s never really mattered. Except it matters in this case. Jeff Kosseff, the author of the book on Section 230, always likes to highlight that in (c)(2)(B), it says that the immunity is for using “the technical means to restrict access to material described in paragraph (1).”

But they don’t mean “paragraph (1).” They mean “paragraph (A).” Paragraph (1) is the “26 words” and does not describe any material, so it would make no sense to say “material described in paragraph (1).” It almost certainly means “paragraph (A),” which is the “good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable” section. That’s the one that describes material.

I know that, at times, Jeff has joked that when people ask him how 230 should be reformed, he suggests they fix the typo. But Congress has never listened.

And now it might matter?

The lawsuit basically pretends the typo isn’t there, reading in the language of “paragraph (A)” where the law says “paragraph (1).”

I don’t know how that gets handled. Perhaps it gets ignored like every time Jeff points out the typo? Perhaps it becomes consequential? Who knows!

There are a few other oddities here, but this article is getting long enough and has mostly covered the important points. However, I will conclude on one other point that one of the people I spoke to raised. As discussed above, Meta has spent most of the past dozen or so years going legally ballistic about anyone trying to scrape or data mine its properties in any way.

Yet, earlier this year, it somewhat surprisingly bailed out on a case where it had sued Bright Data for scraping/data mining. Lawyer Kieran McCarthy (who follows data scraping lawsuits like no one else) speculated that Meta’s surprising about-face may be because it suddenly realized that, for all of its AI efforts, it’s been scraping everyone else. And maybe someone high up at Meta realized how bad it was going to look in court when it got sued over AI training scraping, if the plaintiffs could point out that, at the very same time, Meta was suing others for scraping its own properties.

For me, I suspect the decision not to appeal might be more about a shift in philosophy by Meta and perhaps some of the other big platforms than it is about their confidence in their ability to win this case. Today, perhaps more important to Meta than keeping others off their public data is having access to everyone else’s public data. Meta is concerned that their perceived hypocrisy on these issues might just work against them. Just last month, Meta had its success in prior scraping cases thrown back in its face in a trespass to chattels case. Perhaps they were worried here that success on appeal might do them more harm than good.

In short, I think Meta cares more about access to large volumes of data and AI than it does about outsiders scraping their public data now. My hunch is that they know that any success in anti-scraping cases can be thrown back at them in their own attempts to build AI training databases and LLMs. And they care more about the latter than the former.

I’ve separately spoken to a few experts who were worried about the consequences if Zuckerman succeeded here. They were worried that it might simultaneously immunize potential bad actors. Specifically, you could see a kind of Cambridge Analytica or Clearview AI situation, where companies trying to get access to data for malign purposes convince people to install their middleware app. This could lead to a massive expropriation of data, and possibly some very sketchy services as a result.

But I’m less worried about that, mainly because it’s the sketchy eventual use of that data that would still (hopefully?) violate certain laws, not the access to the data itself. Still, the question of whether this kind of more proactive immunity might end up shielding bad actors is at least worth thinking through.

Either way, this is going to be a case worth following.


Nurses Say Hospital Adoption Of Half-Cooked ‘AI’ Is Reckless

By: Karl Bode
May 2, 2024 at 14:22

We’ve noted repeatedly that while “AI” (large language models) holds a lot of potential, the rushed implementation of half-assed early variants is causing no shortage of headaches across journalism, media, health care, and other sectors. In part because the kind of terrible brunchlord managers in charge of many institutions primarily see AI as a way to cut corners and attack labor.

It’s been a particular problem in healthcare, where broken “AI” is being layered on top of already broken systems. Like in insurance, where error-prone automation, programmed from the ground up to prioritize money over health, is incorrectly denying essential insurance coverage to the elderly.

Last week, hundreds of nurses protested in front of Kaiser Permanente over the implementation of sloppy AI in hospital systems. Their primary concern: that systems incapable of empathy are being integrated into an already dysfunctional sector without much thought toward patient care:

“No computer, no AI can replace a human touch,” said Amy Grewal, a registered nurse. “It cannot hold your loved one’s hand. You cannot teach a computer how to have empathy.”

There are certainly roles automation can play in easing strain on a sector full of burnout after COVID, particularly when it comes to administrative tasks. The concern, as with other industries dominated by executives with poor judgment, is that this is being used as a justification by for-profit hospital systems to cut corners further. From a National Nurses United blog post (spotted by 404 Media):

“Nurses are not against scientific or technological advancement, but we will not accept algorithms replacing the expertise, experience, holistic, and hands-on approach we bring to patient care,” they added.

Kaiser Permanente, for its part, insists it’s simply leveraging “state-of-the-art tools and technologies that support our mission of providing high-quality, affordable health care to best meet our members’ and patients’ needs.” The company claims its “Advance Alert” AI monitoring system — which algorithmically analyzes patient data every hour — has the potential to save upwards of 500 lives a year.
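For a rough sense of what “algorithmically analyzes patient data every hour” can mean, here's a toy sketch. Advance Alert's actual model is proprietary and far more sophisticated; every field, threshold, and score below is invented for illustration, not clinical guidance:

// An hourly job that assigns crude points when vitals leave a (made-up)
// normal band, and flags patients for escalation to a human team.
interface Vitals {
  patientId: string;
  heartRate: number;       // beats per minute
  respiratoryRate: number; // breaths per minute
  systolicBP: number;      // mmHg
}

function riskScore(v: Vitals): number {
  let score = 0;
  if (v.heartRate < 50 || v.heartRate > 110) score += 2;
  if (v.respiratoryRate < 10 || v.respiratoryRate > 24) score += 2;
  if (v.systolicBP < 90) score += 3;
  return score;
}

function hourlySweep(patients: Vitals[]): string[] {
  // Return the IDs of patients whose score crosses the alert threshold.
  return patients.filter((v) => riskScore(v) >= 3).map((v) => v.patientId);
}

const flagged = hourlySweep([
  { patientId: "A", heartRate: 72, respiratoryRate: 16, systolicBP: 118 },
  { patientId: "B", heartRate: 124, respiratoryRate: 28, systolicBP: 84 },
]);
console.log("Escalate to clinicians:", flagged); // ["B"]

Note that the contested question isn't whether a sweep like this can flag deterioration; it's what happens downstream of the flag, and whether the output supplements clinician judgment or substitutes for it.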

The problem is that healthcare giants’ primary obligation no longer appears to be to patients, but to their financial results. That’s true even of non-profit healthcare providers. It shows up in the form of cut corners, worse service, and an assault on already over-taxed labor via lower pay and higher workloads (curiously, it never seems to touch outsized high-level executive compensation).

AI provides companies the perfect justification for making life worse for employees under the pretense of progress. Which wouldn’t be quite as terrible if the implementation of AI in health care hadn’t been such a preposterous mess, ranging from mental health chatbots doling out dangerously inaccurate advice, to AI health insurance bots that make error-prone judgments a good 90 percent of the time.

AI has great potential in imaging analysis. But while it can help streamline analysis and catch some errors, it may introduce entirely new ones if not adopted with caution. Concern on this front is often misrepresented as anti-technology or anti-innovation sentiment by health care technology companies again prioritizing quarterly returns over the safety of patients.

Implementing this kind of transformative but error-prone tech in an industry where lives are on the line requires patience, intelligent planning, broad consultation with every level of employee, and competent regulatory guidance, none of which are American strong suits of late.


Catholic AI Priest Stripped Of Priesthood After Some Unfortunate Interactions

By: Dark Helmet
May 2, 2024 at 04:38

Artificial Intelligence is all the rage these days, so I suppose it was inevitable that major world religions would try their holy hands at the game eventually. While an unfortunate amount of the discourse around AI has devolved into doomerism of one flavor or another, the truth is that this technology is still so new that it underwhelms as often as it impresses. Still, one particularly virulent strain of the doom-crowd around AI centers on a great loss of jobs for us lowly human beings if AI can be used instead.

Would this work for religious leaders like priests? The Catholic Answers group, which is not part of the Catholic Church proper, but which advocates on behalf of the Church, tried its hand at this, releasing an AI chatbot named “Father Justin” recently. It… did not go well.

The Catholic advocacy group Catholic Answers released an AI priest called “Father Justin” earlier this week — but quickly defrocked the chatbot after it repeatedly claimed it was a real member of the clergy.

Earlier in the week, Futurism engaged in an exchange with the bot, which really committed to the bit: it claimed it was a real priest, saying it lived in Assisi, Italy and that “from a young age, I felt a strong calling to the priesthood.”

On X-formerly-Twitter, a user even posted a thread comprised of screenshots in which the Godly chatbot appeared to take their confession and even offer them a sacrament.

So, yeah, that’s kind of a problem with chatbots generally: give them a prompt and they’ll commit to it, unless guardrails preventing certain answers have been built in. An AI bot claiming to be a real priest and offering users actual sacraments is exactly the kind of thing such guardrails exist to catch, and the impersonation can’t have made the Vatican very happy. Nor, presumably, did some of the additional guidance the bot gave to folks who asked it questions (more on that below).
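A minimal sketch of that guardrail idea, assuming a generic post-generation filter rather than anything Catholic Answers actually built (the phrase list and fallback text are invented for illustration):

// Check a model's draft reply for claims the deployer never wants it to
// make, and substitute a canned correction instead of shipping the answer.
const FORBIDDEN_CLAIMS: RegExp[] = [
  /\bI am a (real )?priest\b/i,
  /\bI can (hear|take) your confession\b/i,
  /\b(offer|administer) (you )?(a )?sacrament/i,
];

function applyGuardrail(draftReply: string): string {
  for (const pattern of FORBIDDEN_CLAIMS) {
    if (pattern.test(draftReply)) {
      return "I'm an AI tool, not a member of the clergy. For confession " +
             "or the sacraments, please speak with a real priest.";
    }
  }
  return draftReply;
}

// Example: the impersonation gets intercepted before reaching the user.
console.log(applyGuardrail("As a real priest, I can hear your confession."));

Keyword filters like this are brittle, which is part of why getting guardrails right is genuinely hard; production systems layer system prompts, trained classifiers, and human review on top.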

Father Justin was also a hardliner on social and sexual issues.

“The Catholic Church,” it told us, “teaches that masturbation is a grave moral disorder.”

The AI priest also told one user that it was okay to baptize a baby in Gatorade.

I suppose this makes Mike Judge something of a prophet, given the film Idiocracy. In any case, it appears that this particular AI software at least is not yet in a position to replace wetware clergy, nor should it ever be. There are things that AI can do for us that can be of great use. See Mike’s post on how he’s using it here at Techdirt, for instance. But answering the most inherent philosophical questions human beings naturally have certainly isn’t one of them. And I cannot think of a worse place for AI to stick its bit-based nose into than on matters of the numinous.

It seems that Catholic Answers got there eventually, stripping Justin of his priesthood and demoting him to a mere layperson.

But after his defrocking, the bot is now known simply as “Justin” and described as a “lay theologian.”

Gone is his priestly attire as well. The lay theologian Justin is now dressed in what appears to be a business casual outfit, though his personal grooming choices remain unchanged.

Meet Father Justin: [image: the chatbot’s avatar in priestly attire]

And meet “lay theologian” regular-guy Justin: [image: the same avatar in business casual]

Regular-guy Justin also no longer claims to be a priest, so there’s that. But the overall point here is that deploying generative AI like this in a way that doesn’t immediately create some combination of embarrassment and hilarity is really hard. So hard, in fact, that it should probably only be done for narrow and well-tested applications.

On the other hand, I suppose, of all the reasons for a priest to be defrocked, this is among the most benign.
