Xbox’s ‘Business Update Event’ Attempts To Address Rumors…Vaguely

By: Dark Helmet
February 21, 2024 at 04:42

As anyone paying attention to the video game industry will already know, the last couple of weeks have seen a great deal of rumor and speculation as to the state of Xbox-istan. What started as unsubstantiated rumors suggesting that Xbox was about to make some of its Microsoft-exclusive titles cross-platform to other consoles morphed into more outlandish theories that Microsoft was going to stop making Xbox consoles altogether. Xbox chief Phil Spencer addressed the latter of those rumors in an internal memo, alongside a promise to host a “Business Update Event.”

And so that event happened. Was there information in it? Yes! Did it clear everything up? Kind of! Was it yet another example of vague or confusing communication coming out of Xbox’s leadership? How could it possibly be otherwise?

We’ll start with the rumored cross-platform titles. Much of the rumor mill suggested that four games would soon be going cross-platform, and that turns out to have been true! They’re just not the ones people wanted. And you don’t get to know officially which games we’re talking about, either.

After weeks of rumors around its strategy regarding Xbox console exclusives, Microsoft announced today that it is “going to take four games to the other consoles.” The company stopped short of announcing what those now non-exclusive games would be, but it did point out that neither Starfield nor Bethesda’s upcoming Indiana Jones and the Great Circle would be appearing on other consoles.

All four of the soon-to-be multi-platform titles are “over a year old,” Xbox chief Phil Spencer said in an “Updates on the Xbox Business” podcast video. The list includes a couple of “community-driven” games that are “first iterations of a franchise” that could show growth on non-Xbox consoles, as well as two others that Spencer said were “smaller games that were never really meant to be built as kind of platform exclusives… I think there is an interesting story for us of introducing Xbox franchises to players on other platforms to get them more interested in Xbox.”

Now, on the one hand, more information getting to the public is generally good. And I’m sure there is some sort of business reason why the four games can’t be officially named yet. But I also can’t for the life of me understand why this announcement would be made without naming them. Doing it this way still leaves plenty of room for rumors to float around, so what was the point?

Fortunately, in this case, journalists did the work and appear to have answered that question for us, such that the speculation will probably be held at bay.

The Verge cites “sources familiar with Microsoft’s plans” in reporting that Hi-Fi Rush, Pentiment, Sea of Thieves, and Grounded are the four multi-platform titles Microsoft is referencing today.

“The teams that are building those [multi-platform] games have announced plans that are not too far away,” Spencer said, “but I think when they come out, it’ll make sense.”

But then there was this.

Spencer stressed during the podcast that this limited multi-platform move does not represent “a change to our fundamental exclusive strategy.” He added that “we’re making these decisions for some specific reasons,” citing “the long-term health of Xbox” and a desire to “use what some of the other platforms have right now to help grow our franchises.”

To which my immediate reply is: what the hell is your exclusive strategy? Seriously, the messaging on this very important piece of the equation has been all over the damned place. And because of that, someone in Spencer’s position does not get to simply trot this line out there as if everyone in the gaming public is already on the same page as he is. In 2020, Spencer made comments suggesting that crossplatforming titles was not needed for Xbox to succeed with specific game franchises. Then another Xbox executive suggested that games would have timed Microsoft exclusives later that same year. Then, in 2021, Spencer announced that the next Elder Scrolls game would be a Microsoft exclusive. Fast forward roughly one year later and you have Spencer himself stating that exclusive titles were not the future for Xbox, just as Starfield was announced as a, you guessed it, Microsoft exclusive.

It’s in that bowl of tangled informational linguine that Spencer has the gall to state publicly that these latest plans don’t change Xbox’s “fundamental exclusive strategy.” And if you don’t understand why that is so infuriating, you’re lost.

And so this is just more Microsoft. Even attempts at being more open and communicative result in confusion and frustration.

How Allowing Copyright On AI-Generated Works Could Destroy Creative Industries

By: Mike Masnick
February 21, 2024 at 00:38

Generative AI continues to be the hot topic in the digital world – and beyond. A previous blog post noted that this has led to people finally asking the important question whether copyright is fit for the digital world. As far as AI is concerned, there are two sides to the question. The first is whether generative AI systems can be trained on copyright materials without the need for licensing. That has naturally dominated discussions, because many see an opportunity to impose what is effectively a copyright tax on generative AI. The other question is whether the output of generative AI systems can be copyrighted. As another Walled Post explained, the current situation is unclear. In the US, purely AI-generated art cannot currently be copyrighted and forms part of the public domain, but it may be possible to copyright works that include significant human input.

Given the current interest in generative AI, it’s no surprise that there are lots of pundits out there pontificating on what it all means. I find Christopher S. Penn’s thoughts on the subject to be consistently insightful and worth reading, unlike those of many other commentators. Even better, his newsletter and blog are free. His most recent newsletter will be of particular interest to Walled Culture readers, and has a bold statement concerning AI and copyright:

We should unequivocally ensure machine-made content can never be protected under intellectual property laws, or else we’re going to destroy the entire creative economy.

His newsletter includes a short harmonized tune generated using AI. Penn points out that it is trivially easy to automate the process of varying that tune and its harmony using AI, in a way that scales to billions of harmonized tunes covering a large proportion of all possible songs:

If my billion songs are now copyrighted, then every musician who composes a song from today forward has to check that their composition isn’t in my catalog of a billion variations – and if it is (which, mathematically, it probably will be), they have to pay me.
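
To get a feel for the scale Penn is describing, here is a minimal, purely illustrative sketch in Python. The melody length and per-note choice counts are my own assumptions rather than anything from Penn’s newsletter; the point is only that mechanical variation blows past a billion combinations almost immediately.

from itertools import product

# Purely illustrative numbers -- not Penn's actual setup.
MELODY_LENGTH = 8     # notes in the toy tune
PITCH_CHOICES = 5     # alternative pitches allowed per note
CHORD_CHOICES = 3     # alternative harmonies allowed per note

# If every note can independently take any pitch/chord combination,
# the number of distinct "songs" grows exponentially with length.
options_per_note = PITCH_CHOICES * CHORD_CHOICES
total_variations = options_per_note ** MELODY_LENGTH
print(f"{total_variations:,} mechanical variations")  # 2,562,890,625

# Enumerating them is just as mechanical: each variation is a tuple of
# per-note choices, generated lazily so nothing has to fit in memory.
variations = product(range(options_per_note), repeat=MELODY_LENGTH)
for i, variation in zip(range(3), variations):
    print(i, variation)

Even that toy configuration yields roughly 2.5 billion distinct variations, which is why a new composition would, as Penn says, almost certainly collide with something in such a catalog.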

Moreover, allowing copyright in this way would result in a computing arms race. Those with the deepest pockets could use more powerful hardware and software to produce more AI tunes faster than anyone else, allowing them to copyright them first:

That wipes out the music industry. That wipes out musical creativity, because suddenly there is no incentive to create and publish original music for commercial purposes, including making a living as a musician. You know you’ll just end up in a copyright lawsuit sooner or later with a company that had better technology than you.

That’s one good reason for not allowing music – or images, videos or text – generated by AI to be granted copyright. As Penn writes, doing so would just create a huge industry whose only purpose is generating a library of works that is used for suing human creators for alleged copyright infringement. The bullying and waste already caused by the similar patent troll industry shows why this is not something we would want. Here’s another reason why copyright for AI creations is a bad idea according to Penn:

If machine works remain non-copyrightable, there’s a strong disincentive for companies like Disney to use machine-made works. They won’t be able to enforce copyright on them, which makes those works less valuable than human-led works that they can fully protect. If machine works suddenly have the same copyright status as human-led works, then a corporation like Disney has much greater incentive to replace human creators as quickly as possible with machines, because the machines will be able to scale their created works to levels only limited by compute power.

This chimes with something that I have argued before: that generative AI could help to make human-generated art more valuable. The value of human creativity will be further enhanced if companies are unable to claim copyright in AI-generated works. It’s an important line of thinking, because it emphasizes that it is not in the interest of artists to allow copyright on AI-generated works, whatever Big Copyright might have them believe.

Follow me @glynmoody on Mastodon and on Bluesky. Originally published to Walled Culture.

Section 702 Powers Back On The Ropes Thanks To Partisan Infighting

By: Tim Cushing
February 20, 2024 at 22:41

I’m normally not an “ends justify the means” sort of guy, but ever since some House Republicans started getting shitty about Section 702 surveillance after some of their own got swept up in the dragnet, I’ve become a bit more pragmatic. Section 702 is long overdue for reform. If it takes a bunch of conveniently angry legislators to do it, so be it.

The NSA uses this executive authorization to sweep up millions of “foreign” communications. But if one side of these communications involves a US person, the NSA is supposed to keep its eyes off of it. The same thing goes for the FBI. But the FBI has spent literal decades ignoring these restraints, preferring to dip into the NSA’s data pool as often as possible for the sole reason of converting a foreign-facing surveillance program into a handy means for domestic surveillance.

The FBI’s constant abuse of this program has seen it scolded by FISA judges, excoriated by legislators actually willing to stand up for their constituents’ rights, and habitually abused verbally at internet sites like this one.

Not that it has mattered. For years, the NSA (and, by extension, the FBI) has been given a blanket blessing for its spy programs by legislators who have been convinced nothing but a clean re-authorization is acceptable in terrorist times like these.

Fortunately for all of us, the future of Section 702 remains in a particularly hellish limbo. As Dell Cameron reports for Wired, Republicans are going to war against other Republicans, limiting the chances of Section 702 moving forward without significant alteration.

The latest botched effort at salvaging a controversial US surveillance program collapsed this week thanks to a sabotage campaign by the United States House Intelligence Committee (HPSCI), crushing any hope of unraveling the program’s fate before Congress pivots to prevent a government shutdown in March.

An agreement struck between rival House committees fell apart on Wednesday after one side of the dispute—represented by HPSCI—ghosted fellow colleagues at a crucial hearing while working to poison a predetermined plan to usher a “compromise bill” to the floor.

This makes it sound like this is a bad thing. It isn’t, even if those thwarting a clean re-auth have extremely dirty hands. Legislators should definitely take a long look at this surveillance power, especially when it’s been abused routinely by the FBI to engage in surveillance of US persons who are supposed to be beyond the reach of this foreign-facing dragnet.

Some in the House want the FBI to pay for what it did to Trump loyalists. Some in the House want the FBI to do whatever it wants, so long as it can claim it’s doing (our?) God’s work in its counterterrorism efforts. Excluded from the current infighting are people who actually give a damn about limiting surveillance abuses, shunted to the side by political opportunists, loudmouths, and far too many legislators who refuse to hold the FBI accountable.

What’s odd about this scuttling is the reason it happened. It had nothing to do with Section 702 and everything to do with the government’s predilection for buying data from brokers to avoid warrant requirements erected by Supreme Court rulings.

The impetus for killing the deal, WIRED has learned, was an amendment that would end the government’s ability to pay US companies for information rather than serving them with a warrant. This includes location data collected from cell phones that are capable in many cases of tracking people’s physical whereabouts almost constantly. The data is purportedly gathered for advertising purposes but is collected by data brokers and frequently sold to US spies and police agencies instead.

Senior aides say the HPSCI chair, Mike Turner, personally exploded the deal while refusing to appear for a hearing on Wednesday in which lawmakers were meant to decide the rules surrounding the vote. A congressional website shows that HPSCI staff had not filed one of the amendments meant to be discussed before the Rules Committee, suggesting that at no point in the day did Turner plan to attend.

And that’s where we are now: legislators refusing to authorize one form of domestic surveillance because they would rather give the feds a pass on a much more prevalent form of domestic surveillance. The former once ensnared some of Trump’s buddies. The latter has yet to do so.

The infighting continues, with one side being rallied by none other than Fox News, which prefers to cater to its base rather than provide any reporting or analysis that might accurately portray current events. The spin being pushed by Fox claims the alterations added to the bill would somehow prevent the NSA (and, by extension, the FBI) from surveilling foreign terrorists.

A Fox News report published Thursday morning, while accurately noting that it was Turner’s threat that forced Johnson to cancel the vote, goes on to cite “sources close to the Intelligence Committee” who offered analysis of the events. The sources claimed that Turner was compelled to abandon the deal because the “compromise bill” had been sneakily altered in a manner that “totally screws FISA in terms of its ability to be a national security tool.”

While redirecting blame away from Turner and his cohorts, the claim is both false and deceptive, relying on assertions that, while farcical perhaps to legal experts, would be impossible for the public at large (and most of the press) to parse alone.

Section 702 still has a good chance to survive intact. This infighting actually makes it much less likely any true reform will take place. Grandstanding has replaced oversight. But, at least for now, we can be assured the surveillance program will remain one step away from being ditched until House Republicans can reconcile their desire to protect people like Carter Page with their desire to treat everyone a little bit on the brown side as a potential terrorist.

Elon Only Started Buying Up Twitter Shares After Twitter Refused To Ban Plane Tracking Account

By: Mike Masnick
February 20, 2024 at 21:10

Ever since he first started to make moves to purchase Twitter, Elon Musk has framed his interest in “rigorously adhering to” principles of free speech. As we’ve noted, you have to be ridiculously gullible to believe that’s true, given Elon’s long history of suppressing speech, but a new book about Elon’s purchase suggests that, from the very start, a major motivation for the purchase was to silence accounts he disliked.

According to an excerpt of a new book by reporter Kurt Wagner about the purchase (and called out by the SF Chronicle), Elon had reached out to then-Twitter CEO Parag Agrawal to ask him to remove student Jack Sweeney’s ElonJet account (which publicly tracks the location of Elon’s private plane). It was only when Agrawal refused that Elon started buying up shares in the site.

The excerpt slips in that point in a discussion about how Jack Dorsey arranged what turned out to be a disastrous meeting between Agrawal and Musk early in the process:

The day after, Dorsey sent Musk a private message in hopes of setting up a call with Parag Agrawal, whom Dorsey had hand-picked as his own replacement as CEO a few months earlier. “I want to make sure Parag is doing everything possible to build towards your goals until close,” Dorsey wrote to Musk. “He is really great at getting things done when tasked with specific direction.”

Dorsey drew up an agenda that included problems Twitter was working on, short-term action items and long-term priorities. He sent it to Musk for review, along with a Google Meet link. “Getting this nailed will increase velocity,” Dorsey wrote. He was clearly hoping his new pick for owner would like his old pick for CEO.

This was probably wishful thinking. Musk was already peeved with Agrawal, with whom he’d had a terse text exchange weeks earlier after Agrawal chastised Musk for some of his tweets. Musk had also unsuccessfully petitioned Agrawal to remove a Twitter account that was tracking his private plane; the billionaire started buying Twitter shares shortly after Agrawal denied his request.

In other words, for all his posturing about the need to purchase the site to support free speech, it appears that at least one major catalyzing moment was Twitter’s refusal to shut down an account Elon hated.

As we’ve pointed out again and again, historically, Twitter was pretty committed to setting rules and trying to enforce them with its moderation policies, and refusing to take down accounts unless they violated the rules. Sometimes this created somewhat ridiculous scenarios, but at least there were principles behind it. Nowadays, the principles seem to revolve entirely around Elon’s whims.

The case study of Sweeney’s ElonJet account seems to perfectly encapsulate all that. It was widely known that Elon had offered Sweeney $5k to take the account down. Sweeney had counter-offered $50k. That was in the fall of 2021. Given the timing of this latest report, it appears that Elon’s next move was to try to pressure Agrawal to take down the account. Agrawal rightly refused, because it did not violate the rules.

It was at that point he started to buy up shares, and to present himself (originally) as an activist investor. Eventually that shifted into his plan to buy the entire site outright, which he claimed was to support free speech, even though now it appears he was focused on removing ElonJet.

At one point, Elon had claimed that he would keep the ElonJet account up:

[Screenshot: Musk’s tweet pledging to leave the @ElonJet account up]

But, also, as we now know, three weeks after that tweet, he had his brand new trust & safety boss, Ella Irwin, tell the trust & safety team to filter ElonJet heavily using the company’s “Visibility Filter” (VF) tool (which many people claim is “shadowbanning”):

[Screenshot: internal message directing the trust & safety team to apply the Visibility Filter to @ElonJet]

Less than two weeks later, he banned the account outright, claiming (ridiculously) that the account was “doxxing” him and publishing “assassination coordinates.”

[Screenshot: Musk’s tweet claiming the account posted “assassination coordinates”]

He then also banned Sweeney’s personal account, even though it wasn’t publishing such info, followed by banning journalists who merely mentioned that @ElonJet had been banned.

At this point it should have been abundantly clear that Musk was never interested in free speech on Twitter (now ExTwitter), but it’s fascinating to learn that one of the motivating factors in buying the site originally — even as he pretended it was about free speech — was really to silence a teenager’s account.

Don’t Fall For The Latest Changes To The Dangerous Kids Online Safety Act 

By: Mike Masnick
February 20, 2024 at 19:52

The authors of the dangerous Kids Online Safety Act (KOSA) unveiled an amended version last week, but it’s still an unconstitutional censorship bill that continues to empower state officials to target services and online content they do not like. We are asking everyone reading this to oppose this latest version, and to demand that their representatives oppose it—even if you have already done so. 

KOSA remains a dangerous bill that would allow the government to decide what types of information can be shared and read online by everyone. It would still require an enormous number of websites, apps, and online platforms to filter and block legal, and important, speech. It would almost certainly still result in age verification requirements. Some of its provisions have changed over time, and its latest changes are detailed below. But those improvements do not cure KOSA’s core First Amendment problems. Moreover, a close review shows that state attorneys general still have a great deal of power to target online services and speech they do not like, which we think will harm children seeking access to basic health information and a variety of other content that officials deem harmful to minors.  

We’ll dive into the details of KOSA’s latest changes, but first we want to remind everyone of the stakes. KOSA is still a censorship bill and it will still harm a large number of minors who have First Amendment rights to access lawful speech online. It will endanger young people and impede the rights of everyone who uses the platforms, services, and websites affected by the bill. Based on our previous analyses, statements by its authors and various interest groups, as well as the overall politicization of youth education and online activity, we believe the following groups—to name just a few—will be endangered:  

  • LGBTQ+ Youth will be at risk of having content, educational material, and their own online identities erased.  
  • Young people searching for sexual health and reproductive rights information will find their search results stymied. 
  • Teens and children in historically oppressed and marginalized groups will be unable to locate information about their history and shared experiences. 
  • Activist youth on either side of the aisle, such as those fighting for changes to climate laws, gun laws, or religious rights, will be siloed, and unable to advocate and connect on platforms.  
  • Young people seeking mental health help and information will be blocked from finding it, because even discussions of suicide, depression, anxiety, and eating disorders will be hidden from them. 
  • Teens hoping to combat the problem of addiction—either their own, or that of their friends, families, and neighbors, will not have the resources they need to do so.  
  • Any young person seeking truthful news or information that could be considered depressing will find it harder to educate themselves and engage in current events and honest discussion. 
  • Adults in any of these groups who are unwilling to share their identities will find themselves shunted onto a second-class internet alongside the young people who have been denied access to this information. 

What’s Changed in the Latest (2024) Version of KOSA 

In its impact, the latest version of KOSA is not meaningfully different from previous versions. The “duty of care” censorship section remains in the bill, though modified as we will explain below. The latest version removes the authority of state attorneys general to sue or prosecute people for not complying with the “duty of care.” But KOSA still permits these state officials to enforce other parts of the bill based on their political whims, and we expect those officials to use this new law to the same censorious ends as they would have under previous versions. And the legal requirements of KOSA are still only possible for sites to safely follow if they restrict access to content based on age, effectively mandating age verification.

KOSA is still a censorship bill and it will still harm a large number of minors

Duty of Care is Still a Duty of Censorship 

Previously, KOSA outlined a wide collection of harms to minors that platforms had a duty to prevent and mitigate through “the design and operation” of their product. This includes self-harm, suicide, eating disorders, substance abuse, and bullying, among others. This seemingly anodyne requirement—that apps and websites must take measures to prevent some truly awful things from happening—would have led to overbroad censorship on otherwise legal, important topics for everyone as we’ve explained before.  

The updated duty of care says that a platform shall “exercise reasonable care in the creation and implementation of any design feature” to prevent and mitigate those harms. The difference is subtle, and ultimately, unimportant. There is no case law defining what is “reasonable care” in this context. This language still means increased liability merely for hosting and distributing otherwise legal content that the government—in this case the FTC—claims is harmful.  

Design Feature Liability 

The bigger textual change is that the bill now includes a definition of a “design feature,” which the bill requires platforms to limit for minors. The “design feature” of products that could lead to liability is defined as: 

any feature or component of a covered platform that will encourage or increase the frequency, time spent, or activity of minors on the covered platform.

Design features include, but are not limited to:

(A) infinite scrolling or auto play; 

(B) rewards for time spent on the platform; 

(C) notifications; 

(D) personalized recommendation systems; 

(E) in-game purchases; or 

(F) appearance altering filters. 

These design features are a mix of basic elements and those that may be used to keep visitors on a site or platform. There are several problems with this provision. First, it’s not clear when offering basic features that many users rely on, such as notifications, by itself creates a harm. But that points to the fundamental problem of this provision. KOSA is essentially trying to use features of a service as a proxy to create liability for speech online that the bill’s authors do not like. But the list of harmful designs shows that the legislators backing KOSA want to regulate online content, not just design.   

For example, if an online service presented an endless scroll of math problems for children to complete, or rewarded children with virtual stickers and other prizes for reading digital children’s books, would lawmakers consider those design features harmful? Of course not. Infinite scroll and autoplay are generally not a concern for legislators. It’s that these lawmakers do not like some lawful content that is accessible via an online service’s features.

What KOSA tries to do here then is to launder restrictions on content that lawmakers do not like through liability for supposedly harmful “design features.” But the First Amendment still prohibits Congress from indirectly trying to censor lawful speech it disfavors.  

We shouldn’t kid ourselves that the latest version of KOSA will stop state officials from targeting vulnerable communities.

Allowing the government to ban content designs is a dangerous idea. If the FTC decided that direct messages, or encrypted messages, were leading to harm for minors—under this language they could bring an enforcement action against a platform that allowed users to send such messages. 

Regardless of whether we like infinite scroll or auto-play on platforms, these design features are protected by the First Amendment, just like the design features we do like. If the government tried to limit an online newspaper from using an infinite scroll feature or auto-playing videos, that restriction would be struck down. KOSA’s latest variant is no different.

Attorneys General Can Still Use KOSA to Enact Political Agendas 

As we mentioned above, the enforcement available to attorneys general has been narrowed to no longer include the duty of care. But due to the rule of construction and the fact that attorneys general can still enforce other portions of KOSA, this is cold comfort. 

For example, it is true enough that the amendments to KOSA prohibit a state from targeting an online service based on claims that, in hosting LGBTQ content, it violated KOSA’s duty of care. Yet that same official could use another provision of KOSA—which allows them to file suits based on failures in a platform’s design—to target the same content. The state attorney general could simply claim that they are not targeting the LGBTQ content, but rather the fact that the content was made available to minors via notifications, recommendations, or other features of a service.

We shouldn’t kid ourselves that the latest version of KOSA will stop state officials from targeting vulnerable communities. And KOSA leaves all of the bill’s censorial powers with the FTC, a five-person commission nominated by the president. This still allows a small group of federal officials appointed by the President to decide what content is dangerous for young people. Placing this enforcement power with the FTC is still a First Amendment problem: no government official, state or federal, has the power to dictate by law what people can read online.  

The Long Fight Against KOSA Continues in 2024 

For two years now, EFF has laid out the clear arguments against this bill. KOSA creates liability if an online service fails to perfectly police a variety of content that the bill deems harmful to minors. Services have little room to make any mistakes if some content is later deemed harmful to minors and, as a result, are likely to restrict access to a broad spectrum of lawful speech, including information about health issues like eating disorders, drug addiction, and anxiety.  

The fight against KOSA has amassed an enormous coalition of people of all ages and all walks of life who know that censorship is not the right approach to protecting people online, and that the promise of the internet is one that must apply equally to everyone, regardless of age. Some of the people who have advocated against KOSA from day one have now graduated high school or college. But every time this bill returns, more people learn why we must stop it from becoming law.   

We cannot afford to allow the government to decide what information is available online. Please contact your representatives today to tell them to stop the Kids Online Safety Act from moving forward. 

Republished from the EFF’s Deeplinks blog.

Daily Deal: The Python & Django Web Development Bundle

By: Gretchen Heckmann
February 20, 2024 at 19:48

The Python and Django Web Development Bundle has 7 courses to help you learn how to build your own sites and apps. Courses cover the basics of Django and Python and then build upon those skills by having you create your own to-do list app, user authentication app, and more. It’s on sale for $30.
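
For anyone wondering what a beginner project like the bundle’s to-do app typically looks like, here is a minimal Django sketch. The model, view, template path, and URL name below are illustrative assumptions on my part, not material from the courses themselves.

# models.py -- a minimal to-do item, the kind of model an introductory
# Django course typically has you define first.
from django.db import models

class TodoItem(models.Model):
    title = models.CharField(max_length=200)
    done = models.BooleanField(default=False)
    created_at = models.DateTimeField(auto_now_add=True)

    def __str__(self):
        return self.title


# views.py -- list open items and accept a simple form post to add one.
# (In a real project this would be a separate file: from .models import TodoItem)
from django.shortcuts import redirect, render

def todo_list(request):
    if request.method == "POST":
        title = request.POST.get("title", "").strip()
        if title:
            TodoItem.objects.create(title=title)
        return redirect("todo_list")  # assumes a URL pattern named "todo_list"
    items = TodoItem.objects.filter(done=False).order_by("-created_at")
    return render(request, "todos/list.html", {"items": items})

Wire the view to a URL pattern and a template and you have the skeleton of the kind of app these courses walk you through.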

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

George Santos Files Very Silly Copyright Lawsuit Against Jimmy Kimmel Over His Cameo Videos

By: Mike Masnick
February 20, 2024 at 18:32

Former Rep. George Santos, kicked out of Congress last year for being an irredeemable liar, has spent his time since expulsion pulling in the big bucks making videos on Cameo for anywhere between $350 and $500 a pop.

Last year, Senator John Fetterman made news when he got Santos to record a Cameo video trolling disgraced, indicted colleague Senator Bob Menendez who refuses to resign. That video had Santos urging “Bobby” to “hang in there.” Earlier this month, Santos admitted that he’d surpassed 1,200 videos in the last few months, bringing in a few hundred thousand dollars.

Apparently, a little over a dozen of those came from talk show host Jimmy Kimmel, who started a segment in December called “Will Santos Say It.” Kimmel submitted wacky Cameo requests and played some on the show. Back in December, Santos complained about this — mainly that he wasn’t getting paid enough for the videos.

Over the weekend, Santos actually sued Kimmel, along with ABC/Disney, claiming copyright infringement. Because, I’m sure, Disney doesn’t employ any copyright lawyers who will eat Santos and his lawyer for lunch and spit out the remains into bowls made out of Mickey Mouse.

The lawsuit is not good. The crux is that Kimmel (1) misrepresented himself and (2) purchased videos under a “personal” license instead of a “commercial” one, and therefore this is both fraud and copyright infringement.

It is likely neither.

On the copyright side, Kimmel has a strong fair use claim. He used them for commentary and criticism without harming the market for Santos’ Cameos (in fact, they likely increased it). The fraud part is just nonsense. Santos didn’t lose money out of this, he made money.

The lawsuit undermines its copyright claims by inserting Kimmel’s commentary, which helps to show how this is fair use (and amusing):

KIMMEL: Yeah so now this Cameo thing, according to George, is really paying off. He claims he’s made more money in seven days than he did in Congress for a year. And part of that money came from me. I sent him a bunch of crazy video requests because I wanted to see what he would read and what he wouldn’t read, and I showed some of them on the air on Thursday, um, and now he’s demanding […] to be paid a commercial rate. Could you imagine if I get sued by George Santos for a fraud? I mean how good would that be? It would be like a dream come true. So since I started buying his videos his rates went way up to $500 a piece. He should be thanking me for buying these videos. But I have a big stockpile you want to see one? Again George had no idea these requests were from me, I just wrote them and sent them in. So “Will Santos say it?” Here we go […] [CAMEOS #4 and #5 were then published]

The lawsuit also includes the five prompts that Kimmel (under made-up names) submitted to Santos that were later aired. Kimmel says he submitted more, and it’s unclear what happened with the others: whether Santos’ legal threat made them go away, or whether he even made them.

Still, for your enjoyment, here are the prompts:

a. On or about December 6, 2023, at approximately 4:46 p.m. Kimmel, misrepresenting himself as “Chris Cates” made the following fraudulent representation to Santos: “George please congratulate my friend Gary Fortuna for winning the Clearwater Florida Beef Eating Contest. He ate almost 6 pounds of loose ground beef in under 30 minutes – which was a new record! He’s not feeling great right now but the doctor thinks he will be released from the hospital soon. Please wish him a speedy recovery!” (“Fake Request 1”)

b. On or about December 6, 2023 at approximately 4:55 p.m. Kimmel, misrepresenting himself as “Jane” made the following fraudulent representation to Santos: “George please congratulate my mom Brenda on the successful cloning of her beloved schnauzer Adolf. She and Doctor Haunschnaffer went through a lot of dogs in the trial runs but they finally got it to stick. Tell her to give Adolf a big belly rub for me!” (“Fake Request 2”)

c. On or about December 7, 2023, at approximately 12:18 p.m. Kimmel, misrepresenting himself as “Ron” made the following fraudulent representation to Santos: “My name is Ron. Please tell my wife to call me George. Not George my name is Ron. You are George. Just tell her to call me George. But again Ron. I haven’t seen Swoosie or the kids since my disco birthday and it’s not fair. She says I burned down the shed shooting off fireworks but I was trying to scare a bear away. It isn’t fair. I love my Swoosie and I just want our family together on Christmas or if not that Valentimes Day or Flag. Watch out for bears.” (“Fake Request 3”)

d. On or about December 7, 2023, at approximately 12:32 p.m. Kimmel, misrepresenting himself as “Uncle Joe” made the following fraudulent representation to Santos: “George can you please congratulate my legally blind niece Julia on passing her driving test. They said she couldn’t do it – even shouldn’t, but she’s taught herself to be able to drive safely using her other sense. She’s not a quitter! That said, the day after she got her license, she got in a really bad car accident so if you could also wish her a speedy recovery that would be amazing. She’s in a bodycast and is a very bummed out – but with help from Jesus and President Trump, soon she will be back on the road!” (“Fake Request 4”)

e. On or about December 7, 2023, at approximately 12:26 p.m. Kimmel, misrepresenting himself as “Christian” made the following fraudulent representation to Santos: “Hey George. My friend Heath just came out as a Furry and I’d love for you to tell him that his friends and family all accept him. His “fursona” is a platypus mixed with a beaver. He calls it a Beav-apus. Can you say we all love you Beav-a-pus? He also just got the go ahead from Arby’s corporate to go to work in the outfit so we’re all so happy for him to be himself at work and at home. Could you also do a loud “Yiff yiff yiff!”? That’s the sound Beav-a-pus makes as Beav-a-pus. Thank you so much.” (“Fake Request 5”)

The presence of a recently disgraced Congressman makes some of those videos seem newsworthy on its own, adding to the fair use argument.

As noted above, Disney has a few lawyers who understand copyright. It seems likely that Santos is going to get ripped to shreds in court.

False AI Obituary Spam The Latest Symptom Of Our Obsession With Mindless Automated Infotainment Engagement

By: Karl Bode
February 20, 2024 at 14:26

Last month we noted how deteriorating quality over at Google search and Google news was resulting in both platforms being flooded by AI-generated gibberish and nonsense, with money that should be going to real journalists instead being funneled to a rotating crop of lazy automated engagement farmers.

This collapse of online informational integrity is happening at precisely the same time that U.S. journalism is effectively being lobotomized by a handful of hedge fund brunchlords for whom accurately informing the public has long been a distant afterthought.

It’s a moment in time where the financial incentives all point toward lazy automated ad engagement, and away from pesky things like the truth or public welfare. It costs companies money to implement systems at scale that can help clean up online information pollution, and it’s far more profitable to spend that time and those resources lazily maximizing engagement at any cost. The end result is everywhere you look.

The latest case in point: as hustlebros look to profit from automated engagement bait, The Verge notes that there has been a rise in automated obituary spam.

Like we’ve seen elsewhere in the field of journalism, engagement is all that matters, resulting in a flood of bizarre, automated zero-calorie gibberish where facts, truth, and public welfare simply don’t matter. The result: automated obituaries at unprecedented scale for people who aren’t dead. Take this poor widower, whose death was widely (and incorrectly) reported by dozens of trash automation sites:

“[The obituaries] had this real world impact where at least four people that I know of called [our] mutual friends, and thought that I had died with her, like we had a suicide pact or something,” says Vastag, who for a time was married to Mazur and remained close with her. “It caused extra distress to some of my friends, and that made me really angry.”

Much like the recent complaints over the deteriorating quality of Google News, and the deteriorating quality of Google search, Google sits nestled at the heart of the problem thanks to a refusal to meaningfully invest in combating “obituary scraping”:

“Google has long struggled to contain obituary spam — for years, low-effort SEO-bait websites have simmered in the background and popped to the top of search results after an individual dies. The sites then aggressively monetize the content by loading up pages with intrusive ads and profit when searchers click on results. Now, the widespread availability of generative AI tools appears to be accelerating the deluge of low-quality fake obituaries.”

Yes, managing this kind of flood of automated gibberish is, like content moderation, impossible to tackle perfectly (or anywhere close) at scale. At the same time, all of the financial incentives in the modern engagement infotainment economy point toward prioritizing the embrace of automated engagement bait, as opposed to spending time and resources policing information quality (even using AI).

As journalism collapses and a parade of engagement baiting automation (and rank political propaganda) fills the void, the American public’s head gets increasingly filled with pebbles, pudding, and hate. We’re in desperate need of a paradigm shift away from viewing absolutely everything (even human death) through the MBA lens of maximizing profitability and engagement at boundless scale at any cost.

At some point, morals, ethics, and competent leadership in the online information space need to make an appearance somewhere in the frame in a bid to protect public welfare and even the accurate documentation of history. It’s just decidedly unclear how we bridge the gap.

Funniest/Most Insightful Comments Of The Week At Techdirt

By: Leigh Beadon
February 18, 2024 at 21:00

This week, our first place winner on the insightful side is an anonymous piece-by-piece reply to another comment about the reporter who was suspended from ExTwitter hours after publishing an article about it:

“In other words, he either bot boosted an article about botting, or else the botting services are giving him a freebie.”

You need to up your reading comprehension, dude.

See: “Séamas O’Reilly: I criticised ‘free speech absolutist’ Elon Musk’s X. My account was suspended”, which is where your 75% figure comes from.

“Five minutes since I’d posted that article on Twitter, 75% of the replies I’d received had themselves been spam bots trying to sell fake shite to my followers.”

Not something beneficial for O’Reilly, no matter how you spin it. And describing self-interest on the part of the bot runners as “a freebie” is like calling #GamerGate “constructive criticism”.

In second place, it’s Amazing Rando with a comment about ExTwitter and regulatory compliance:

That’s why it’s so important to have a team, maybe call them Trust & Safety or Compliance to keep you out of this sort of trouble!

For editor’s choice on the insightful side, we’ve got a pair of comments about Wired’s recent fact-deficient attack on Section 230, and attacks on Section 230 in general. First, it’s an anonymous one:

Has anybody else noted that the hinted at subtext of all these anti 230 rants is: ‘The internet should be full of opinions that I like, and those that I dislike should be removed’. or ‘How dare you stop me from harassing people by telling them how wrong they are’.

Next, it’s That One Guy with a similar point:

A most telling absence

And the ‘There are no honest and/or reality-based arguments against 230’ streak remains unbroken since it made it into law.

It’s so strange, if the law really was as terrible as it’s critics want people to believe it is you’d think by now they’d have come up with at least one good argument against it that isn’t based upon misconceptions or flat out lies…

Over on the funny side, our first place winner is MrWilson with a response to our post about the cop who got in a fight with an acorn:

Just more anti-police hate speech from Tim. That acorn was dangerous and after it saw what happened to the back window of the vehicle, it’ll think twice about scarin’ the bajeezus out of that there brave deputy. Bet you wouldn’t be so brave as to unload a clip when a violent acorn comes for you!

Acorns Can Assault Badges!

In second place, it’s smb with a reply to a comment praising Musk and denying reality:

You forgot: “XTwitter is better and more popular than ever!”

For editor’s choice on the funny side, we start out with a response from Stephen T. Stone to a comment speculating about the reasons for ExTwitter’s deletion of a reporter’s account:

If Elon wants to ban Twitter accounts that make him (and his run as owner of Twitter) look bad, he should start with his own.

Finally, it’s That One Guy again, replying to a comment that did say “ExTwitter is better than ever”:

Not the brag you think it is

‘Since the local chapter of the KKK started hanging out at the bar I go to it’s been better than ever!’

That’s all for this week, folks!

This Week In Techdirt History: February 11th – 17th

By: Leigh Beadon
February 17, 2024 at 21:07

Five Years Ago

This week in 2019, the EU was stalwartly moving forward with Article 13 as part of its terrible copyright directive. Trump was preparing to ban Huawei, Monster Energy lost its trademark fight with Mosta Pizza, and a lawsuit against Bloomberg brought the “hot news doctrine” back into the conversation. A report showed that ICE almost never punished its contractors despite many violations, while key FOSTA supporter Cindy McCain claimed credit for stopping sex trafficking after misidentifying a child. A judge in Minnesota spent only minutes approving warrants to sweep up thousands of cellphone users, someone impersonated the New Jersey Attorney General to demand a takedown of 3D-printed gun instructions, and Sony was using copyright claims to take down its own anti-piracy propaganda.

Ten Years Ago

This week in 2014, government officials were leaking classified info to journalists in order to discredit Snowden for doing that very thing, while Snowden was expressing his willingness to answer questions from the European Parliament. US copyright lobbyists equated fair dealing with piracy, MPAA boss Chris Dodd was pretending to be ready to discuss copyright reform, details emerged about ASCAP screwing over Pandora, and a bunch of musicians joined forces to fight against compulsory licenses for remixes. This was also the week that the world was briefly confused, and amused, by the appearance and then rapid disappearance of Nathan Fielder’s “Dumb Starbucks”.

Fifteen Years Ago

This week in 2009, the biggest viral hit was the recording of Christian Bale rampaging on the set of a movie, and the director suggested Warner Bros might try to abuse copyright to suppress the clip. Dianne Feinstein was trying to sneak ISP copyright filtering into a broadband stimulus bill, the Pirate Bay trial in Sweden was set to be broadcast online, ASCAP was continuing its attacks against Lawrence Lessig and free culture, an EU Committee ignored all the research and approved copyright extension, and we learned about how US IP interests pressured Canada to join the WTO fight against China. We also had some questions about the curious fact that Google launched Android without multitouch functionality, while concerns were mounting over Google’s book search settlement.
