

Dear Taylor Swift: There Are Better Ways To Respond To Trump’s AI Images Of You Than A Lawsuit

We’ve written a ton about Taylor Swift’s various adventures in intellectual property law and the wider internet. Given her sheer popularity and presence in pop culture, that isn’t itself particularly surprising. What has been somewhat interesting about her as a Techdirt subject, though, is how she has straddled the line between being a victim of overly aggressive intellectual property enforcement and being a perpetrator of the same. All of this is to say that Swift is not a stranger to negative outcomes in the digital realm, nor is she a stranger to being the legal aggressor.

Which is why this post is something of an open letter to Her Swiftness, urging her not to listen to the roughly half of the internet clamoring for her to sue Donald Trump for sharing AI-generated images on social media that falsely imply she endorsed him. First, the facts.

Taylor Swift has yet to endorse any presidential candidate this election cycle. But former President Donald Trump says he accepts the superstar’s non-existent endorsement.

Trump posted “I accept!” on his Truth Social account, along with a carousel of (Swift) images – at least some of which appear to be AI-generated.

One of the AI-manipulated photos depicts Swift as Uncle Sam with the text, “Taylor wants you to vote for Donald Trump.” The other photos depict fans of Swift wearing “Swifties for Trump” T-shirts.

As the quote notes, not all of the images were AI-generated “fakes.” At least one of them was of a very real woman, who is very much a Swift fan, wearing a “Swifties for Trump” shirt. There is a similar social media campaign from the other side of the aisle, “Swifties for Kamala.” None of that is really much of an issue, of course. But the images Trump shared on Truth Social implied far more than the existence of a community of her fans who also like him. So much so, in fact, that he appeared to accept an endorsement that never was.

In case you didn’t notice, immediately below the top-left picture in that carousel is a label that clearly marks the article and associated images as “satire.” The image of Swift doing the Uncle Sam routine to recruit people to back Trump is also obviously not something that came directly from Swift or her people. In fact, while she has not endorsed a candidate in this election cycle (more on that in a moment), Swift endorsed Biden in 2020 with some particularly biting commentary on why she would not vote for Trump.

Now, Trump sharing misleading information on social media is about as newsworthy as the fact that the sun will set tonight. But it is worth noting that social media exploded in response, with a ton of people online urging Swift to “get her legal team involved” or “sue Trump!” And that is something she absolutely should not do. Some outlets have even suggested that Swift should sue under Tennessee’s new ELVIS Act, which prohibits the use of a person’s voice or image without authorization and which has never been tested in court.

Trump’s post might be all it takes to give Swift’s team grounds to sue Trump under Tennessee’s Ensuring Likeness Voice and Image Security Act, or ELVIS Act. The law protects against “just about any unauthorized simulation of a person’s voice or appearance,” said Joseph Fishman, a law professor at Vanderbilt University.

“It doesn’t matter whether an image is generated by AI or not, and it also doesn’t matter whether people are actually confused by it or not,” Fishman said. “In fact, the image doesn’t even need to be fake — it could be a real photo, just so long as the person distributing it knows the subject of the photo hasn’t authorized the use.”

Please don’t do this. First, it probably won’t work. Suing via an untested law that is very likely to run afoul of First Amendment protections is a great way to waste money. Trump also presumably didn’t create the images and is merely sharing, or re-truthing, them. That’s going to make holding him liable for them a challenge.

But the larger point is that all Swift really has to do here is respond, if she chooses, with her own political endorsement or thoughts. It’s not as though she didn’t do so in the last election cycle. If she’s annoyed at what Trump did and wants to punish him, she can solve that with more speech: her own. Hell, there aren’t a ton of people out there who can command an audience that rivals Donald Trump’s… but she almost certainly can!

Just point out that what he shared was fake. Mention, if she wishes, that she voted against him last time. If she likes, she might want to endorse a different candidate. Or she can merely leave it with a biting denial, such as:

“The images Donald Trump shared implied that I have endorsed him. I have not. In fact, I didn’t authorize him to use my image in any way and request that he does not in the future. On the other hand, Donald Trump has a history of not minding much when it comes to getting a woman’s consent, so I won’t get my hopes up too much.”

AI trained on photos from kids’ entire childhood without their consent


Photos of Brazilian kids—sometimes spanning their entire childhood—have been used without their consent to power AI tools, including popular image generators like Stable Diffusion, Human Rights Watch (HRW) warned on Monday.

This practice poses urgent privacy risks to kids and seems to increase the risk of non-consensual AI-generated images bearing their likenesses, HRW's report said.

An HRW researcher, Hye Jung Han, helped expose the problem. She analyzed "less than 0.0001 percent" of LAION-5B, a dataset built from Common Crawl snapshots of the public web. The dataset does not contain the actual photos but includes image-text pairs derived from 5.85 billion images and captions posted online since 2008.
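To make that structure concrete, here is a minimal, purely illustrative sketch of what such image-text pair records might look like and how a tiny sample could be scanned for captions that suggest personal photos of children. The field names and records are invented for illustration and are not the dataset's actual schema.

```python
# Hypothetical image-text pair records, mimicking the description above:
# a URL plus the caption scraped alongside it, rather than the photo itself.
records = [
    {"url": "https://example-blog.com/family/birthday-2012.jpg",
     "text": "My daughter's 5th birthday party"},
    {"url": "https://example-shop.com/products/red-mug.jpg",
     "text": "Red ceramic mug, 300 ml"},
]

# The kind of audit HRW describes: flagging pairs whose captions look like
# personal photos of children (keyword list is an illustrative assumption).
keywords = ("birthday", "daughter", "son", "school", "family")
flagged = [r for r in records if any(k in r["text"].lower() for k in keywords)]
print(f"{len(flagged)} of {len(records)} sampled pairs look like personal photos")
```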


Ctrl-Alt-Speech: Do You Really Want The Government In Your DMs?

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Meta’s AI Watermarking Plan Is Flimsy, at Best



In the past few months, we’ve seen a deepfake robocall of Joe Biden encouraging New Hampshire voters to “save your vote for the November election” and a fake endorsement of Donald Trump from Taylor Swift. It’s clear that 2024 will mark the first “AI election” in United States history.

With many advocates calling for safeguards against AI’s potential harms to our democracy, Meta (the parent company of Facebook and Instagram) proudly announced last month that it will label AI-generated content that was created using the most popular generative AI tools. The company said it’s “building industry-leading tools that can identify invisible markers at scale—specifically, the ‘AI generated’ information in the C2PA and IPTC technical standards.”

Unfortunately, social media companies will not solve the problem of deepfakes on social media this year with this approach. Indeed, this new effort will do very little to tackle the problem of AI-generated material polluting the election environment.

The most obvious weakness is that Meta’s system will work only if the bad actors creating deepfakes use tools that already put watermarks—that is, hidden or visible information about the origin of digital content—into their images. Most unsecured “open-source” generative AI tools don’t produce watermarks at all. (We use the term unsecured and put “open-source” in quotes to denote that many such tools don’t meet traditional definitions of open-source software, but still pose a threat because their underlying code or model weights have been made publicly available.) If new versions of these unsecured tools are released that do contain watermarks, the old tools will still be available and able to produce watermark-free content, including personalized and highly persuasive disinformation and nonconsensual deepfake pornography.

We are also concerned that bad actors can easily circumvent Meta’s labeling regimen even if they are using the AI tools that Meta says will be covered, which include products from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock. Given that it takes about 2 seconds to remove a watermark from an image produced using the current C2PA watermarking standard that these companies have implemented, Meta’s promise to label AI-generated images falls flat.

When the authors uploaded an image they’d generated to a website that checks for watermarks, the site correctly stated that it was a synthetic image generated by an OpenAI tool. IEEE Spectrum

We know this because we were able to easily remove the watermarks Meta claims it will detect—and neither of us is an engineer. Nor did we have to write a single line of code or install any software.

First, we generated an image with OpenAI’s DALL-E 3. Then, to see if the watermark worked, we uploaded the image to the C2PA content credentials verification website. A simple and elegant interface showed us that this image was indeed made with OpenAI’s DALL-E 3. How did we then remove the watermark? By taking a screenshot. When we uploaded the screenshot to the same verification website, it found no evidence that the image had been generated by AI. The same process worked when we made an image with Meta’s AI image generator, took a screenshot of it, and uploaded that to a website that detects the IPTC metadata containing Meta’s AI “watermark.”

However, when the authors took a screenshot of the image and uploaded that screenshot to the same verification site, the site found no watermark and therefore no evidence that the image was AI generated. IEEE Spectrum
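The weakness the authors describe is that this kind of provenance lives in the file rather than in the pixels. As a rough illustration of the general mechanism (not the authors' exact workflow, and simplified to ordinary file metadata rather than a full C2PA manifest), re-rendering only the pixel data, which is effectively what a screenshot does, yields a file with no provenance attached. The sketch below assumes a Python environment with Pillow and a hypothetical "labeled.png" input.

```python
from PIL import Image

# Hypothetical AI-generated image that carries provenance metadata in the file.
original = Image.open("labeled.png")
print("metadata keys in original:", list(original.info.keys()))

# Copy only the pixel data into a brand-new image, roughly what a screenshot captures.
pixels_only = original.convert("RGB")
rerendered = Image.new("RGB", pixels_only.size)
rerendered.putdata(list(pixels_only.getdata()))
rerendered.save("screenshot_like.png")

# The re-rendered file carries the same picture but none of the original metadata.
print("metadata keys after re-render:",
      list(Image.open("screenshot_like.png").info.keys()))
```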

Is there a better way to identify AI-generated content?

Meta’s announcement states that it’s “working hard to develop classifiers that can help...to automatically detect AI-generated content, even if the content lacks invisible markers.” It’s nice that the company is working on it, but until it succeeds and shares this technology with the entire industry, we will be stuck wondering whether anything we see or hear online is real.

For a more immediate solution, the industry could adopt maximally indelible watermarks—meaning watermarks that are as difficult to remove as possible.

Today’s imperfect watermarks typically attach information to a file in the form of metadata. For maximally indelible watermarks to offer an improvement, they need to hide information imperceptibly in the actual pixels of images, in the waveforms of audio (Google DeepMind claims to have done this with its proprietary SynthID watermark), or through slightly modified word-frequency patterns in AI-generated text. We use the term “maximally” to acknowledge that there may never be a perfectly indelible watermark. This is not a problem unique to watermarks, though. The celebrated security expert Bruce Schneier notes that “computer security is not a solvable problem…. Security has always been an arms race, and always will be.”
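To illustrate the distinction, here is a toy sketch (using Pillow, and nothing like production systems such as SynthID) of a pixel-domain mark: a single "AI generated" flag hidden in the least-significant bits of the red channel. Unlike metadata, it survives a lossless screenshot of the pixels, though it would not survive resizing or heavy recompression, which is exactly why "maximally indelible" remains aspirational.

```python
from PIL import Image

def embed_flag(img: Image.Image, flag: bool) -> Image.Image:
    """Set the least-significant bit of every red value to `flag` (1 = 'AI generated')."""
    out = img.convert("RGB").copy()
    pixels = out.load()
    for x in range(out.width):
        for y in range(out.height):
            r, g, b = pixels[x, y]
            pixels[x, y] = ((r & ~1) | int(flag), g, b)
    return out

def read_flag(img: Image.Image) -> bool:
    """Recover the flag by majority vote over red-channel least-significant bits."""
    rgb = img.convert("RGB")
    ones = sum(rgb.getpixel((x, y))[0] & 1
               for x in range(rgb.width) for y in range(rgb.height))
    return ones > (rgb.width * rgb.height) / 2

marked = embed_flag(Image.new("RGB", (64, 64), "gray"), flag=True)
marked.save("marked.png")                   # PNG is lossless, so the bits survive
print(read_flag(Image.open("marked.png")))  # True
```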

In metaphorical terms, it’s instructive to consider automobile safety. No car manufacturer has ever produced a car that cannot crash. Yet that hasn’t stopped regulators from implementing comprehensive safety standards that require seatbelts, airbags, and backup cameras on cars. If we waited for safety technologies to be perfected before requiring implementation of the best available options, we would be much worse off in many domains.

There’s increasing political momentum to tackle deepfakes. Fifteen of the biggest AI companies—including almost every one mentioned in this article—signed on to the White House Voluntary AI Commitments last year, which included pledges to “develop robust mechanisms, including provenance and/or watermarking systems for audio or visual content” and to “develop tools or APIs to determine if a particular piece of content was created with their system.” Unfortunately, the White House did not set any timeline for the voluntary commitments.

Then, in October, the White House, in its AI Executive Order, defined AI watermarking as “the act of embedding information, which is typically difficult to remove, into outputs created by AI—including into outputs such as photos, videos, audio clips, or text—for the purposes of verifying the authenticity of the output or the identity or characteristics of its provenance, modifications, or conveyance.”

Next, at the Munich Security Conference on 16 February, a group of 20 tech companies (half of which had previously signed the voluntary commitments) signed onto a new “Tech Accord to Combat Deceptive Use of AI in 2024 Elections.” Without making any concrete commitments or providing any timelines, the accord offers a vague intention to implement some form of watermarking or content-provenance efforts. Although a standard is not specified, the accord lists both C2PA and SynthID as examples of technologies that could be adopted.

Could regulations help?

We’ve seen examples of robust pushback against deepfakes. Following the AI-generated Biden robocalls, the New Hampshire Department of Justice launched an investigation in coordination with state and federal partners, including a bipartisan task force made up of all 50 state attorneys general and the Federal Communications Commission. Meanwhile, in early February the FCC clarified that calls using voice-generation AI will be considered artificial and subject to restrictions under existing laws regulating robocalls.

Unfortunately, we don’t have laws to force action by either AI developers or social media companies. Congress and the states should mandate that all generative AI products embed maximally indelible watermarks in their image, audio, video, and text content using state-of-the-art technology. They should also address risks from unsecured “open-source” systems that can either have their watermarking functionality disabled or be used to remove watermarks from other content. Furthermore, any company that makes a generative AI tool should be encouraged to release a detector that can identify, with the highest accuracy possible, any content it produces. This proposal shouldn’t be controversial, as its rough outlines have already been agreed to by the signers of the voluntary commitments and the recent elections accord.

Standards organizations like C2PA, the National Institute of Standards and Technology, and the International Organization for Standardization should also move faster to build consensus and release standards for maximally indelible watermarks and content labeling in preparation for laws requiring these technologies. Google, as C2PA’s newest steering committee member, should also quickly move to open up its seemingly best-in-class SynthID watermarking technology to all members for testing.

Misinformation and voter deception are nothing new in elections. But AI is accelerating existing threats to our already fragile democracy. Congress must also consider what steps it can take to protect our elections more generally from those who are seeking to undermine them. That should include some basic steps, such as passing the Deceptive Practices and Voter Intimidation Prevention Act, which would make it illegal to knowingly lie to voters about the time, place, and manner of elections with the intent of preventing them from voting in the period before a federal election.

Congress has been woefully slow to take up comprehensive democracy reform in the face of recent shocks. The potential amplification of these shocks through abuse of AI ought to be enough to finally get lawmakers to act.

The FCC’s Ban on AI in Robocalls Won’t Be Enough



In the days before the U.S. Democratic Party’s New Hampshire primary election on 23 January, potential voters began receiving a call with AI-generated audio of a fake President Biden urging them not to vote until the general election in November. In Slovakia a Facebook post contained fake, AI-generated audio of a presidential candidate planning to steal the election—which may have tipped the election in another candidate’s favor. Recent elections in Indonesia and Taiwan have been marred by AI-generated misinformation, too.

In response to the faux-Biden robocall in New Hampshire, the U.S. Federal Communications Commission moved to make AI-generated voices in robocalls illegal on 8 February. But experts IEEE Spectrum spoke to aren’t convinced that the move will be enough, even as generative AI brings new twists to old robocall scams and offers opportunities to turbocharge efforts to defraud individuals.

The total amount lost to scams and spam in the United States in 2022 is thought to be US $39.5 billion, according to Truecaller, which makes a caller-ID and spam-blocking app. That same year, the average amount of money lost by people scammed in the United States was $431.26, according to a survey by Hiya, a company that provides call-protection and identity services. Hiya says that amount stands to go up as the use of generative AI gains traction.

“In aggregate, it’s mind-boggling how much is lost to fraud perpetuated through robocalls,” says Eric Burger, the research director of the Commonwealth Cyber Initiative at Virginia Tech.


AI Will Make It Easier for Scammers to Target Individuals

“The big fear with generative AI is it’s going to take custom-tailored scams and take them mainstream,” says Jonathan Nelson, director of product management at Hiya. In particular, he says, generative AI will make it easier to carry out spear-phishing attacks.

The Cost of Phone Fraud


The average amount of money lost by a phone-scam victim in 2022, in U.S. dollars:
  • United States: $431.26
  • UK: $324.04
  • Canada: $472.87
  • France: $360.62
  • Germany: $325.87
  • Spain: $282.35

Source: Hiya

Generally, phishing attacks aim to trick people into parting with personal information, such as passwords and financial information. Spear-phishing, however, is more targeted: The scammer knows exactly whom they’re targeting, and they’re hoping for a bigger payout through a more tailored approach. Now, with generative AI, Nelson says, a scammer can scrape social-media sites, draft text, and even clone a trusted voice to part unsuspecting individuals from their money en masse.

With the FCC’s unanimous vote to make generative AI in robocalls illegal, the question naturally turns to enforcement. That’s where the experts whom IEEE Spectrum spoke to are generally doubtful, although many also see it as a necessary first step. “It’s a helpful step,” says Daniel Weiner, the director of the Brennan Center’s Elections and Government Program, “but it’s not a full solution.” Weiner says that it’s difficult for the FCC to take a broader regulatory approach in the same vein as the general prohibition on deepfakes being mulled by the European Union, given the FCC’s scope of authority.

Burger, who was the FCC’s chief technology officer from 2017 to 2019, says that the agency’s vote will ultimately have an impact only if it starts enforcing the ban on robocalls more generally. Most types of robocalls have been prohibited since the Telephone Consumer Protection Act, which the agency enforces, was enacted in 1991. (There are some exceptions, such as prerecorded messages from your dentist’s office reminding you of an upcoming appointment.)

“Enforcement doesn’t seem to be happening,” says Burger. “The politicians like to say, ‘We’re going after the bad guys,’ and they don’t—not with the vigor we’d like to see.”

Robocall Enforcement Tools May Not Be Enough Against AI

The key method to identify the source of a robocall—and therefore prevent bad actors from continuing to make them—is to trace the call back through the complex network of telecom infrastructure and identify the call’s originating point. Tracebacks used to be complicated affairs, as a call typically traverses infrastructure maintained by multiple network operators like AT&T and T-Mobile. However, in 2020, the FCC approved a mandate for network operators to begin implementing a protocol called STIR/SHAKEN that would, among other antirobocall measures, make one-step tracebacks possible.
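For context, the claim that STIR/SHAKEN attaches to each call is a signed "PASSporT" token (RFC 8225, with the SHAKEN extension in RFC 8588) carried in the SIP Identity header; the "origid" field is what lets investigators identify the originating provider in one step. The sketch below builds an illustrative, unsigned payload with invented values; real tokens are signed with the originating provider's certificate, which is omitted here.

```python
import base64
import json

# Illustrative SHAKEN PASSporT claims (values are invented for this sketch).
passport_payload = {
    "attest": "A",                     # A = provider vouches for the caller and the number
    "orig": {"tn": "12025550123"},     # calling number
    "dest": {"tn": ["16035550199"]},   # called number
    "iat": 1706198400,                 # when the call was placed
    "origid": "6e7a0a1c-example",      # opaque ID pointing back to the originating provider
}

# The claims are base64url-encoded into the token carried in the SIP Identity header.
encoded = base64.urlsafe_b64encode(json.dumps(passport_payload).encode()).rstrip(b"=")
print("claims portion of the Identity header:", encoded.decode()[:60], "...")
```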

“One-step traceback has been borne out,” says Burger. Traceback, for example, identified the source of the fake Biden calls targeting New Hampshire voters as a Texas-based company called Life Corporation. The problem, Burger says, is that the FCC, the U.S. Federal Bureau of Investigation, and state agencies aren’t providing the resources to make it possible to go after the sheer number of illegal robocall operations. Historically, the FCC has gone after only the very largest perpetrators.

“There is no stopping these calls,” says Hiya’s Nelson—at least not entirely. “Our job isn’t to stop them, it’s to make them unprofitable.” Hiya, like similar companies, aims to accomplish that goal by lowering the amount of successful fraud through protective services, including exposing where a call was created and by whom, to make it less likely that an individual will answer the call in the first place.

However, Nelson worries that generative AI will make the barrier to entry so low that those preventative actions will be less effective. For example, today’s scams still almost always require transferring the victim to a live agent in a call center to close out the scam successfully. With AI-generated voices, scam operators can eventually cut out the call center entirely.


Nelson is also concerned that as generative AI improves, it will be harder for people to even recognize that they weren’t speaking to an actual person in the first place. “That’s where we’re going to start to lose our footing,” says Nelson. “We may have an increase in call recipients not realizing it’s a scam at all.” Scammers positioning themselves as fake charities, for example, could successfully solicit “donations” without donors ever realizing what actually happened.

“I don’t think we can appreciate just how fast the telephone experience is going to change because of this,” says Nelson.

One other complicating issue for enforcement is that the majority of illegal robocalls in the United States originate from beyond the country’s borders. The Industry Traceback Group found that in 2021, for example, 65 percent of all such calls were international in origin.

Burger points out that the FCC has taken steps to combat international robocalls. The agency made it possible for other carriers to refuse to pass along traffic from gateway providers—a term for network operators connecting domestic infrastructure to international infrastructure—that are originating scam calls. In December 2023, for example, the FCC ordered two companies, Solid Double and CallWin, to stop transmitting illegal robocalls or risk other carriers being required to refuse their traffic.


The FCC’s recent action against generative AI in robocalls is the first of its kind, and it remains to be seen if regulatory bodies in other countries will follow. “I certainly think the FCC is setting a good example in swift and bold action in the scope of its regulatory authority,” says Weiner. However, he also notes that the FCC’s counterparts in other democracies will likely end up with more comprehensive results.

It’s hard to say how the FCC’s actions will stack up against those of other regulators, according to Burger. As often as the FCC is way ahead of the curve, as it was with spectrum sharing, it’s just as often way behind, as with mid-band 5G.

Nelson says he expects to see revisions to the FCC’s decision within a couple of years, because it currently prevents companies from using generative AI for legitimate business practices.

It also remains to be seen whether the FCC’s vote will have any real effect. Burger points out that, in the case of calls like the fake Biden one, it was already illegal to place those robocalls and impersonate the president, so making another aspect of the call illegal likely won’t be a game-changer.

“By making it triply illegal, is that really going to deter people?” Burger says.

Will Smith parodies viral AI-generated video by actually eating spaghetti

The real Will Smith eating spaghetti, parodying an AI-generated video from 2023. (credit: Will Smith / Getty Images / Benj Edwards)

On Monday, Will Smith posted a video on his official Instagram feed that parodied an AI-generated video of the actor eating spaghetti that went viral last year. With the recent announcement of OpenAI's Sora video synthesis model, many people have noted the dramatic jump in AI-video quality over the past year compared to the infamous spaghetti video. Smith's new video plays on that comparison by showing the actual actor eating spaghetti in a comical fashion and claiming that it is AI-generated.

Captioned "This is getting out of hand!", the Instagram video uses a split screen layout to show the original AI-generated spaghetti video created by a Reddit user named "chaindrop" in March 2023 on the top, labeled with the subtitle "AI Video 1 year ago." Below that, in a box titled "AI Video Now," the real Smith shows 11 video segments of himself actually eating spaghetti by slurping it up while shaking his head, pouring it into his mouth with his fingers, and even nibbling on a friend's hair. 2006's Snap Yo Fingers by Lil Jon plays in the background.

In the Instagram comments section, some people expressed confusion about the new (non-AI) video, saying, "I'm still in doubt if second video was also made by AI or not." In a reply, someone else wrote, "Boomers are gonna loose [sic] this one. Second one is clearly him making a joke but I wouldn’t doubt it in a couple months time it will get like that."

