
Elon Musk's 'Election Interference'

By: Elizabeth Nolan Brown
July 31, 2024 at 17:52
Elon Musk | Credit: Tom Williams/CQ Roll Call/Newscom

A "White Dudes for Harris" Zoom call reportedly raised $4 million in donations for Vice President Kamala Harris' presidential campaign. After the call, the @dudes4Harris account on X was briefly suspended.

Is this election interference?

If we remain in reality, the answer is of course not.

Even if X CEO Elon Musk ordered the account suspended because of its politics, there would be no (legal) wrongdoing here. X is a private platform, and it doesn't have any obligation to be politically neutral. Explicitly suppressing pro-Harris content would be a bad business model, surely, but it would not be illegal. Musk and the platform formerly known as Twitter have no obligation to equally air conservative and progressive views or give equal treatment to Republican and Democratic candidates.

But there's no evidence that X was deliberately trying to thwart Harris organizers. The dudes4Harris account—which has no direct affiliation to the Harris campaign—was suspended after it promoted and held its Zoom call and was back the next day. That's a pretty bad plan if the goal was to stop its influence or fundraising. And there are all sorts of legitimate reasons why X may have suspended the account.

The account's suspension is "not that surprising," writes Techdirt Editor in Chief Mike Masnick (who, it should be noted, is intensely critical of X policies and Musk himself on many issues). "Shouldn't an account suddenly amassing a ton of followers with no clear official connection to the campaign and pushing people to donate maybe ring some internal alarm bells on any trust and safety team? It wouldn't be a surprise if it tripped some guardwires and was locked and/or suspended briefly while the account was reviewed. That's how this stuff works."

If we step out of reality into the partisan hysteria zone, however, then the account's temporary suspension was clearly an attempt by Musk to sway the 2024 election.

"Musk owns this platform, has endorsed [former President Donald] Trump, is deep into white identity grievance, and just shut down the account that was being used to push back against his core ideology and raise money for Trump's opponent. This is election interference, and it's hard to see it differently," posted political consultant Dante Atkins on X.

"X has SUSPENDED the White Dudes for Harris account (@dudes4harris) after it raised more than $4M for Kamala Harris. This is the real election interference!" Brett Meiselas, co-founder of the left-leaning MeidasTouch News, posted.

Versions of these sentiments are now all over X—which has also been accused of nefariously plotting against the KamalaHQ account and photographer Pete Souza. Some have even gone so far as to suggest that Musk is committing election interference merely by sharing misinformation about Harris or President Joe Biden, or by posting pro-Trump information from his personal account.

We're now firmly in "everything I don't like is election interference" territory. And we've been here before. In 2020, when social media platforms temporarily suppressed links to a story about Hunter Biden or suspended some conservative accounts, it was conservatives who cried foul, while many on the left mocked the idea that this was a plot by platforms to shape the election. Now that the proverbial shoe is on the other foot, progressives are making the same arguments that conservatives did back then.

Musk himself is not immune to this exercise in paranoia and confirmation bias. For whatever reason, Google allegedly wouldn't auto-populate search results with "Donald Trump" when Musk typed in "President Donald." So Musk posted a screenshot about this, asking "election interference?"

Again, in reality: no.

As many have pointed out, Google Search does indeed still auto-populate with Trump for them. So whatever was going on here may have simply been a temporary glitch. Or it may have been something specific to things Musk had previously typed into search.

Even if Google deliberately set out not to have Trump's name auto-populate, it wouldn't be election interference. It would be a weird and questionable business decision, not an illegal one. But the idea that the company would risk the backlash just to take so petty a step is silly. Note that Musk's allegation was not that Google was suppressing search results about Trump, just the auto-population of his name. What is the theory of action here—that people who were going to vote for Trump wouldn't after having to actually type out his name into Google Search? That they somehow wouldn't be able to find information about Trump without an auto-populated search term?

"Please. I beg of people: stop it. Stop it with the conspiracy theories," writes Masnick. "Stop it with the nonsense. If you can't find something you want on social media, it's not because a billionaire is trying to influence an election. It might just be because some antifraud system went haywire or something."

Yes. All of that.

But I suspect a lot of people know this and just don't care. Both sides have learned how to weaponize claims of election interference to harness attention, inspire anger, and garner clout.

Just a reminder: Actual election crimes include things like improperly laundering donations, trying to prevent people from voting, threatening people if they don't vote a certain way, providing false information on voter registration forms, voting more than once, or being an elected official who uses your power in a corrupt way to benefit a particular party or candidate. Trying to persuade people for or against certain candidates does not qualify, even if you're really rich or famous and even if your persuasion relies on misinformation.

Also, content moderation is impossibly difficult to do correctly. And tech companies have way more to lose than to gain by engaging in biased moderation.

So if you feel yourself wanting to fling claims of election interference at X, or Google, or Meta, or some other online platform: stop. Calm down. Take a breath, take a walk, whatever. This is a moral panic. Do not be its foot soldier.

More Sex & Tech News 

• The Kids Online Safety Act passed the Senate by a vote of 91-3 yesterday. Sens. Rand Paul (R–Ky.), Ron Wyden (D–Ore.), and Mike Lee (R–Utah) were the only ones who voted against it. (See more of this newsletter's coverage of KOSA here, here, and here.)

• A federal court has dismissed a case brought under the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA) against user-generated porn websites that allegedly allowed the publication of videos featuring a teenager. The person bringing the case said the sites were guilty of "receipt" of the videos. But "receipt of materials or content is, as it were, simply the first step in any publishing regime; if so, then mere receipt of illicit material is not sufficient to preclude immunity under Section 230," the court held.

• An expansive definition of "child sex trafficking" is being wielded to suggest that dating websites and apps should check IDs.

• The AI search wars have begun.

Today's Image

Is this election interference? | Cincinnati, 2023
ENB/Reason

 



Vivek Ramaswamy Buys Pointless Buzzfeed Stake So He Can Pretend He’s ‘Fixing Journalism’

By: Karl Bode
May 31, 2024 at 14:30

We’ve noted repeatedly how the primary problem with U.S. media and journalism often isn’t the actual journalists, or even the sloppy automation being used to cut corners; it’s the terrible, trust fund brunchlords that fail upwards into positions of power. The kind of owners and managers who, through malice or sheer incompetence, turn the outlets they oversee into either outright propaganda mills (Newsweek), or money-burning, purposeless mush (Vice, Buzzfeed, The Messenger, etc., etc.).

Very often these collapses are framed with the narrative that doing journalism online somehow simply can’t be profitable; something quickly disproven every time a group of journalists go off to start their own media venture without a useless executive getting outsized compensation and setting money on fire (see: 404 Media and countless other successful worker-owned journalistic ventures).

Of course these kinds of real journalistic outlets still have to scrap and fight for every nickel. At the same time, there’s just an unlimited amount of money available if you want to participate in the right wing grievance propaganda engagement economy, telling white young males that all of their very worst instincts are correct (see: Rogan, Taibbi, Rufo, Greenwald, Tracey, Tate, Peterson, etc. etc. etc. etc.).

One key player in this far right delusion farm, failed Presidential opportunist Vivek Ramaswamy, recently tried to ramp up his own make believe efforts to “fix journalism.” He did so by purchasing an 8 percent stake in what’s left of Buzzfeed after it basically gave up on trying to do journalism last year.

Ramaswamy’s demands are silly toddler gibberish; he insists that the outlet pivot to video and hire such intellectual heavyweights as Tucker Carlson and Aaron Rodgers:

“Mr. Ramaswamy is pushing BuzzFeed to add three new members to its board of directors, to hone its focus on audio and video content and to embrace “greater diversity of thought,” according to a copy of his letter shared with The New York Times.”

By “greater diversity of thought,” he means pushing facts-optional right wing grievance porn and propaganda pretending to be journalism, in a bid to further distract the public from issues of substance, and fill American heads with pudding.

But it sounds like Ramaswamy couldn’t even do that successfully. For one thing, Buzzfeed simply isn’t relevant as a news company any longer. Gone is the real journalism peppered between cutesy listicles, replaced mostly with mindless engagement bullshit. For another, Buzzfeed CEO Jonah Peretti (and affiliates) still hold 96 percent of the Class B stock, giving them 50 times the voting rights of Ramaswamy.

So as Elizabeth Lopatto at The Verge notes, Ramaswamy is either trying to goose and then sell his stock, or is engaging in a hollow and performative PR exercise where he can pretend that he’s “fixing liberal media.” Or both. The entire venture is utterly purposeless and meaningless:

“You’ve picked Buzzfeed because the shares are cheap, and because you have a grudge against a historically liberal outlet. It doesn’t matter that Buzzfeed News no longer exists — you’re still mad that it famously published the Steele dossier and you want to replace a once-respected, Pulitzer-winning brand with a half-assed “creators” plan starring Tucker Carlson and Aaron Rodgers. Really piss on your enemies’ graves, right, babe?”

While Ramaswamy’s bid is purely decorative, it, of course, was treated as a very serious effort to “fix journalism” by other pseudo-news outlets like the NY Post, The Hill, and Fox Business. It’s part of the broader right wing delusion that the real problem with U.S. journalism isn’t that it’s improperly financed and broadly mismanaged by raging incompetents, but that it’s not dedicated enough to coddling wealth and power. Or telling terrible, ignorant people exactly what they want to hear.

Of course none of this is any dumber than what happens in the U.S. media sector every day, as the Vice bankruptcy or the $50 million Messenger implosion so aptly illustrated. U.S. journalism isn’t just dying; the corpses of what remains are being abused by terrible, wealthy puppeteers with no ideas and nothing of substance to contribute (see the postmortem abuse of Newsweek or Sports Illustrated), and in that sense Vivek fits right in.


Russia and China are using OpenAI tools to spread disinformation

By: Financial Times
May 31, 2024 at 15:47

OpenAI said it was committed to uncovering disinformation campaigns and was building its own AI-powered tools to make detection and analysis "more effective." (credit: FT montage/NurPhoto via Getty Images)

OpenAI has revealed that operations linked to Russia, China, Iran and Israel have been using its artificial intelligence tools to create and spread disinformation, as technology becomes a powerful weapon in information warfare in an election-heavy year.

The San Francisco-based maker of the ChatGPT chatbot said in a report on Thursday that five covert influence operations had used its AI models to generate text and images at a high volume, with fewer language errors than previously, as well as to generate comments or replies to their own posts. OpenAI’s policies prohibit the use of its models to deceive or mislead others.

The content focused on issues “including Russia’s invasion of Ukraine, the conflict in Gaza, the Indian elections, politics in Europe and the United States, and criticisms of the Chinese government by Chinese dissidents and foreign governments,” OpenAI said in the report.



Key misinformation “superspreaders” on Twitter: Older women

By: John Timmer
May 30, 2024 at 22:28

An older woman holding a coffee mug and staring at a laptop on her lap. (credit: Alistair Berg)

Misinformation is not a new problem, but there are plenty of indications that the advent of social media has made things worse. Academic researchers have responded by trying to understand the scope of the problem, identifying the most misinformation-filled social media networks, organized government efforts to spread false information, and even prominent individuals who are the sources of misinformation.

All of that's potentially valuable data. But it skips over another major contribution: average individuals who, for one reason or another, seem inspired to spread misinformation. A study released today looks at a large panel of Twitter accounts that are associated with US-based voters (the work was done back when X was still Twitter). It identifies a small group of misinformation superspreaders, which represent just 0.3 percent of the accounts but are responsible for sharing 80 percent of the links to fake news sites.

While you might expect these to be young, Internet-savvy individuals who automate their sharing, it turns out this population tends to be older, female, and very, very prone to clicking the "retweet" button.

