FreshRSS


Is AI Search a Medical Misinformation Disaster?

By: Eliza Strickland (IEEE Spectrum)
13 June 2024 at 15:00


Last month when Google introduced its new AI search tool, called AI Overviews, the company seemed confident that it had tested the tool sufficiently, noting in the announcement that “people have already used AI Overviews billions of times through our experiment in Search Labs.” The tool doesn’t just return links to Web pages, as in a typical Google search, but returns an answer that it has generated based on various sources, which it links to below the answer. But immediately after the launch users began posting examples of extremely wrong answers, including a pizza recipe that included glue and the interesting fact that a dog has played in the NBA.
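Mechanically, this is a retrieval-augmented generation pattern: retrieve candidate pages for the query, have a language model synthesize an answer from them, and attach the links underneath. The Python sketch below is a toy illustration of that flow, not Google's actual pipeline; the search() and generate() stubs are hypothetical stand-ins for a real web index and a real model call.

```python
from dataclasses import dataclass

@dataclass
class Source:
    url: str
    snippet: str

def search(query: str) -> list[Source]:
    # Hypothetical retrieval stand-in; a real system queries a web index.
    return [
        Source("https://example.org/a", f"One claim about {query}."),
        Source("https://example.org/b", f"Another claim about {query}."),
    ]

def generate(query: str, sources: list[Source]) -> str:
    # Hypothetical synthesis stand-in; a real system prompts an LLM with
    # the retrieved snippets. Note that nothing here checks source quality:
    # whatever retrieval surfaces can flow straight into the answer.
    return " ".join(s.snippet for s in sources)

def ai_overview(query: str) -> dict:
    sources = search(query)
    return {
        "answer": generate(query, sources),  # shown at the top
        "links": [s.url for s in sources],   # cited below the answer
    }

print(ai_overview("glue on pizza"))
```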


While the pizza recipe is unlikely to convince anyone to squeeze on the Elmer’s, not all of AI Overview’s extremely wrong answers are so obvious—and some have the potential to be quite harmful. Renée DiResta has been tracking online misinformation for many years as the technical research manager at Stanford’s Internet Observatory and has a new book out about the online propagandists who “turn lies into reality.” She has studied the spread of medical misinformation via social media, so IEEE Spectrum spoke to her about whether AI search is likely to bring an onslaught of erroneous medical advice to unwary users.

I know you’ve been tracking disinformation on the Web for many years. Do you expect the introduction of AI-augmented search tools like Google’s AI Overviews to make the situation worse or better?

Renée DiResta: It’s a really interesting question. There are a couple of policies that Google has had in place for a long time that appear to be in tension with what’s coming out of AI-generated search. That’s made me feel like part of this is Google trying to keep up with where the market has gone. There’s been an incredible acceleration in the release of generative AI tools, and we are seeing Big Tech incumbents trying to make sure that they stay competitive. I think that’s one of the things that’s happening here.

We have long known that hallucinations are a thing that happens with large language models. That’s not new. It’s the deployment of them in a search capacity that I think has been rushed and ill-considered because people expect search engines to give them authoritative information. That’s the expectation you have on search, whereas you might not have that expectation on social media.

There are plenty of examples of comically poor results from AI search, things like how many rocks we should eat per day [a response that was drawn from an Onion article]. But I’m wondering if we should be worried about more serious medical misinformation. I came across one blog post about Google’s AI Overviews responses about stem-cell treatments. The problem there seemed to be that the AI search tool was sourcing its answers from disreputable clinics that were offering unproven treatments. Have you seen other examples of that kind of thing?

DiResta: I have. It’s returning information synthesized from the data that it’s trained on. The problem is that it does not seem to be adhering to the same standards that have long gone into how Google thinks about returning search results for health information. So what I mean by that is Google has, for upwards of 10 years at this point, had a search policy called Your Money or Your Life. Are you familiar with that?

I don’t think so.

DiResta: Your Money or Your Life acknowledges that for queries related to finance and health, Google has a responsibility to hold search results to a very high standard of care, and it’s paramount to get the information correct. People are coming to Google with sensitive questions and they’re looking for information to make materially impactful decisions about their lives. They’re not there for entertainment when they’re asking a question about how to respond to a new cancer diagnosis, for example, or what sort of retirement plan they should be subscribing to. So you don’t want content farms and random Reddit posts and garbage to be the results that are returned. You want to have reputable search results.

That framework of Your Money or Your Life has informed Google’s work on these high-stakes topics for quite some time. And that’s why I think it’s disturbing for people to see the AI-generated search results regurgitating clearly wrong health information from low-quality sites that perhaps happened to be in the training data.
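To make the tension concrete, here is a minimal sketch of what gating generated answers behind a Your Money or Your Life-style policy could look like: high-stakes queries get their sources filtered against an allowlist before any answer is synthesized. Everything in it is hypothetical, including the keyword list, the allowlist, and the routing logic; Google has not published how AI Overviews handles this.

```python
from urllib.parse import urlparse

# Illustrative only: the keyword list and domain allowlist below are
# invented for this sketch, not Google's actual YMYL implementation.
YMYL_KEYWORDS = [
    "cancer", "diagnosis", "vaccine", "treatment", "stem cell",  # health
    "retirement", "mortgage", "investment", "401k",              # finance
]

REPUTABLE_DOMAINS = {"nih.gov", "cdc.gov", "mayoclinic.org"}

def is_ymyl(query: str) -> bool:
    """Flag queries where getting the answer wrong has material consequences."""
    q = query.lower()
    return any(kw in q for kw in YMYL_KEYWORDS)

def filter_sources(query: str, urls: list[str]) -> list[str]:
    """For high-stakes queries, keep only allowlisted domains as answer sources."""
    if not is_ymyl(query):
        return urls
    def allowed(url: str) -> bool:
        host = urlparse(url).hostname or ""
        return any(host == d or host.endswith("." + d) for d in REPUTABLE_DOMAINS)
    return [u for u in urls if allowed(u)]

print(filter_sources(
    "stem cell treatment for knees",
    ["https://www.nih.gov/stem-cells", "https://shady-clinic.example/miracle"],
))
# -> ['https://www.nih.gov/stem-cells']
```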

So it seems like AI Overviews is not following that same policy—or that’s what it appears like from the outside?

DiResta: That’s how it appears from the outside. I don’t know how they’re thinking about it internally. But those screenshots you’re seeing—a lot of these instances are being traced back to an isolated social media post or a clinic that’s disreputable but exists—are out there on the Internet. It’s not simply making things up. But it’s also not returning what we would consider to be a high-quality result in formulating its response.

I saw that Google responded to some of the problems with a blog post saying that it is aware of these poor results and it’s trying to make improvements. And I can read you the one bullet point that addressed health. It said, “For topics like news and health, we already have strong guardrails in place. In the case of health, we launched additional triggering refinements to enhance our quality protections.” Do you know what that means?

DiResta: That blog post is an explanation that [AI Overviews] isn’t simply hallucinating—the fact that it’s pointing to URLs is supposed to be a guardrail because that enables the user to go and follow the result to its source. This is a good thing. They should be including those sources for transparency and so that outsiders can review them. However, it is also a fair bit of onus to put on the audience, given the trust that Google has built up over time by returning high-quality results in its health information search rankings.

I know one topic that you’ve tracked over the years has been disinformation about vaccine safety. Have you seen any evidence of that kind of disinformation making its way into AI search?

DiResta: I haven’t, though I imagine outside research teams are now testing results to see what appears. Vaccines have been so much a focus of the conversation around health misinformation for quite some time that I imagine Google has had people looking specifically at that topic in internal reviews, whereas some of these other topics might be less in the forefront of the minds of the quality teams that are tasked with checking if there are bad results being returned.

What do you think Google’s next moves should be to prevent medical misinformation in AI search?

DiResta: Google has a perfectly good policy to pursue. Your Money or Your Life is a solid ethical guideline to incorporate into this manifestation of the future of search. So it’s not that I think there’s a new and novel ethical grounding that needs to happen. I think it’s more ensuring that the ethical grounding that exists remains foundational to the new AI search tools.


Vivek Ramaswamy Buys Pointless Buzzfeed Stake So He Can Pretend He’s ‘Fixing Journalism’

By: Karl Bode (Techdirt)
31 May 2024 at 14:30

We’ve noted repeatedly how the primary problem with U.S. media and journalism often isn’t the actual journalists, or even the sloppy automation being used to cut corners; it’s the terrible, trust fund brunchlords that fail upwards into positions of power. The kind of owners and managers who, through malice or sheer incompetence, turn the outlets they oversee into either outright propaganda mills (Newsweek), or money-burning, purposeless mush (Vice, Buzzfeed, The Messenger, etc., etc.).

Very often these collapses are framed with the narrative that doing journalism online somehow simply can’t be profitable; something quickly disproven every time a group of journalists go off to start their own media venture without a useless executive getting outsized compensation and setting money on fire (see: 404 Media and countless other successful worker-owned journalistic ventures).

Of course these kinds of real journalistic outlets still have to scrap and fight for every nickel. At the same time, there’s just an unlimited amount of money available if you want to participate in the right wing grievance propaganda engagement economy, telling young white males that all of their very worst instincts are correct (see: Rogan, Taibbi, Rufo, Greenwald, Tracey, Tate, Peterson, etc. etc. etc. etc.).

One key player in this far right delusion farm, failed Presidential opportunist Vivek Ramaswamy, recently tried to ramp up his own make-believe efforts to “fix journalism.” He did so by purchasing an 8 percent stake in what’s left of Buzzfeed after it basically gave up on trying to do journalism last year.

Ramaswamy’s demands are silly toddler gibberish, demanding that the outlet pivot to video, and hire such intellectual heavyweights as Tucker Carlson and Aaron Rodgers:

“Mr. Ramaswamy is pushing BuzzFeed to add three new members to its board of directors, to hone its focus on audio and video content and to embrace “greater diversity of thought,” according to a copy of his letter shared with The New York Times.”

By “greater diversity of thought,” he means pushing facts-optional right wing grievance porn and propaganda pretending to be journalism, in a bid to further distract the public from issues of substance, and fill American heads with pudding.

But it sounds like Ramaswamy couldn’t even do that successfully. For one thing, Buzzfeed simply isn’t relevant as a news company any longer. Gone is the real journalism peppered between cutesy listicles, replaced mostly with mindless engagement bullshit. For another, Buzzfeed CEO Jonah Peretti (and affiliates) still hold 96 percent of the Class B stock, giving them 50 times the voting rights of Ramaswamy.

So as Elizabeth Lopatto at The Verge notes, Ramaswamy is either trying to goose and then sell his stock, or is engaging in a hollow and performative PR exercise where he can pretend that he’s “fixing liberal media.” Or both. The entire venture is utterly purposeless and meaningless:

“You’ve picked Buzzfeed because the shares are cheap, and because you have a grudge against a historically liberal outlet. It doesn’t matter that Buzzfeed News no longer exists — you’re still mad that it famously published the Steele dossier and you want to replace a once-respected, Pulitzer-winning brand with a half-assed “creators” plan starring Tucker Carlson and Aaron Rodgers. Really piss on your enemies’ graves, right, babe?”

While Ramaswamy’s bid is purely decorative, it, of course, was treated as a very serious effort to “fix journalism” by other pseudo-news outlets like the NY Post, The Hill, and Fox Business. It’s part of the broader right wing delusion that the real problem with U.S. journalism isn’t that it’s improperly financed and broadly mismanaged by raging incompetents, but that it’s not dedicated enough to coddling wealth and power. Or telling terrible, ignorant people exactly what they want to hear.

Of course none of this is any dumber than what happens in the U.S. media sector every day, as the Vice bankruptcy or the $50 million Messenger implosion so aptly illustrated. U.S. journalism isn’t just dying; the corpses of what remains are being abused by terrible, wealthy puppeteers with no ideas and nothing of substance to contribute (see the postmortem abuse of Newsweek or Sports Illustrated), and in that sense Vivek fits right in.


Russia and China are using OpenAI tools to spread disinformation

By: Financial Times (Ars Technica)
31 May 2024 at 15:47

OpenAI said it was committed to uncovering disinformation campaigns and was building its own AI-powered tools to make detection and analysis "more effective." (credit: FT montage/NurPhoto via Getty Images)

OpenAI has revealed that operations linked to Russia, China, Iran and Israel have been using its artificial intelligence tools to create and spread disinformation, as technology becomes a powerful weapon in information warfare in an election-heavy year.

The San Francisco-based maker of the ChatGPT chatbot said in a report on Thursday that five covert influence operations had used its AI models to generate text and images at a high volume, with fewer language errors than previously, as well as to generate comments or replies to their own posts. OpenAI’s policies prohibit the use of its models to deceive or mislead others.
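OpenAI's report does not spell out its detection methods, but the "replies to their own posts" behavior hints at the kind of signal investigators can compute. As a purely illustrative sketch (the account names and reply records below are invented), one crude proxy is the share of a suspected network's replies that target posts from inside the same network:

```python
# Hypothetical reply records: (author, replied_to_author). Invented for
# illustration; OpenAI's report does not describe its detection methods.
replies = [
    ("acct_a", "acct_b"), ("acct_b", "acct_a"), ("acct_a", "acct_c"),
    ("acct_c", "acct_a"), ("acct_d", "journalist_x"),
]

suspected_network = {"acct_a", "acct_b", "acct_c"}

def self_amplification_ratio(network: set[str], replies: list[tuple[str, str]]) -> float:
    """Share of the network's replies that target posts from the same network."""
    own = [(a, b) for a, b in replies if a in network]
    if not own:
        return 0.0
    internal = sum(1 for _, target in own if target in network)
    return internal / len(own)

ratio = self_amplification_ratio(suspected_network, replies)
print(f"{ratio:.0%} of the network's replies go to its own posts")  # 100%
```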

The content focused on issues “including Russia’s invasion of Ukraine, the conflict in Gaza, the Indian elections, politics in Europe and the United States, and criticisms of the Chinese government by Chinese dissidents and foreign governments,” OpenAI said in the report.



Ctrl-Alt-Speech: Between A Rock And A Hard Policy

By: Leigh Beadon (Techdirt)
11 May 2024 at 00:25

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

  • Stack Overflow bans users en masse for rebelling against OpenAI partnership (Tom’s Hardware)

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

