FreshRSS

  • ✇Ars Technica - All content

OpenAI has the tech to watermark ChatGPT text—it just won’t release it

By: Samuel Axon
August 6, 2024 at 00:12
[Image: OpenAI logo displayed on a phone screen and the ChatGPT website on a laptop screen. Credit: Getty Images]

According to The Wall Street Journal, there's internal conflict at OpenAI over whether to release a watermarking tool that would let people test whether a given piece of text was generated by ChatGPT.

To deploy the tool, OpenAI would tweak ChatGPT so that the text it generates carries a trail that can be detected by a special tool. The watermark would be imperceptible to human readers without that tool, and the company's internal testing has shown that it does not degrade the quality of outputs. The detector would be accurate 99.9 percent of the time. Importantly, the watermark would be a pattern in the text itself, so it would survive copying and pasting and even modest edits.
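
OpenAI hasn't said how its scheme works. For a flavor of how a watermark can live in the word choices themselves, here is a minimal sketch of the "green list" technique from the published research literature (Kirchenbauer et al., 2023), offered as an illustration of the general idea rather than OpenAI's actual method. A generator softly biases each next-token choice toward a pseudorandom "green" subset of the vocabulary keyed to the previous token; a detector then measures how improbably green the text is.

    # Sketch of "green list" watermarking (Kirchenbauer et al., 2023).
    # Illustrates the general technique only; OpenAI's scheme is not public.
    # Tokens are plain integer IDs over a toy vocabulary.
    import hashlib
    import math
    import random

    VOCAB_SIZE = 50_000
    GAMMA = 0.5  # fraction of the vocabulary on the "green" list at each step

    def green_list(prev_token: int) -> set[int]:
        """Pseudorandom vocabulary partition, keyed to the previous token."""
        seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16)
        rng = random.Random(seed)
        return set(rng.sample(range(VOCAB_SIZE), int(GAMMA * VOCAB_SIZE)))

    def watermark_z_score(tokens: list[int]) -> float:
        """How far the green-token count exceeds what chance predicts."""
        hits = sum(tok in green_list(prev) for prev, tok in zip(tokens, tokens[1:]))
        n = len(tokens) - 1
        return (hits - GAMMA * n) / math.sqrt(GAMMA * (1 - GAMMA) * n)

    # A watermarking model adds a small bias to green-token logits during
    # generation. Simulate a stream that picks green tokens 90% of the time:
    rng = random.Random(0)
    tokens = [rng.randrange(VOCAB_SIZE)]
    for _ in range(200):
        greens = green_list(tokens[-1])
        pool = greens if rng.random() < 0.9 else set(range(VOCAB_SIZE)) - greens
        tokens.append(rng.choice(sorted(pool)))
    print(f"z-score of simulated watermarked text: {watermark_z_score(tokens):.1f}")

Because the green/red split depends only on adjacent tokens, the statistical signal survives copy-paste and degrades only gradually under light editing, which matches the robustness described above.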

Some OpenAI employees have campaigned for the tool's release, but others believe that would be the wrong move, citing a few specific problems.


  • ✇Ars Technica - All content

Sony Music opts out of AI training for its entire catalog

By: Financial Times
May 17, 2024 at 15:16
[Image: Beyoncé, one of Sony's artists. The Sony Music letter expressly prohibits artificial intelligence developers from using its music, which includes artists such as Beyoncé. Credit: Kevin Mazur/WireImage for Parkwood via Getty Images]

Sony Music is sending warning letters to more than 700 artificial intelligence developers and music streaming services globally in the latest salvo in the music industry’s battle against tech groups ripping off artists.

The Sony Music letter, which has been seen by the Financial Times, expressly prohibits AI developers from using its music—which includes artists such as Harry Styles, Adele and Beyoncé—and opts out of any text and data mining of any of its content for any purposes such as training, developing or commercializing any AI system.

Sony Music is sending the letter to companies developing AI systems, including OpenAI, Microsoft, Google, Suno, and Udio, according to people close to the group.


  • ✇Techdirt

CEO: ‘AI’ Power Drain Could Cause Data Centers To Run Out Of Power Within Two Years

By: Karl Bode
May 10, 2024 at 14:24

By now it’s been made fairly clear that the bedazzling wonderment that is “AI” doesn’t come cheap. Story after story has highlighted how the technology consumes massive amounts of electricity and water, and we’re not really adapting to keep pace. This is also occurring alongside a destabilizing climate crisis that’s already putting a capacity and financial strain on aging electrical infrastructure.

A new report from the International Energy Agency (IEA) indicates that the 460 terawatt-hours (TWh) consumed by data centers in 2022 represented 2% of all global electricity usage, driven mostly by compute and data center cooling. AI and crypto mining are expected to double that consumption by 2026.
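
It’s worth running those quoted numbers once for scale. This is my arithmetic, not the IEA’s, and it assumes global usage holds at the 2022 level, which it won’t:

    # Back-of-the-envelope check of the IEA figures quoted above.
    data_centers_2022_twh = 460   # IEA: data-center consumption in 2022
    global_share_2022 = 0.02      # IEA: 2% of all global electricity

    implied_world_total_twh = data_centers_2022_twh / global_share_2022  # ~23,000 TWh
    projected_2026_twh = 2 * data_centers_2022_twh                       # doubling: ~920 TWh

    print(f"Implied 2022 world total: {implied_world_total_twh:,.0f} TWh")
    print(f"Projected 2026 data-center draw: {projected_2026_twh:,.0f} TWh, "
          f"about {projected_2026_twh / implied_world_total_twh:.0%} of the 2022 world total")

Doubling from a 2% base sounds modest as a global average; the crunch described below is about where, and how fast, that extra draw lands on local grids.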

Marc Ganzi, CEO of data center company DigitalBridge, isn’t really being subtle about his warnings. He claims that data centers are going to start running out of power within the next 18-24 months:

“We started talking about this over two years ago at the Berlin Infrastructure Conference when I told the investor world, we’re running out of power in five years. Well, I was wrong about that. We’re kind of running out of power in the next 18 to 24 months.”

Of course when these guys say “we” are going to run out of power, they really mean you (the plebs) will be running out of power. They’ll find solutions to address their need for unlimited power, and the strain will likely be shifted to areas, companies, and residents with far less robust lobbying budgets.

Data centers can move operations closer to natural gas, hydropower sources, or nuclear plants. Some are even using decommissioned Navy ships to exploit liquid cooling. But a report by the financial analysts at TD Cowen says there’s now a three-plus-year lead time on bringing new power connections to data centers. It’s a seven-year wait in Silicon Valley, and eight years in markets like Frankfurt, London, Amsterdam, Paris, and Dublin.

Network engineers have seen this problem coming for years. Yet crypto and AI power consumption, combined with the strain of climate dysregulation, still isn’t really a problem the sector is prepared for. And when the blame comes, the VC hype bros who got out over their skis, or utilities that failed to modernize for modern demand and climate stability issues won’t blame themselves, but regulation:

“[Cisco VP Denise] Lee said that, now, two major trends are getting ready to crash into each other: Cutting-edge AI is supercharging demand for power-hungry data center processing, while slow-moving power utilities are struggling to keep up with demand amid outdated technologies and voluminous regulations.”

While I’m sure utilities and data centers certainly face some annoying regulations, the real problem rests on the back of technology hype cycles that don’t really care about the real-world impact of their hyper-scaled profit seeking. As always, the real-world impact of the relentless pursuit of unlimited wealth and impossible scale is somebody else’s problem to figure out later, likely at significant taxpayer cost.

This story is playing out against a backdrop of a total breakdown of federal regulatory guidance. Bickering state partisans are struggling to coordinate vastly different and often incompatible visions of our energy future, while a corrupt Supreme Court prepares several pro-corporate rulings designed to dismantle what’s left of coherent federal regulatory independence.

I would suspect the crypto and AI-hyping VCs (and the data centers that profit off of the relentless demand for unlimited computational power and energy) will be fine. Not so sure about everybody else, though.

  • ✇Techdirt

Nurses Say Hospital Adoption Of Half-Cooked ‘AI’ Is Reckless

By: Karl Bode
May 2, 2024 at 14:22

We’ve noted repeatedly that while “AI” (large language models) holds a lot of potential, the rushed implementation of half-assed early variants is causing no shortage of headaches across journalism, media, health care, and other sectors. In part because the kind of terrible brunchlord managers in charge of many institutions primarily see AI as a way to cut corners and attack labor.

It’s been a particular problem in healthcare, where broken “AI” is being layered on top of already broken systems. Like in insurance, where error-prone automation, programmed from the ground up to prioritize money over health, is incorrectly denying essential insurance coverage to the elderly.

Last week, hundreds of nurses protested in front of Kaiser Permanente against the sloppy implementation of AI in hospital systems. Their primary concern: that systems incapable of empathy are being integrated into an already dysfunctional sector without much thought toward patient care:

“No computer, no AI can replace a human touch,” said Amy Grewal, a registered nurse. “It cannot hold your loved one’s hand. You cannot teach a computer how to have empathy.”

There are certainly roles automation can play in easing strain on a sector full of burnout after COVID, particularly when it comes to administrative tasks. The concern, as with other industries dominated by executives with poor judgement, is that this is being used as a justification by for-profit hospital systems to cut corners further. From a National Nurses United blog post (spotted by 404 Media):

“Nurses are not against scientific or technological advancement, but we will not accept algorithms replacing the expertise, experience, holistic, and hands-on approach we bring to patient care,” they added.

Kaiser Permanente, for its part, insists it’s simply leveraging “state-of-the-art tools and technologies that support our mission of providing high-quality, affordable health care to best meet our members’ and patients’ needs.” The company claims its “Advance Alert” AI monitoring system — which algorithmically analyzes patient data every hour — has the potential to save upwards of 500 lives a year.
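
Kaiser has published little technical detail about Advance Alert. Purely to make “algorithmically analyzes patient data every hour” concrete, here is a toy sketch of a periodic early-warning sweep of that general shape; every name, input, and threshold is invented:

    # Toy sketch of an hourly early-warning sweep. Kaiser's actual Advance
    # Alert system is not public; everything here is invented for illustration.
    import time
    from dataclasses import dataclass
    from typing import Callable, Iterable

    @dataclass
    class Vitals:
        heart_rate: float
        resp_rate: float
        systolic_bp: float

    def deterioration_score(v: Vitals) -> float:
        """Stand-in for a trained risk model (real systems use far richer data)."""
        return (0.4 * (v.heart_rate > 110)
                + 0.3 * (v.resp_rate > 24)
                + 0.3 * (v.systolic_bp < 90))

    def hourly_sweep(fetch_vitals: Callable[[], Iterable[tuple[str, Vitals]]],
                     alert: Callable[[str], None],
                     threshold: float = 0.6) -> None:
        """Score every monitored patient once an hour; escalate high scores."""
        while True:
            for patient_id, vitals in fetch_vitals():   # hypothetical EHR feed
                if deterioration_score(vitals) >= threshold:
                    alert(patient_id)                   # e.g. page a rapid-response nurse
            time.sleep(3600)

The nurses’ objection isn’t to a loop like this existing; it’s to what hospitals do with its output, and whether a flag supplements clinical judgment or substitutes for staffing.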

The problem is that healthcare giants’ primary obligation no longer appears to reside with patients, but with their financial results, and that’s true even of non-profit healthcare providers. It shows up as cut corners, worse service, and an assault on already overtaxed labor via lower pay and higher workloads (curiously, it never seems to impact outsized high-level executive compensation).

AI provides companies the perfect justification for making life worse on employees under the pretense of progress. Which wouldn’t be quite as terrible if the implementation of AI in health care hadn’t been such a preposterous mess, ranging from mental health chatbots doling out dangerously inaccurate advice, to AI health insurance bots that make error-prone judgements a good 90 percent of the time.

AI has great potential in imaging analysis. But while it can help streamline analysis and solve some errors, it may introduce entirely new ones if not adopted with caution. Concern on this front can often be misrepresented as being anti-technology or anti-innovation by health care hardware technology companies again prioritizing quarterly returns over the safety of patients.

Implementing this kind of transformative but error-prone tech in an industry where lives are on the line requires patience, intelligent planning, broad consultation with every level of employee, and competent regulatory guidance, none of which are American strong suits of late.

  • ✇I, Cringely

If you want to reduce ChatGPT mediocrity, do it promptly

By: Robert X. Cringely
February 7, 2023 at 12:32

My son Cole, pictured here as a goofy kid many years ago, is now six feet six inches tall and in college. Cole needed a letter of recommendation recently so he turned to an old family friend who, in turn, used ChatGPT to generate the letter, which he thought was remarkably good. As a guy who pretends to write for a living, I read it differently. ChatGPT’s letter was facile but empty, the type of letter you would write for someone you’d never met. It said almost nothing about Cole other than that he’s a good kid. Artificial Intelligence is good for certain things, but blind letters of reference aren’t among them.

The key problem here has to do with Machine Learning. ChatGPT’s language model is nuanced, but contains no data at all specific to either my friend the lazy reference writer or my son the reference needer. Even if ChatGPT were allowed access to my old friend’s email boxes, it would only learn about his style and almost nothing about Cole, with whom he’s communicated, I think, twice.

If you think ChatGPT is the answer to some unmet personal need, it probably isn’t unless mediocrity is good enough or you are willing to share lots of private data — an option that I don’t think ChatGPT yet provides.

Then yesterday I learned a lesson from super-lawyer Neal Katyal who tweeted that he asked ChatGPT to write a specific 1000-word essay “in the style of Neal Katyal.” The result, he explained, was an essay that was largely wrong on the facts but read like he had written it.

What I learned from this was that there is a valuable business in writing prompts for Large Language Models like ChatGPT (many more are coming). I was stunned that it only required adding the words “in the style of Bob Cringely” to clone me. Until then I thought personalizing LLMs cost thousands, maybe millions (ChatGPT reportedly cost $2.25 million to train).
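
For anyone who wants to reproduce the trick, here is a minimal sketch using OpenAI’s current Python client (the API has changed since this was written in early 2023, and the model name is only an example):

    # Minimal sketch of the "in the style of ..." prompt trick.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model; pick whatever is current
        messages=[{
            "role": "user",
            "content": ("Write a 1000-word essay on the future of Moore's Law "
                        "in the style of Bob Cringely."),
        }],
    )
    print(response.choices[0].message.content)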

So where Google long ago trained us how to write queries, these Large Language Models will soon train us to write prompts to achieve our AI goals. In these cases we’re asking ChatGPT or Google’s Bard or Baidu’s Ernie or whatever LLM to temporarily forget about something, but that’s unlikely to give the LLMs better overall judgement.

Part of the problem with prompt engineering is that it is still at the spell-casting / magical incantation stage: no one really understands the underlying general principles behind what makes a good prompt for getting a given kind of answer. Work here is very preliminary and will probably vary greatly from LLM to LLM.

A logical solution to this problem might be to write a prompt that excludes unwanted information like racism while simultaneously including local data from your PC (called fine-tuning in the LLM biz), which would require API calls that to my knowledge haven’t yet been published. But once they are published, just imagine the new tools that could be created.
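
Those API calls have since been published. As one concrete illustration, OpenAI’s fine-tuning endpoint now accepts JSONL chat examples like the sketch below; everything in it, from the file name to the training text, is made up:

    # Sketch of what such API calls look like now that they exist: OpenAI's
    # fine-tuning endpoint takes JSONL chat examples. Illustrative only.
    import json
    from openai import OpenAI

    examples = [
        {"messages": [
            {"role": "system", "content": "You write in the style of Bob Cringely."},
            {"role": "user", "content": "Summarize this week's tech news."},
            {"role": "assistant", "content": "As I predicted over a decade ago, ..."},
        ]},
    ]

    with open("train.jsonl", "w") as f:
        for example in examples:
            f.write(json.dumps(example) + "\n")

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    uploaded = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
    job = client.fine_tuning.jobs.create(
        training_file=uploaded.id,
        model="gpt-4o-mini-2024-07-18",  # an example of a tunable model snapshot
    )
    print(job.id)

(The “local data from your PC” half of the wish is more often handled today by retrieving documents into the prompt rather than by retraining, but the tooling opportunity is the same.)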

I believe there is a big opportunity to apply Artificial Intelligence to teaching, for example. While this also means applying AI to education in general, my desired path is through teachers, whom I see as having been failed by educational IT, which makes their jobs harder, not easier. No wonder teachers hate IT.

The application of Information Technology to primary and secondary education has mainly involved scheduling and records. The master class schedule is in a computer. Grades are in another. And graduation requirements are handled by a database that spans the two, integrating attendance. Whether this is one vendor or up to four, the idea is generally to give the principal and school board daily snapshots of where everything stands. In this model the only place for teachers is data entry.

These systems require MORE teacher work, not less. And that leads to resentment and disappointment all around. It’s garbage-in, garbage-out as IT systems try to impose daily metrics on activities that were traditionally measured in weeks. I as a parent get mad when the system says my kid is failing when in fact it means someone forgot to upload grades, or even forgot to grade the work at all.

If report cards come out every six weeks, it would be nice to know halfway through that my kid was struggling, but the current systems we have been exposed to don’t do that. All they do is advertise, in excruciating and useless detail, that the system itself isn’t working right.

How could IT actually help teachers?

Look at Snorkel AI in Redwood City, CA for example. They are developing super-low-cost Machine Learning tools for Enterprise, not education, mainly because for education they can’t identify a customer.

I think the customer here is the teacher. This may sound odd, but understand that teachers aren’t well-served by IT to this point because they aren’t viewed as customers. They have no clout in the system. I chose to use the word clout rather than power or money because it better characterizes the teacher’s position as someone essential to the process but also both a source of thrust and drag.

I envision a new system where teachers can run their paperwork (both cellulose-based and electronic) through an AI that does a combination of automatically storing and classifying everything while also taking a first hack at grading. The AI comes to reflect mainly the values and methods of the individual teacher, which is new, and might keep more of them from quitting.
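
As an entirely hypothetical sketch of that idea: a small model fit on a teacher’s own past grading decisions is what would make the system reflect the individual teacher’s values rather than a generic rubric. Real paperwork would need OCR and far more examples; the data here is invented:

    # Hypothetical sketch: a tiny model learns one teacher's own labels from
    # past paperwork and proposes a "first hack" grade on new submissions.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Assignments this teacher has already graded (the supervision signal).
    graded_texts = [
        "Clear thesis, cites two primary sources, strong conclusion.",
        "No thesis statement, repeats the prompt, ends abruptly.",
        "Good sources but the argument wanders; solid conclusion.",
        "Plagiarized opening paragraph, no citations.",
    ]
    teacher_grades = ["A", "C", "B", "F"]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(graded_texts, teacher_grades)

    # A new submission: the system files it and proposes a provisional grade
    # for the teacher to confirm or override.
    new_submission = ["Strong thesis and sources, but the conclusion trails off."]
    print(model.predict(new_submission))        # first-hack grade
    print(model.predict_proba(new_submission))  # confidence, to flag uncertain cases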

Next column: AI and Moore’s Law.
