Nurses Say Hospital Adoption Of Half-Cooked ‘AI’ Is Reckless

By: Karl Bode

We’ve noted repeatedly that while “AI” (large language models) holds a lot of potential, the rushed implementation of half-assed early variants is causing no shortage of headaches across journalism, media, health care, and other sectors. In part because the kind of terrible brunchlord managers in charge of many institutions primarily see AI as a way to cut corners and attack labor.

It’s been a particular problem in healthcare, where broken “AI” is being layered on top of already broken systems. Like in insurance, where error-prone automation, programmed from the ground up to prioritize money over health, is incorrectly denying essential insurance coverage to the elderly.

Last week, hundreds of nurses protested in front of Kaiser Permanente over the sloppy implementation of AI in hospital systems. Their primary concern: that systems incapable of empathy are being integrated into an already dysfunctional sector without much thought toward patient care:

“No computer, no AI can replace a human touch,” said Amy Grewal, a registered nurse. “It cannot hold your loved one’s hand. You cannot teach a computer how to have empathy.”

There are certainly roles automation can play in easing strain on a sector rife with burnout after COVID, particularly when it comes to administrative tasks. The concern, as with other industries dominated by executives with poor judgment, is that this is being used as a justification by for-profit hospital systems to cut corners further. From a National Nurses United blog post (spotted by 404 Media):

“Nurses are not against scientific or technological advancement, but we will not accept algorithms replacing the expertise, experience, holistic, and hands-on approach we bring to patient care,” they added.

Kaiser Permanente, for its part, insists it’s simply leveraging “state-of-the-art tools and technologies that support our mission of providing high-quality, affordable health care to best meet our members’ and patients’ needs.” The company claims its “Advance Alert” AI monitoring system — which algorithmically analyzes patient data every hour — has the potential to save upwards of 500 lives a year.

The problem is that healthcare giants’ primary obligation no longer appears to reside with patients, but with their financial results. That’s true even of non-profit healthcare providers. It shows up in the form of cut corners, worse service, and an assault on already over-taxed labor via lower pay and higher workloads (curiously, it never seems to impact outsized high-level executive compensation).

AI provides companies the perfect justification for making life worse for employees under the pretense of progress. Which wouldn’t be quite as terrible if the implementation of AI in health care hadn’t been such a preposterous mess, ranging from mental health chatbots doling out dangerously inaccurate advice, to AI health insurance bots that make error-prone judgments a good 90 percent of the time.

AI has great potential in imaging analysis. But while it can help streamline analysis and catch some errors, it may introduce entirely new ones if not adopted with caution. Concern on this front is often misrepresented as anti-technology or anti-innovation sentiment by health care technology companies that, once again, prioritize quarterly returns over patient safety.

Implementing this kind of transformative but error-prone tech in an industry where lives are on the line requires patience, intelligent planning, broad consultation with every level of employee, and competent regulatory guidance, none of which are American strong suits of late.