
FDA’s review of MDMA for PTSD highlights study bias and safety concerns

By Beth Mole

MDMA is now in the FDA's hands. (credit: Getty | PYMCA/Avalon)

The safety and efficacy data on the use of MDMA (aka ecstasy) for post-traumatic stress disorder therapy is "challenging to interpret," the Food and Drug Administration said in a briefing document posted Friday. The agency noted significant flaws in the design of the underlying clinical trials as well as safety concerns for the drug, particularly cardiovascular harms.

On Tuesday, June 4, the FDA will convene an advisory committee that will review the evidence and vote on MDMA's efficacy and whether its benefits outweigh its risks. The FDA does not have to follow the committee's recommendations, but it often does. If the FDA subsequently approves MDMA as part of treatment for PTSD, it would mark a significant shift in the federal government's stance on MDMA, as well as on psychedelics generally. Currently, the US Drug Enforcement Administration considers MDMA a Schedule I drug, defined as one with "no currently accepted medical use and a high potential for abuse." Approval would also offer a new treatment option for patients with PTSD, a disabling psychiatric condition that currently has few treatments.

As Ars has reported previously, the submission of MDMA for approval is based on two clinical trials. The first trial, published in Nature Medicine in 2021, involved 90 participants with moderate PTSD and found that MDMA-assisted psychotherapy significantly improved Clinician-Administered PTSD Scale for DSM-5 (CAPS-5) scores compared with participants who were given psychotherapy along with a placebo. In the second study, published in September in Nature Medicine, the finding held up among 104 participants with moderate or severe PTSD (73 percent had severe PTSD).


NPR's Katherine Maher Is Not Taking Questions About Her Tweets

Katherine Maher | Robby Soave / Reason

Who is Katherine Maher, and what does she really believe? The embattled NPR CEO had the opportunity on Wednesday to set the record straight regarding her views on intellectual diversity, "white silence," and whether Hillary Clinton (of all people) committed nonbinary erasure when she used the phrase boys and girls.

Unfortunately, during a recent appearance at the Carnegie Endowment for International Peace to discuss the journalism industry's war on disinformation, she repeatedly declined to give straight answers—instead offering up little more than platitudes about workplace best practices. I attended the event and submitted questions that the organizers effectively ignored.

That's a shame, because Maher's views certainly require clarity—especially now that longtime editor Uri Berliner has resigned from NPR and called out the publicly funded radio channel's CEO. In his parting statement, Berliner slammed Maher, saying that her "divisive views confirm the very problems" that he wrote about in his much-discussed article for Bari Weiss' Free Press.

Berliner's tell-all mostly took aim at specific examples of NPR being led astray by its deference to progressive shibboleths: the Hunter Biden laptop, COVID-19, etc. He implored his new boss—Maher's tenure as CEO had begun only about four weeks earlier—to correct NPR's lack of viewpoint diversity. That's probably a tall order, since Maher once tweeted that ideological diversity is "often a dog whistle for anti-feminist, anti-POC stories."

That Silicon Valley v Russia thread was pretty funny — until it got onto ideological diversity. In case it's not evident, in these parts that's often a dog whistle for anti-feminist, anti-POC stories about meritocracy. Maybe's not what the author meant. But idk, maybe it is?

— Katherine Maher (@krmaher) July 6, 2018

Indeed, Maher's past tweets would be hard to distinguish from satire if one randomly stumbled across them. Her earnest, uncompromising wokeness—land acknowledgments, condemnations of Western holidays, and so on—sounds like it was written by parody accounts such as The Babylon Bee or Titania McGrath. In her 2022 TED Talk, she faulted Wikipedia, where she worked at the time, for being a Eurocentric written reference that fails to take into account the oral histories of other peoples. More seriously, she seems to view the First Amendment as an inconvenient barrier to tackling "bad information" and "influence peddlers" online.

But interestingly, she did not reiterate any of these views during her appearance at the Carnegie Endowment on Wednesday. On the contrary, she gave entirely nonspecific answers about diversity in the newsroom. In fact, she barely said anything concrete about the subject of the discussion: disinformation.

When asked by event organizer Jon Bateman, a Carnegie senior fellow, to address the Berliner controversy, she said that she had never met him and was not responsible for the editorial policies of the newsroom.

"The newsroom is entirely independent," she said. "My responsibility is to ensure that we have the resources to do this work. We have a mandate to serve all Americans."

She repeated these lines over and over again. When asked more specifically about whether she thinks NPR is succeeding or failing at making different viewpoints welcome, she pointed to the audience and said that her mission was to expand the outlet's reach.

"Are we growing our audiences?" she asked. "That is so much more representative of how we are doing our job, because I am not in the newsroom."

Many of the people who are in the newsroom clearly had it out for Berliner. In a letter to Maher signed by 50 NPR staffers, employees called on her to make use of NPR's "DEI accountability committee" to silence internal criticism. Does Maher believe that a diversity, equity, and inclusion task force should vigorously root out heresy?

At the event, Maher did not directly take audience questions. Instead, audience members were asked to write out their questions and submit them via QR code. I asked her whether she stood by her previous tweet that maligned the concept of ideological diversity, as well as the other tweets that had recently made the news. Frustratingly, she offered no further clarity on these subjects.


This Week on Free Media

In the latest episode of our new media criticism show for Reason, Amber Duke and I discussed the Berliner situation in detail. We also reacted to a Bill Maher monologue on problems with liberal governance, tackled MSNBC's contempt for laundry-related liberty, and chided Sen. Tom Cotton (R–Ark.) for encouraging drivers to throw in-the-way protesters off bridges.


This Week on Rising

Briahna Joy Gray and I argued about the Berliner situation—and much else—on Rising this week. Watch below.


Worth Watching (Follow-Up)

I have finally finished Netflix's 3 Body Problem, which went off the rails a bit in its last few episodes. I still highly recommend the fifth episode, "Judgment Day," for including one of the most haunting television sequences of the year thus far.

But I have questions about the aliens. (Spoilers to follow.)

In 3 Body Problem, a group of scientists must prepare Earth for war against the San Ti, an advanced alien race that will arrive in 400 years. The San Ti have sent advanced technology to Earth that allows them to closely monitor humans and co-opt technology—screens, phones, presumably weapons systems—for their own use. We are led to believe that the San Ti want to kill humans because unlike them, we are liars. Eccentric oil CEO Mike Evans (Jonathan Pryce), a human fifth columnist who communicates with the San Ti, appears to doom our species when he tells the aliens the story of Little Red Riding Hood. The San Ti are so offended by the Big Bad Wolf's deceptions that they decide earthlings can't ever be trusted, and should instead be destroyed. "We cannot coexist with liars," says the San Ti's emissary. "We are afraid of you."

The scene in which Evans realizes what he has done makes for gripping television but… I'm sorry, it's nonsensical. Clearly the San Ti already understand deception, misdirection, and the difference between a made-up story and what's really happening. After all, they were the ones who equipped Evans and his collaborators with the virtual reality video game technology they use to recruit more members. The game does not literally depict the fate of the San Ti's home world; it uses metaphor, exaggeration, and human imagery to convey San Ti history. It doesn't make any sense that they would be utterly flummoxed by the Big Bad Wolf.

Then, in the season finale, the San Ti use trickery to taunt the human leader of the resistance. They are the liars, but no one ever calls them out on this.


AI Is Being Built on Dated, Flawed Motion-Capture Data

Diversity of thought in industrial design is crucial: If no one thinks to design a technology for multiple body types, people can get hurt. The invention of seat belts is an oft-cited example of this phenomenon, as they were designed based on crash dummies that had traditionally male proportions, reflecting the bodies of the team members working on them.

The same phenomenon is now at work in the field of motion-capture technology. Throughout history, scientists have endeavored to understand how the human body moves. But how do we define the human body? Decades ago, many studies assessed "healthy male" subjects; others used surprising models like dismembered cadavers. Even now, some modern studies used in the design of fall-detection technology rely on methods like hiring stunt actors who pretend to fall.

Over time, a variety of flawed assumptions have become codified into standards for motion-capture data that’s being used to design some AI-based technologies. These flaws mean that AI-based applications may not be as safe for people who don’t fit a preconceived “typical” body type, according to new work recently published as a preprint and set to be presented at the Conference on Human Factors in Computing Systems in May.

“We dug into these so-called gold standards being used for all kinds of studies and designs, and many of them had errors or were focused on a very particular type of body,” says Abigail Jacobs, coauthor of the study and an assistant professor at the University of Michigan’s School of Information and the Center for the Study of Complex Systems. “We want engineers to be aware of how these social aspects become coded into the technical—hidden in mathematical models that seem objective or infrastructural.”

It’s an important moment for AI-based systems, Jacobs says, as we may still have time to catch and avoid potentially dangerous assumptions from being codified into applications informed by AI.

Motion-capture systems create representations of bodies by collecting data from sensors placed on the subjects, logging how these bodies move through space. These schematics become part of the tools that researchers use, such as open-source libraries of movement data and measurement systems that are meant to provide baseline standards for how human bodies move. Developers are increasingly using these baselines to build all manner of AI-based applications: fall-detection algorithms for smartwatches and other wearables, self-driving vehicles that need to detect pedestrians, computer-generated imagery for movies and video games, manufacturing equipment that interacts safely with human workers, and more.

“Many researchers don’t have access to advanced motion-capture labs to collect data, so we’re increasingly relying on benchmarks and standards to build new tech,” Jacobs says. “But when these benchmarks don’t include representations of all bodies, especially those people who are likely to be involved in real-world use cases—like elderly people who may fall—these standards can be quite flawed.”

She hopes we can learn from past mistakes, such as cameras that didn’t accurately capture all skin tones and seat belts and airbags that didn’t protect people of all shapes and sizes in car crashes.

The Cadaver in the Machine

Jacobs and her collaborators from Cornell University, Intel, and the University of Virginia performed a systematic literature review of 278 motion-capture-related studies. In most cases, they concluded, motion-capture systems captured the motion of “those who are male, white, ‘able-bodied,’ and of unremarkable weight.”

And sometimes these white male bodies were dead. In reviewing works dating back to the 1930s and running through three historical eras of motion-capture science, the researchers studied projects that were influential in how scientists of the time understood the movement of body segments. A seminal 1955 study funded by the Air Force, for example, used overwhelmingly white, male, and slender or athletic bodies to create the optimal cockpit based on pilots’ range of motion. That study also gathered data from eight dismembered cadavers.

A full 20 years later, a study prepared for the National Highway Traffic Safety Administration used similar methods: Six dismembered male cadavers were used to inform the design of impact-protection systems in vehicles.


Although those studies are many decades old, these assumptions became baked in over time. Jacobs and her colleagues found many examples of these outdated inferences being passed down to later studies and ultimately still influencing modern motion-capture studies.

“If you look at technical documents of a modern system in production, they’ll explain the ‘traditional baseline standards’ they’re using,” Jacobs says. “By digging through that, you quickly start hopping through time: OK, that’s based on this prior study, which is based on this one, which is based on this one, and eventually we’re back to the Air Force study designing cockpits with frozen cadavers.”

The components that underpin technological best practices are “man-made—intentional emphasis on man, rather than human—often preserving biases and inaccuracies from the past,” says Kasia Chmielinski, project lead of the Data Nutrition Project and a fellow at Stanford University’s Digital Civil Society Lab. “Thus historical errors often inform the ‘neutral’ basis of our present-day technological systems. This can lead to software and hardware that does not work equally for all populations, experiences, or purposes.”

These problems may hinder engineers who want to make things right, Chmielinski says. “Since many of these issues are baked into the foundational elements of the system, teams innovating today may not have quick recourse to address bias or error, even if they want to,” they say. “If you’re building an application that uses third-party sensors, and the sensors themselves have a bias in what they detect or do not detect, what is the appropriate recourse?”

Jacobs says that engineers must interrogate their sources of “ground truth” and confirm that the gold standards they measure against are, in fact, gold. Technicians must consider these social evaluations to be part of their jobs in order to design technologies for all.

“If you go in saying, ‘I know that human assumptions get built in and are often hidden or obscured,’ that will inform how you choose what’s in your dataset and how you report it in your work,” Jacobs says. “It’s sociotechnical, and technologists need that lens to be able to say: My system does what I say it does, and it doesn’t create undue harm.”
