Sam Altman accused of being shady about OpenAI's safety efforts

By Ashley Belanger | 2 August 2024, 20:08
Sam Altman, chief executive officer of OpenAI, during an interview at Bloomberg House on the opening day of the World Economic Forum (WEF) in Davos, Switzerland, on Tuesday, Jan. 16, 2024. (credit: Bloomberg / Contributor | Bloomberg)

OpenAI is facing increasing pressure to prove it's not hiding AI risks after whistleblowers alleged to the US Securities and Exchange Commission (SEC) that the AI company's non-disclosure agreements had illegally silenced employees from disclosing major safety concerns to lawmakers.

In a letter to OpenAI yesterday, Senator Chuck Grassley (R-Iowa) demanded evidence that OpenAI is no longer requiring agreements that could be "stifling" its "employees from making protected disclosures to government regulators."

Specifically, Grassley asked OpenAI to produce its current employment, severance, non-disparagement, and non-disclosure agreements to reassure Congress that those contracts don't discourage disclosures. That's critical, Grassley said, because regulators will need to rely on whistleblowers who expose emerging threats in order to shape effective AI policies that safeguard against existential AI risks as the technology advances.

