Elon Musk sues OpenAI, Sam Altman for making a “fool” out of him

Ashley Belanger | Ars Technica | August 5, 2024, 19:49
Elon Musk and Sam Altman share the stage in 2015, the same year that Musk alleged that Altman's "deception" began. (credit: Michael Kovac / Contributor | Getty Images North America)

After withdrawing his lawsuit in June for unknown reasons, Elon Musk has revived a complaint accusing OpenAI and its CEO Sam Altman of fraudulently inducing Musk to contribute $44 million in seed funding by promising that OpenAI would always open-source its technology and prioritize serving the public good over profits as a permanent nonprofit.

Instead, Musk alleged that Altman and his co-conspirators—"preying on Musk’s humanitarian concern about the existential dangers posed by artificial intelligence"—always intended to "betray" these promises in pursuit of personal gains.

As OpenAI's technology advanced toward artificial general intelligence (AGI), aiming to surpass human capabilities, "Altman set the bait and hooked Musk with sham altruism then flipped the script as the non-profit’s technology approached AGI and profits neared, mobilizing Defendants to turn OpenAI, Inc. into their personal piggy bank and OpenAI into a moneymaking bonanza, worth billions," Musk's complaint said.


Sam Altman accused of being shady about OpenAI’s safety efforts

Ashley Belanger | Ars Technica | August 2, 2024, 20:08
Sam Altman, chief executive officer of OpenAI, during an interview at Bloomberg House on the opening day of the World Economic Forum (WEF) in Davos, Switzerland, on Tuesday, Jan. 16, 2024. (credit: Bloomberg / Contributor | Bloomberg)

OpenAI is facing increasing pressure to prove it's not hiding AI risks after whistleblowers alleged to the US Securities and Exchange Commission (SEC) that the AI company's non-disclosure agreements had illegally silenced employees from disclosing major safety concerns to lawmakers.

In a letter to OpenAI yesterday, Senator Chuck Grassley (R-Iowa) demanded evidence that OpenAI is no longer requiring agreements that could be "stifling" its "employees from making protected disclosures to government regulators."

Specifically, Grassley asked OpenAI to produce its current employment, severance, non-disparagement, and non-disclosure agreements to reassure Congress that those contracts don't discourage disclosures. That's critical, Grassley said, because lawmakers will need to rely on whistleblowers who expose emerging threats in order to shape effective AI policies that safeguard against existential AI risks as these technologies advance.

