FreshRSS

✇ Android Authority · Vinayak Guha

OpenAI has developed a 99.9% accuracy tool to detect ChatGPT content, but you are safe for now

August 6, 2024, 04:07
  • OpenAI has developed a method to detect when someone uses ChatGPT to write essays or assignments.
  • The method utilizes a watermarking system that is 99.9% effective at identifying AI-generated text.
  • However, the tool has not yet been rolled out due to internal concerns and mixed reactions within the company.

When OpenAI launched ChatGPT towards the end of 2022, educators expressed concerns that students would use the platform to cheat on assignments and tests. To prevent this, numerous companies have rolled out AI detection tools, but they haven’t been the best at producing reliable results.

OpenAI has now revealed that it has developed a method to detect when someone uses ChatGPT to write (via The Washington Post). The technology is said to be 99.9% effective and essentially uses a system capable of predicting what word or phrase (called “token”) would come next in a sentence. The AI-detection tool slightly alters the tokens, which then leaves a watermark. This watermark is undetectable to the human eye but can be spotted by the tool in question.
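
The report does not spell out how OpenAI's system works internally, but the general idea it describes, quietly biasing which tokens the model picks so the output carries a hidden statistical signature, can be sketched in a few lines. The toy vocabulary, hash-based "green list," and bias strength below are illustrative assumptions, not OpenAI's implementation:

```python
import hashlib
import math
import random

# Toy sketch of sampling-bias ("green list") watermarking, the general technique
# described in the research literature. This is NOT OpenAI's actual scheme: the
# tiny vocabulary, the hash-based partition, and the bias strength are made up.

VOCAB = ["the", "cat", "sat", "on", "a", "mat", "and", "dog", "ran", "fast"]
BIAS = 2.0  # extra logit weight quietly added to "green" tokens

def green_list(prev_token):
    """Deterministically mark half the vocabulary as 'green', seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = sorted(VOCAB)
    rng.shuffle(shuffled)
    return set(shuffled[: len(VOCAB) // 2])

def sample_next(prev_token, logits):
    """Sample the next token after nudging green-list tokens upward."""
    greens = green_list(prev_token)
    weights = [math.exp(logits[t] + (BIAS if t in greens else 0.0)) for t in VOCAB]
    return random.choices(VOCAB, weights=weights, k=1)[0]

# With a (fake) flat model distribution, the watermark comes purely from the bias.
tokens = ["the"]
for _ in range(20):
    tokens.append(sample_next(tokens[-1], {t: 0.0 for t in VOCAB}))
print(" ".join(tokens))
```

Because the bias only shifts the choice among similarly plausible tokens, the text still reads naturally; the signature emerges only statistically, across many tokens.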

✇ Boing Boing · Rob Beschizza

OpenAI could watermark the text ChatGPT generates, but hasn't

August 5, 2024, 15:19
Photo: ltummy / Shutterstock

OpenAI has developed a system for "watermarking" the output that ChatGPT generates, reports The Wall Street Journal, but has chosen not to deploy it. Google has deployed such a system with Gemini.

OpenAI has a method to reliably detect when someone uses ChatGPT to write an essay or research paper.


The post OpenAI could watermark the text ChatGPT generates, but hasn't appeared first on Boing Boing.

✇ Ars Technica - All content · Samuel Axon

OpenAI has the tech to watermark ChatGPT text—it just won’t release it

August 6, 2024, 00:12
Image: OpenAI logo displayed on a phone screen and ChatGPT website displayed on a laptop screen. (credit: Getty Images)

According to The Wall Street Journal, there's internal conflict at OpenAI over whether to release a watermarking tool that would allow people to test whether a given piece of text was generated by ChatGPT.

To deploy the tool, OpenAI would make tweaks to ChatGPT that would lead it to leave a trail in the text it generates that can be detected by a special tool. The watermark would be undetectable by human readers without the tool, and the company's internal testing has shown that it does not negatively affect the quality of outputs. The detector would be accurate 99.9 percent of the time. It's important to note that the watermark would be a pattern in the text itself, meaning it would be preserved if the user copies and pastes the text or even if they make modest edits to it.
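
As a rough illustration of the detection side, again assuming a sampling-bias ("green list") watermark of the kind described in the research literature rather than OpenAI's actual tool, a checker would recompute the same secret token partition and test whether "green" tokens occur far more often than the roughly 50 percent expected in unwatermarked text:

```python
import hashlib
import math
import random

# Detection side of the same toy "green list" watermark sketched earlier, again an
# illustration of the general idea, not OpenAI's tool. Only a party that knows the
# secret partition rule (here, green_list) can run this check.

VOCAB = ["the", "cat", "sat", "on", "a", "mat", "and", "dog", "ran", "fast"]

def green_list(prev_token):
    """Must reproduce exactly the partition used at generation time."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = sorted(VOCAB)
    rng.shuffle(shuffled)
    return set(shuffled[: len(VOCAB) // 2])

def watermark_z_score(tokens):
    """How far the green-token count deviates from the ~50% expected in unwatermarked text."""
    n = len(tokens) - 1
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_list(prev))
    expected, stddev = 0.5 * n, math.sqrt(0.25 * n)
    return (hits - expected) / stddev

suspect = "the cat sat on the mat and the dog ran fast".split()
print(round(watermark_z_score(suspect), 2))  # large positive z-scores suggest watermarked text
```

This also suggests why such a watermark survives copy-paste and modest edits: the signal is spread across the whole token sequence rather than hidden in formatting or metadata, and only someone who holds the partition rule can run the check.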

Some OpenAI employees have campaigned for the tool's release, but others believe that would be the wrong move, citing a few specific problems.


✇ Ars Technica - All content · Ashley Belanger

Elon Musk sues OpenAI, Sam Altman for making a “fool” out of him

August 5, 2024, 19:49
Image: Elon Musk and Sam Altman share the stage in 2015, the same year that Musk alleged that Altman's "deception" began. (credit: Michael Kovac / Contributor | Getty Images North America)

After withdrawing his lawsuit in June for unknown reasons, Elon Musk has revived a complaint accusing OpenAI and its CEO Sam Altman of fraudulently inducing Musk to contribute $44 million in seed funding by promising that OpenAI would always open-source its technology and prioritize serving the public good over profits as a permanent nonprofit.

Instead, Musk alleged that Altman and his co-conspirators—"preying on Musk’s humanitarian concern about the existential dangers posed by artificial intelligence"—always intended to "betray" these promises in pursuit of personal gains.

As OpenAI's technology advanced toward artificial general intelligence (AGI) and strove to surpass human capabilities, "Altman set the bait and hooked Musk with sham altruism then flipped the script as the non-profit’s technology approached AGI and profits neared, mobilizing Defendants to turn OpenAI, Inc. into their personal piggy bank and OpenAI into a moneymaking bonanza, worth billions," Musk's complaint said.


✇ GAME PRESS · Mobile Press

OpenAI employees are plagued by concerns about overall safety

July 15, 2024, 08:20


OpenAI is the leader in the race to develop AI as intelligent as a human. Yet employees keep appearing in the press and on podcasts to voice serious concerns about safety at the $80 billion nonprofit research laboratory.

The latest report comes from The Washington Post, where an anonymous source claims that OpenAI rushed through safety testing and celebrated its product before ensuring its safety.

"They planned the launch afterparty before they knew whether the launch was safe," an anonymous employee told The Washington Post. "We basically failed at the process."

Safety problems are mounting at OpenAI, and they seem to keep piling up. Current and former OpenAI employees recently signed an open letter demanding better safety and transparency practices from the startup, not long after its safety team was dissolved following the departure of co-founder Ilya Sutskever. Jan Leike, a key OpenAI researcher, resigned shortly afterward, writing in a post that "safety culture and processes have taken a backseat to shiny products" at the company.

Safety is at the core of OpenAI's charter, which includes a clause stating that OpenAI will assist other organizations in advancing safety if AGI is reached by a competitor, rather than continuing to compete. It claims to be dedicated to solving the safety problems inherent in such a large, complex system. OpenAI even keeps its proprietary models private, rather than open, in the name of safety (a choice that has drawn jabs and lawsuits). Yet the warnings make it sound as though safety has been deprioritized, despite how central it is to the company's culture and structure.

"We're proud to provide the most capable and safest AI systems, and we believe in our scientific approach to addressing risk," OpenAI spokesperson Taya Christianson said in a statement to The Verge. "Given the significance of this technology, rigorous debate is crucial, and we will continue to engage with governments, civil society, and other communities around the world in service of our mission."

According to OpenAI and others studying the emerging technology, the safety stakes are enormous. "Current frontier AI development poses urgent and growing risks to national security," says a report commissioned by the US State Department in March. "The rise of advanced AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons."

The alarm bells at OpenAI also follow last year's boardroom coup that briefly ousted CEO Sam Altman. The board said he was removed for "not being consistently candid in his communications," which led to an investigation that did little to reassure staff.

OpenAI spokesperson Lindsey Held told the Post that the GPT-4o launch "didn't cut corners" on safety, but another unnamed company representative acknowledged that the safety review timeline was compressed to a single week. "We are rethinking our whole way of doing it," the anonymous representative told the Post. "This just wasn't the best way to do it."

In the face of rolling controversies (remember the Her incident?), OpenAI has tried to calm fears with a few well-timed announcements. This week it announced that it is teaming up with Los Alamos National Laboratory to explore how advanced AI models such as GPT-4o can safely assist in biological research, and in the same announcement it repeatedly pointed to Los Alamos's own safety record. The next day, an anonymous spokesperson told Bloomberg that OpenAI had created an internal scale to track the progress its large language models are making toward artificial general intelligence.

This week's safety-focused announcements from OpenAI look like defensive window dressing in the face of growing criticism of its safety practices. OpenAI is clearly in the hot seat, but public relations efforts alone will not be enough to protect the company. What really matters is the potential impact on people outside the Silicon Valley bubble if OpenAI fails to keep developing AI under strict safety protocols, as it internally claims to: ordinary people have no say in the development of privatized AI, yet they have no choice in how they will be protected from OpenAI's creations.

"AI tools can be revolutionary," FTC chair Lina Khan told Bloomberg in November. But she said there are concerns "as of right now" that "the critical inputs of these tools are controlled by a relatively small number of companies."

If the many claims against its safety protocols are true, this certainly raises serious questions about OpenAI's fitness for the role of steward of AGI, a role the organization has essentially assigned to itself. Allowing a single group in San Francisco to control a potentially society-altering technology is cause for concern, and even within its own ranks the demand for transparency and safety is now more urgent than ever.

The article OpenAI employees are plagued by concerns about overall safety first appeared on MOBILE PRESS.

The article OpenAI employees are plagued by concerns about overall safety first appeared on GAME PRESS.

Here's What The Video Game Actors Strike Might Mean For Fortnite And Other Games

August 1, 2024, 19:30

Thousands of video game actors went on strike on July 26 for the first time since 2017. The fight is over AI protections and other issues in contract negotiations with some of the biggest studios and publishers, and will halt work from SAG-AFTRA members on future projects, as well as possibly keep them from promotion…


✇ Kotaku · Ethan Gach

Nvidia Just Grew By $329 Billion In A Single Day

July 31, 2024, 23:11

Nvidia started as a humble graphics card maker. Now it’s riding the tech industry’s AI obsession to absurd new heights. The company added $329 billion to its market cap on Wall Street today after a record-breaking day of stock trading, Bloomberg reports.


✇ Ars Technica - All content · Ashley Belanger

Sam Altman accused of being shady about OpenAI’s safety efforts

August 2, 2024, 20:08
Image: Sam Altman, chief executive officer of OpenAI, during an interview at Bloomberg House on the opening day of the World Economic Forum (WEF) in Davos, Switzerland, on Tuesday, Jan. 16, 2024. (credit: Bloomberg / Contributor | Bloomberg)

OpenAI is facing increasing pressure to prove it's not hiding AI risks after whistleblowers alleged to the US Securities and Exchange Commission (SEC) that the AI company's non-disclosure agreements had illegally silenced employees from disclosing major safety concerns to lawmakers.

In a letter to OpenAI yesterday, Senator Chuck Grassley (R-Iowa) demanded evidence that OpenAI is no longer requiring agreements that could be "stifling" its "employees from making protected disclosures to government regulators."

Specifically, Grassley asked OpenAI to produce current employment, severance, non-disparagement, and non-disclosure agreements to reassure Congress that contracts don't discourage disclosures. That's critical, Grassley said, so that it will be possible to rely on whistleblowers exposing emerging threats to help shape effective AI policies safeguarding against existential AI risks as technologies advance.


✇ Semiconductor Engineering · The SE Staff

Chip Industry Week In Review

June 21, 2024, 09:01

BAE Systems and GlobalFoundries are teaming up to strengthen the supply of chips for national security programs, aligning technology roadmaps and collaborating on innovation and manufacturing. Focus areas include advanced packaging, GaN-on-silicon chips, silicon photonics, and advanced technology process development.

Onsemi plans to build a $2 billion silicon carbide production plant in the Czech Republic. The site would produce smart power semiconductors for electric vehicles, renewable energy technology, and data centers.

The global chip manufacturing industry is projected to boost capacity by 6% in 2024 and 7% in 2025, reaching 33.7 million 8-inch (200mm) wafers per month, according to SEMI's latest World Fab Forecast report. Leading-edge capacity for 5nm nodes and below is expected to grow by 13% in 2024, driven by AI demand for data center applications. Additionally, Intel, Samsung, and TSMC will begin producing 2nm chips using gate-all-around (GAA) FETs next year, boosting leading-edge capacity by 17% in 2025.

At the IEEE Symposium on VLSI Technology & Circuits, imec introduced:

  • Functional CMOS-based CFETs with stacked bottom and top source/drain contacts.
  • CMOS-based 56Gb/s zero-IF D-band beamforming transmitters to support next-gen short-range, high-speed wireless services at frequencies above 100GHz.
  • ADCs for base stations and handsets, a key step toward scalable, high-performance beyond-5G solutions, such as cloud-based AI and extended reality apps.

Quick links to more news:

Global
In-Depth
Market Reports
Education and Training
Security
Product News
Research
Events and Further Reading


Global

Wolfspeed postponed plans to construct a $3 billion chip plant in Germany, underscoring the EU's challenges in boosting semiconductor production, reports Reuters. The North Carolina-based company cited reduced capital spending due to a weakened EV market, saying it now aims to start construction in mid-2025, two years later than originally planned.

Micron is building a pilot production line for high-bandwidth memory (HBM) in the U.S., and considering HBM production in Malaysia to meet growing AI demand, according to a Nikkei report. The company is expanding HBM R&D facilities in Boise, Idaho, and eyeing production capacity in Malaysia, while also enhancing its largest HBM facility in Taichung, Taiwan.

Kioxia restored its Yokkaichi and Kitakami plants in Japan to full capacity, ending production cuts as the memory market recovers, according to Nikkei. The company, which is focusing on NAND flash production, has secured new bank credit support, including refinancing a ¥540 billion loan and establishing a ¥210 billion credit line. Kioxia had reduced output by more than 30% in October 2022 due to weak smartphone demand.

Europe's NATO Innovation Fund announced its first direct investments, which include semiconductor materials. Twenty-three NATO allies co-invested in the fund of over $1 billion devoted to addressing critical defense and security challenges.

The second meeting of the U.S.-India Initiative on Critical and Emerging Technology (iCET) was held in New Delhi, with various funding and initiatives announced to support semiconductor technology, next-gen telecommunications, connected and autonomous vehicles, ML, and more.

Amazon announced investments of €10 billion in Germany to drive innovation and support the expansion of its logistics network and cloud infrastructure.

Quantum Machines opened the Israeli Quantum Computing Center (IQCC) research facility, backed by the Israel Innovation Authority and located at Tel Aviv University. Also, Israel-based Classiq is collaborating with NVIDIA and BMW, using quantum computing to find the optimal automotive architecture of electrical and mechanical systems.

Global data center vacancy rates are at historic lows, and power is becoming harder to secure, according to a Siemens report featured on Broadband Breakfast. The company called for an influx of financing to find new ways to optimize data center technology and sustainability.


In-Depth

Semiconductor Engineering published its Manufacturing, Packaging & Materials newsletter this week.


Market Reports

Renesas completed its acquisition of Transphorm and will immediately start offering GaN-based power products and reference designs to meet the demand for wide-bandgap (WBG) chips.

Revenues for the top five wafer fab equipment (WFE) companies fell 9% YoY in Q1 2024, according to Counterpoint. This was partially offset by increased demand for NAND and DRAM, up 33% YoY, and strong growth in sales to China, up 116% YoY.

The SiC power devices industry saw robust growth in 2023, primarily driven by the BEV market, according to TrendForce. The top five suppliers, led by ST with a 32.6% market share and onsemi in second place, accounted for 91.9% of total revenue. However, the anticipated slowdown in BEV sales and weakening industrial demand are expected to significantly decelerate revenue growth in 2024. 

About 30% of vehicles produced globally will have E/E architectures with zonal controllers by 2032, according to McKinsey & Co. The market for automotive micro-components and logic semiconductors is predicted to reach $60 billion in 2032, and the overall automotive semiconductor market is expected to grow from $60 billion to $140 billion in the same period, at a 10% CAGR.

The automotive processor market generated US$20 billion in revenue in 2023, according to Yole. US$7.8 billion was from APUs and FPGAs and $12.2 billion was from MCUs. The ADAS and infotainment processors market was worth US$7.8 billion in 2023 and is predicted to grow to $16.4 billion by 2029 at a 13% CAGR. The market for ADAS sensing is expected to grow at a 7% CAGR.
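
As a quick sanity check of the growth figures quoted in the last two items, assuming 2023 as the base year and simple annual compounding (the helper below is just illustrative arithmetic):

```python
# Back-of-the-envelope check of the CAGR figures quoted above, assuming 2023 as
# the base year and simple annual compounding.

def project(value_billion, cagr, years):
    """Compound a starting value (in $B) at a constant annual growth rate."""
    return value_billion * (1 + cagr) ** years

# Overall automotive semiconductors: $60B (2023) at 10% CAGR over 9 years (to 2032)
print(round(project(60, 0.10, 9), 1))   # ~141.5, consistent with the ~$140B figure

# ADAS and infotainment processors: $7.8B (2023) at 13% CAGR over 6 years (to 2029)
print(round(project(7.8, 0.13, 6), 1))  # ~16.2, consistent with the ~$16.4B figure
```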


Security

The CHERI Alliance was established to drive adoption of memory safety and scalable software compartmentalization via the security technology CHERI, or Capability Hardware Enhanced RISC Instructions. Founding members include Capabilities Limited, Codasip, the FreeBSD Foundation, lowRISC, SCI Semiconductor, and the University of Cambridge.

In security research:

  • Japan and China researchers explored a NAND-XOR ring oscillator structure to design an entropy source architecture for a true random number generator (TRNG).
  • University of Toronto and Carleton University researchers presented a survey examining how hardware is applied to achieve security and how reported attacks have exploited certain defects in hardware.
  • University of North Texas and Texas Woman’s University researchers explored the potential of hardware security primitive Physical Unclonable Functions (PUF) for mitigation of visual deepfakes.
  • Villanova University researchers proposed the Boolean DERIVativE attack, which generalizes Boolean domain leakage.

Post-quantum cryptography firm PQShield raised $37 million in Series B funding.

Former OpenAI executive Ilya Sutskever, who quit over safety concerns, launched Safe Superintelligence Inc. (SSI).

EU industry groups warned the European Commission that its proposed cybersecurity certification scheme (EUCS) for cloud services should not discriminate against Amazon, Google, and Microsoft, reported Reuters.

Cyber Europe tested EU cyber preparedness in the energy sector by simulating a series of large-scale cyber incidents in an exercise organized by the European Union Agency for Cybersecurity (ENISA).

The Cybersecurity and Infrastructure Security Agency (CISA) issued a number of alerts/advisories.


Education and Training

New York non-profit NY CREATES and South Korea’s National Nano Fab Center partnered to develop a hub for joint research, aligned technology services, testbed support, and an engineer exchange program to bolster chips-centered R&D, workforce development, and each nation’s high-tech ecosystem.

New York and the Netherlands agreed on a partnership to promote sustainability within the semiconductor industry, enhance workforce development, and boost semiconductor R&D.

Rapidus is set to send 200 engineers to AI chip developer Tenstorrent in the U.S. for training over the next five years, reports Nikkei. This initiative, led by Japan’s Leading-edge Semiconductor Technology Center (LSTC), aims to bolster Japan’s AI chip industry.


Product News

UMC announced its 22nm embedded high voltage (eHV) technology platform for premium smartphone and mobile device displays. The 22eHV platform reduces core device power consumption by up to 30% compared to previous 28nm processes. Die area is reduced by 10% with the industry's smallest SRAM bit cells.

Alphawave Semi announced a new 9.2 Gbps HBM3E sub-system silicon platform capable of 1.2 terabytes per second. Based on the HBM3E IP, the sub-system is aimed at addressing the demand for ultra-high-speed connectivity in high-performance compute applications.

Movellus introduced the Aeonic Power product family for on-die voltage regulation, targeting the challenging area of power delivery.

Cadence partnered with Semiwise and sureCore to develop new cryogenic CMOS circuits with possible quantum computing applications. The circuits are based on modified transistors found in the Cadence Spectre Simulation Platform and are capable of processing analog, mixed-signal, and digital circuit simulation and verification at cryogenic temperatures.

Renesas launched R-Car Open Access (RoX), an integrated development platform for software-defined vehicles (SDVs), designed for Renesas R-Car SoCs and MCUs with tools for deployment of AI applications, reducing complexity and saving time and money for car OEMs and Tier 1s.

Infineon released industry-first radiation-hardened 1 and 2 Mb parallel interface ferroelectric-RAM (F-RAM) nonvolatile memory devices, with up to 120 years of data retention at 85 degrees Celsius, along with random access and full memory write at bus speeds. Plus, a CoolGaN Transistor 700 V G4 product family for efficient power conversion up to 700 V, ideal for consumer chargers and notebook adapters, data center power supplies, renewable energy inverters, and more.

Ansys adopted NVIDIA’s Omniverse application programming interfaces for its multi-die chip designers. Those APIs will be used for 5G/6G, IoT, AI/ML, cloud computing, and autonomous vehicle applications. The company also announced ConceptEV, an SaaS solution for automotive concept design for EVs.

Fig. 1: Field visualization of 3D-IC with Omniverse. Source: Ansys

QP Technologies announced a new dicing saw for its manufacturing line that can process a full cassette of 300mm wafers 7% faster than existing tools, improving throughput and productivity.

NXP introduced its SAF9xxx of audio DSPs to support the demand for AI-based audio in software-defined vehicles (SDVs) by using Cadence’s Tensilica HiFi 5 DSPs combined with dedicated neural-network engines and hardware-based accelerators.

Avionyx, a provider of software lifecycle engineering in the aerospace and safety-critical systems sector, partnered with Siemens and will leverage its Polarion application lifecycle management (ALM) tool. Also, Dovetail Electric Aviation adopted Siemens Xcelerator to support sustainable aviation.


Research

Researchers from imec and KU Leuven released a 70+ page paper, "Selecting Alternative Metals for Advanced Interconnects," addressing interconnect resistance and reliability.

A comprehensive review article — “Future of plasma etching for microelectronics: Challenges and opportunities” — was created by a team of experts from the University of Maryland, Lam Research, IBM, Intel, and many others.

Researchers from the Institut Polytechnique de Paris's Condensed Matter Physics Laboratory developed an approach to investigate defects in semiconductors. The team "determined the spin-dependent electronic structure linked to defects in the arrangement of semiconductor atoms," the first time this structure has been measured, according to a release.

Lawrence Berkeley National Laboratory-led researchers developed a small enclosed chamber that can hold all the components of an electrochemical reaction, which can be paired with transmission electron microscopy (TEM) to generate precise views of a reaction at atomic scale, and can be frozen to stop the reaction at specific time points. They used the technique to study a copper catalyst.

The Food and Drug Administration (FDA) approved a clinical trial to test a device with 1,024 nanoscale sensors that records brain activity during surgery, developed by engineers at the University of California San Diego (UC San Diego).


Events and Further Reading

Find upcoming chip industry events here, including:

Event | Date | Location
Standards for Chiplet Design with 3DIC Packaging (Part 2) | Jun 21 | Online
DAC 2024 | Jun 23 – 27 | San Francisco
RISC-V Summit Europe 2024 | Jun 24 – 28 | Munich
Leti Innovation Days 2024 | Jun 25 – 27 | Grenoble, France
ISCA 2024 | Jun 29 – Jul 3 | Buenos Aires, Argentina
SEMICON West | Jul 9 – 11 | San Francisco
Flash Memory Summit | Aug 6 – 8 | Santa Clara, CA
USENIX Security Symposium | Aug 14 – 16 | Philadelphia, PA
Hot Chips 2024 | Aug 25 – 27 | Stanford University
Find All Upcoming Events Here

Upcoming webinars are here.

Semiconductor Engineering’s latest newsletters:

Automotive, Security and Pervasive Computing
Systems and Design
Low Power-High Performance
Test, Measurement and Analytics
Manufacturing, Packaging and Materials


The post Chip Industry Week In Review appeared first on Semiconductor Engineering.

✇ Pocketables · Paul E King

OpenAI training models on Reddit data

May 17, 2024, 18:06

It's been announced that Reddit data is going to be used to train OpenAI's ChatGPT model on current topics (and probably to more closely resemble human interactions).

Redditors agreed to it in the terms of service.

When Your Content is created with or submitted to the Services, you grant us a worldwide, royalty-free, perpetual, irrevocable, non-exclusive, transferable, and sublicensable license to use, copy, modify, adapt, prepare derivative works of, distribute, store, perform, and display Your Content and any name, username, voice, or likeness provided in connection with Your Content in all media formats and channels now known or later developed anywhere in the world. This license includes the right for us to make Your Content available for syndication, broadcast, distribution, or publication by other companies, organizations, or individuals who partner with Reddit. You also agree that we may remove metadata associated with Your Content, and you irrevocably waive any claims and assertions of moral rights or attribution with respect to Your Content.

In other words, if you're not paying for the product, you are the product.

I suspect that using the voting combined with the commentary is going to help reveal which comments are useful and which are not, but I can't help thinking that ChatGPT is going to start making some pretty snarky responses on current events if it's trained on the groups I've looked at.

Image: What I suspect location queries to ChatGPT will return in 2025

I suspect that, were I a regular contributor to Reddit, I'd be annoyed that a chatbot is being trained to comment like me, as I thought I was only being used for advertising purposes and not to train Skynet to replace me.

It appears the main focus is on more recent content rather than resurrecting deceased redditors as AI ghouls to comment on the state of the post-IPO Reddit, but everything on Reddit now feeds the machine. Your work for your friends is being sold as a commodity. Fun times.


OpenAI training models on Reddit data by Paul E King first appeared on Pocketables.

✇ Ars Technica - All content · Benj Edwards

Anthropic introduces Claude 3.5 Sonnet, matching GPT-4o on benchmarks

June 20, 2024, 23:04
Image: The Anthropic Claude 3 logo, jazzed up by Benj Edwards. (credit: Anthropic / Benj Edwards)

On Thursday, Anthropic announced Claude 3.5 Sonnet, its latest AI language model and the first in a new series of "3.5" models that build upon Claude 3, launched in March. Claude 3.5 can compose text, analyze data, and write code. It features a 200,000 token context window and is available now on the Claude website and through an API. Anthropic also introduced Artifacts, a new feature in the Claude interface that shows related work documents in a dedicated window.

So far, people outside of Anthropic seem impressed. "This model is really, really good," wrote independent AI researcher Simon Willison on X. "I think this is the new best overall model (and both faster and half the price of Opus, similar to the GPT-4 Turbo to GPT-4o jump)."

As we've written before, benchmarks for large language models (LLMs) are troublesome because they can be cherry-picked and often do not capture the feel and nuance of using a machine to generate outputs on almost any conceivable topic. But according to Anthropic, Claude 3.5 Sonnet matches or outperforms competitor models like GPT-4o and Gemini 1.5 Pro on certain benchmarks like MMLU (undergraduate level knowledge), GSM8K (grade school math), and HumanEval (coding).


✇ Android Authority · Ryan McNeal

Elon Musk yells at Apple over ChatGPT: Threatens to ban iPhones at Tesla, X

June 11, 2024, 00:13
Image: Elon Musk / X (Twitter) logo. Credit: Elon Musk
  • Apple has entered into a partnership with OpenAI.
  • The partnership will bring ChatGPT-like features to iPhone, iPad, and iOS.
  • Elon Musk warns that he’ll ban Apple devices at all of his companies due to the integration.

There were rumors about Apple and OpenAI partnering up for a collaboration that would bring ChatGPT-like AI features to Apple’s devices. This was finally made concrete after an announcement during WWDC. Now Elon Musk is threatening to ban Apple devices at his companies.

During its WWDC keynote presentation, Apple announced it is teaming up with OpenAI to enhance its virtual assistant Siri. When iOS 18 rolls out, Siri will be integrated with ChatGPT-like smarts, giving it the flexibility of a conversational AI chatbot. While most queries will be handled by Apple’s technology, an algorithm will determine if a task should be handed over to OpenAI’s technology.

✇ Ars Technica - All content · Ashley Belanger

AI trained on photos from kids’ entire childhood without their consent

June 11, 2024, 00:37
Image credit: RicardoImagen | E+

Photos of Brazilian kids—sometimes spanning their entire childhood—have been used without their consent to power AI tools, including popular image generators like Stable Diffusion, Human Rights Watch (HRW) warned on Monday.

This act poses urgent privacy risks to kids and seems to increase risks of non-consensual AI-generated images bearing their likenesses, HRW's report said.

An HRW researcher, Hye Jung Han, helped expose the problem. She analyzed "less than 0.0001 percent" of LAION-5B, a dataset built from Common Crawl snapshots of the public web. The dataset does not contain the actual photos but includes image-text pairs derived from 5.85 billion images and captions posted online since 2008.


✇ Ars Technica - All content · Benj Edwards

Apple unveils “Apple Intelligence” AI features for iOS, iPadOS, and macOS

June 10, 2024, 21:15
Image credit: Apple

On Monday, Apple debuted "Apple Intelligence," a new suite of free AI-powered features for iOS 18, iPadOS 18, and macOS Sequoia that includes creating email summaries, generating images and emoji, and allowing Siri to take actions on your behalf. These features are achieved through a combination of on-device and cloud processing, with a strong emphasis on privacy. Apple says that Apple Intelligence features will be widely available later this year and will be available as a beta test for developers this summer.

The announcements came during a livestream WWDC keynote and a simultaneous event attended by the press on Apple's campus in Cupertino, California. In an introduction, Apple CEO Tim Cook said the company has been using machine learning for years, but the introduction of large language models (LLMs) presents new opportunities to elevate the capabilities of Apple products. He emphasized the need for both personalization and privacy in Apple's approach.

At last year's WWDC, Apple avoided using the term "AI" completely, instead preferring terms like "machine learning" as Apple's way of avoiding buzzy hype while integrating applications of AI into apps in useful ways. This year, Apple figured out a new way to largely avoid the abbreviation "AI" by coining "Apple Intelligence," a catchall branding term that refers to a broad group of machine learning, LLM, and image generation technologies. By our count, the term "AI" was used sparingly in the keynote—most notably near the end of the presentation when Apple executive Craig Federighi said, "It's AI for the rest of us."


✇ Ars Technica - All content · Benj Edwards

Journalists “deeply troubled” by OpenAI’s content deals with Vox, The Atlantic

May 31, 2024, 23:56
Image: A man covered in newspaper. (credit: Getty Images)

On Wednesday, Axios broke the news that OpenAI had signed deals with The Atlantic and Vox Media that will allow the ChatGPT maker to license their editorial content to further train its language models. But some of the publications' writers—and the unions that represent them—were surprised by the announcements and aren't happy about it. Already, two unions have released statements expressing "alarm" and "concern."

"The unionized members of The Atlantic Editorial and Business and Technology units are deeply troubled by the opaque agreement The Atlantic has made with OpenAI," reads a statement from the Atlantic union. "And especially by management's complete lack of transparency about what the agreement entails and how it will affect our work."

The Vox Union—which represents The Verge, SB Nation, and Vulture, among other publications—reacted in similar fashion, writing in a statement, "Today, members of the Vox Media Union ... were informed without warning that Vox Media entered into a 'strategic content and product partnership' with OpenAI. As both journalists and workers, we have serious concerns about this partnership, which we believe could adversely impact members of our union, not to mention the well-documented ethical and environmental concerns surrounding the use of generative AI."


✇ Ars Technica - All content · Benj Edwards

Google’s AI Overview is flawed by design, and a new company blog post hints at why

May 31, 2024, 21:47
Image: The Google "G" logo surrounded by whimsical characters, all of which look stunned and surprised. (credit: Google)

On Thursday, Google capped off a rough week of providing inaccurate and sometimes dangerous answers through its experimental AI Overview feature by authoring a follow-up blog post titled, "AI Overviews: About last week." In the post, attributed to Google VP Liz Reid, head of Google Search, the firm formally acknowledged issues with the feature and outlined steps taken to improve a system that appears flawed by design, even if it doesn't realize it is admitting it.

To recap, the AI Overview feature—which the company showed off at Google I/O a few weeks ago—aims to provide search users with summarized answers to questions by using an AI model integrated with Google's web ranking systems. Right now, it's an experimental feature that is not active for everyone, but when a participating user searches for a topic, they might see an AI-generated answer at the top of the results, pulled from highly ranked web content and summarized by an AI model.
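
Schematically, the pipeline described here is "retrieve highly ranked content, then summarize it with a model." The sketch below is only that schematic: the two-page corpus, the keyword-overlap ranker, and the first-sentence "summarizer" are naive stand-ins for Google's index and AI model, not its implementation:

```python
# Schematic of the retrieve-then-summarize pattern described above, not Google's
# implementation. The two-page corpus, keyword-overlap "ranking," and
# first-sentence "summarizer" are naive stand-ins for a web index and an LLM.

CORPUS = {
    "https://example.com/geology": "Geologists advise against eating rocks. Rocks are not food.",
    "https://example.com/pizza": "Cheese slides off pizza when the sauce is too wet. Let the pizza rest before slicing.",
}

def rank(query, corpus):
    """Order pages by crude keyword overlap with the query (stand-in for web ranking)."""
    terms = set(query.lower().split())
    return sorted(corpus, key=lambda url: -len(terms & set(corpus[url].lower().split())))

def summarize(passages):
    """Stand-in for the model step: return the first sentence of each passage."""
    return " ".join(p.split(". ")[0].rstrip(".") + "." for p in passages)

def ai_overview(query, top_k=1):
    top = rank(query, CORPUS)[:top_k]                # pull highly ranked content
    return summarize([CORPUS[u] for u in top]), top  # summarized answer plus sources

answer, sources = ai_overview("should i eat rocks")
print(answer)   # "Geologists advise against eating rocks."
print(sources)  # ['https://example.com/geology']
```

Whatever the ranker surfaces flows straight into the summary, which hints at why a system built this way can confidently repeat whatever its top-ranked sources happen to say.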

While Google claims this approach is "highly effective" and on par with its Featured Snippets in terms of accuracy, the past week has seen numerous examples of the AI system generating bizarre, incorrect, or even potentially harmful responses, as we detailed in a recent feature where Ars reporter Kyle Orland replicated many of the unusual outputs.


✇ Ars Technica - All content · Financial Times

Russia and China are using OpenAI tools to spread disinformation

May 31, 2024, 15:47
Image: OpenAI said it was committed to uncovering disinformation campaigns and was building its own AI-powered tools to make detection and analysis "more effective." (credit: FT montage/NurPhoto via Getty Images)

OpenAI has revealed that operations linked to Russia, China, Iran, and Israel have been using its artificial intelligence tools to create and spread disinformation, as the technology becomes a powerful weapon in information warfare in an election-heavy year.

The San Francisco-based maker of the ChatGPT chatbot said in a report on Thursday that five covert influence operations had used its AI models to generate text and images at a high volume, with fewer language errors than previously, as well as to generate comments or replies to their own posts. OpenAI’s policies prohibit the use of its models to deceive or mislead others.

The content focused on issues “including Russia’s invasion of Ukraine, the conflict in Gaza, the Indian elections, politics in Europe and the United States, and criticisms of the Chinese government by Chinese dissidents and foreign governments,” OpenAI said in the report.


✇ Ars Technica - All content · Samuel Axon

Report: Apple and OpenAI have signed a deal to partner on AI

May 30, 2024, 23:39
Image: OpenAI CEO Sam Altman. (credit: JASON REDMOND / Contributor | AFP)

Apple and OpenAI have successfully made a deal to include OpenAI's generative AI technology in Apple's software, according to The Information, which cites a source who has spoken to OpenAI CEO Sam Altman about the deal.

It was previously reported by Bloomberg that the deal was in the works. The news appeared in a longer article about Altman and his growing influence within the company.

"Now, [Altman] has fulfilled a longtime goal by striking a deal with Apple to use OpenAI’s conversational artificial intelligence in its products, which could be worth billions of dollars to the startup if it goes well," according to The Information's source.


✇ Android Authority · Rushil Agrawal

Done deal: ChatGPT will now learn from Reddit conversations

May 17, 2024, 01:13
Image: ChatGPT stock photo. Credit: Edgar Cervantes / Android Authority
  • Reddit and OpenAI have announced a new partnership.
  • OpenAI will gain access to Reddit’s vast and diverse conversational data to train its language models.
  • Reddit will get OpenAI as an advertising partner, along with new AI-powered features for its platform.

In what seems like a significant move for the future of artificial intelligence and the online community in general, Reddit and OpenAI have announced a new partnership aimed at enhancing user experiences on both platforms.

Generative AI models, such as OpenAI’s ChatGPT, rely heavily on real-world data and conversations to learn and refine their language generation capabilities. Reddit, with its millions of active users engaging in discussions on virtually every topic imaginable, is a treasure trove of authentic, up-to-date human interaction. This makes it an ideal resource for OpenAI to train its AI models, potentially leading to more nuanced, contextually aware, and relevant interactions with users.

  • ✇Pocketables
  • OpenAI training models on Reddit dataPaul E King
    It’s been announced that Reddit is going to be used to train OpenAI’s ChatGPT model on current topics (and probably more closely resemble human interactions.) Redditors agreed to it in the terms of service. When Your Content is created with or submitted to the Services, you grant us a worldwide, royalty-free, perpetual, irrevocable, non-exclusive, transferable, and sublicensable license to use, copy, modify, adapt, prepare derivative works of, distribute, store, perform, and display Your
     

OpenAI training models on Reddit data

17. Květen 2024 v 18:06

It’s been announced that Reddit data is going to be used to train OpenAI’s ChatGPT models on current topics (and probably to more closely resemble human interactions).

Redditors agreed to it in the terms of service.

When Your Content is created with or submitted to the Services, you grant us a worldwide, royalty-free, perpetual, irrevocable, non-exclusive, transferable, and sublicensable license to use, copy, modify, adapt, prepare derivative works of, distribute, store, perform, and display Your Content and any name, username, voice, or likeness provided in connection with Your Content in all media formats and channels now known or later developed anywhere in the world. This license includes the right for us to make Your Content available for syndication, broadcast, distribution, or publication by other companies, organizations, or individuals who partner with Reddit. You also agree that we may remove metadata associated with Your Content, and you irrevocably waive any claims and assertions of moral rights or attribution with respect to Your Content.

In other words, if you’re not paying for the product, you are the product.

I suspect the voting, combined with the commentary, is going to help reveal what is and isn’t a useful comment, but I can’t help thinking that ChatGPT is going to start making some pretty snarky responses on current events if it’s trained on the groups I’ve looked at.

What I suspect location queries to ChatGPT will return in 2025

I suspect that, were I a regular contributor to Reddit, I’d be annoyed that a chatbot is being trained to comment like me, since I thought I was only being used for advertising purposes, not for training Skynet to replace me.

It appears the main focus is on more recent content rather than resurrecting deceased redditors as AI ghouls to comment on the state of post-IPO Reddit, but everything on Reddit now feeds the machine. The posts you wrote for your friends are being sold as a commodity. Fun times.


OpenAI training models on Reddit data by Paul E King first appeared on Pocketables.


Key OpenAI executive departs, describes his struggle to get the computing power needed for research, and points to an excessive focus on “shiny products”

20. Květen 2024 v 07:28


Within 72 hours, a second key executive has left OpenAI, painting a rather unflattering picture of a company that is unquestionably one of the most strategically important businesses in the world.

Jan Leike, head of the alignment team, finished his last day at the company yesterday. For those who may not know, alignment is the critical process by which human values are encoded into a large language model (LLM); it is also what allows an LLM to follow established policies and guidelines. By all accounts, OpenAI’s entire alignment team has now been disbanded.

Recounting his time at the company, Leike noted that OpenAI is not investing adequate resources in making its LLMs safe and robust against adverse influences. The former executive also believes the company is not weighing the broader societal impact of the push to build machines “smarter than humans,” and that “safety culture and processes have taken a backseat to shiny products.”

In a surprising claim, Leike says his team at OpenAI often struggled to obtain the computing resources it needed:

Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute, and it was getting harder and harder to get this crucial research done.

That said, Leike is of course the second key executive to leave OpenAI within the last 72 hours. As we noted in a separate post, chief scientist Ilya Sutskever announced his decision to leave on May 14 so that he could work on a “personally meaningful” project.

Keep in mind that Sutskever played a key role in the November 2023 coup against OpenAI CEO Sam Altman, who was summarily removed by the board over his increasingly commercial leanings and his tendency to push AI-related advances without adequate safeguards. Officially, the board cited Altman’s lack of candor in his communications as its casus belli. The decision, however, triggered a full-blown revolt inside OpenAI, with employees threatening to resign en masse. Altman was ultimately able to return as CEO after forcing the board members who backed the coup to resign.

The article “Key OpenAI executive departs, describes his struggle to get the computing power needed for research, and points to an excessive focus on ‘shiny products’” first appeared on MOBILE PRESS and GAME PRESS.

  • ✇Ars Technica - All content
  • What happened to OpenAI’s long-term AI risk team?WIRED
    Enlarge (credit: Benj Edwards) In July last year, OpenAI announced the formation of a new research team that would prepare for the advent of supersmart artificial intelligence capable of outwitting and overpowering its creators. Ilya Sutskever, OpenAI’s chief scientist and one of the company’s co-founders, was named as the co-lead of this new team. OpenAI said the team would receive 20 percent of its computing power. Now OpenAI’s “superalignment team” is no more, the company
     

What happened to OpenAI’s long-term AI risk team?

Od: WIRED
18. Květen 2024 v 17:54
A glowing OpenAI logo on a blue background.

Enlarge (credit: Benj Edwards)

In July last year, OpenAI announced the formation of a new research team that would prepare for the advent of supersmart artificial intelligence capable of outwitting and overpowering its creators. Ilya Sutskever, OpenAI’s chief scientist and one of the company’s co-founders, was named as the co-lead of this new team. OpenAI said the team would receive 20 percent of its computing power.

Now OpenAI’s “superalignment team” is no more, the company confirms. That comes after the departures of several researchers involved, Tuesday’s news that Sutskever was leaving the company, and the resignation of the team’s other co-lead. The group’s work will be absorbed into OpenAI’s other research efforts.

Read 14 remaining paragraphs | Comments

  • ✇Ars Technica - All content
  • OpenAI will use Reddit posts to train ChatGPT under new dealScharon Harding
    Enlarge (credit: Getty) Stuff posted on Reddit is getting incorporated into ChatGPT, Reddit and OpenAI announced on Thursday. The new partnership grants OpenAI access to Reddit’s Data API, giving the generative AI firm real-time access to Reddit posts. Reddit content will be incorporated into ChatGPT "and new products," Reddit's blog post said. The social media firm claims the partnership will "enable OpenAI’s AI tools to better understand and showcase Reddit content, especia
     

OpenAI will use Reddit posts to train ChatGPT under new deal

17. Květen 2024 v 23:18
An image of a woman holding a cell phone in front of the Reddit logo displayed on a computer screen, on April 29, 2024, in Edmonton, Canada.

Enlarge (credit: Getty)

Stuff posted on Reddit is getting incorporated into ChatGPT, Reddit and OpenAI announced on Thursday. The new partnership grants OpenAI access to Reddit’s Data API, giving the generative AI firm real-time access to Reddit posts.
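
For illustration only, here is a minimal sketch of what programmatic, near-real-time access to Reddit posts can look like. It uses Reddit's public JSON listing endpoints rather than the partner-level Data API the deal actually covers, and the subreddit name and user agent are placeholder values.

import requests

# Hypothetical example: fetch the newest posts from one subreddit via
# Reddit's public JSON listing. The partner Data API mentioned in the
# deal offers broader, authenticated access; this is only a sketch.
HEADERS = {"User-Agent": "example-feed-reader/0.1"}  # placeholder user agent

def fetch_new_posts(subreddit: str, limit: int = 10):
    url = f"https://www.reddit.com/r/{subreddit}/new.json"
    resp = requests.get(url, headers=HEADERS, params={"limit": limit}, timeout=10)
    resp.raise_for_status()
    children = resp.json()["data"]["children"]
    # Each child wraps one post; keep the fields a model trainer might
    # plausibly care about: title, body text, and score.
    return [
        {"title": c["data"]["title"],
         "text": c["data"].get("selftext", ""),
         "score": c["data"]["score"]}
        for c in children
    ]

if __name__ == "__main__":
    for post in fetch_new_posts("ChatGPT", limit=5):
        print(post["score"], post["title"])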

Reddit content will be incorporated into ChatGPT "and new products," Reddit's blog post said. The social media firm claims the partnership will "enable OpenAI’s AI tools to better understand and showcase Reddit content, especially on recent topics." OpenAI will also start advertising on Reddit.

The deal is similar to one that Reddit struck with Google in February that allows the tech giant to make "new ways to display Reddit content" and provide "more efficient ways to train models," Reddit said at the time. Neither Reddit nor OpenAI disclosed the financial terms of their partnership, but Reddit's partnership with Google was reportedly worth $60 million.

Read 8 remaining paragraphs | Comments

  • ✇Android Authority
  • OpenAI expected to announce major ChatGPT updates today: How to watch livestreamAdamya Sharma
    Credit: Edgar Cervantes / Android Authority OpenAI will announce updates to ChatGPT and GPT 4 at its “Spring Updates” event today. The livestream for the announcements is set to start at 10 AM PT (1 PM ET). The company is reportedly working on a ChatGPT-powered search engine and a new multimodal assistant. As expected, OpenAI is all set to make some important ChatGPT and GPT 4 announcements later today. The company has scheduled a “Spring Updates” livestream on its own website for 10
     

OpenAI expected to announce major ChatGPT updates today: How to watch livestream

13. Květen 2024 v 08:24

OpenAI on website on smartphone stock photo (2)

Credit: Edgar Cervantes / Android Authority
  • OpenAI will announce updates to ChatGPT and GPT 4 at its “Spring Updates” event today.
  • The livestream for the announcements is set to start at 10 AM PT (1 PM ET).
  • The company is reportedly working on a ChatGPT-powered search engine and a new multimodal assistant.

As expected, OpenAI is all set to make some important ChatGPT and GPT 4 announcements later today. The company has scheduled a “Spring Updates” livestream on its own website for 10 AM PT (1 PM ET).

  • ✇Android Authority
  • OpenAI could announce new multimodal assistant to directly take on GoogleC. Scott Brown
    Credit: Edgar Cervantes / Android Authority On Monday, OpenAI is holding an event that could see an announcement about a new multimodal digital assistant. Being multimodal would allow the assistant to use images for prompts, such as identifying and translating a sign in the real world. This would be a direct threat against Google’s digital assistants, namely Google Assistant and the newer Gemini. Over the past few weeks, the rumor mill has been churning, suggesting that OpenAI — the com
     

OpenAI could announce new multimodal assistant to directly take on Google

11. Květen 2024 v 21:55
OpenAI on website on smartphone stock photo (1)
Credit: Edgar Cervantes / Android Authority
  • On Monday, OpenAI is holding an event that could see an announcement about a new multimodal digital assistant.
  • Being multimodal would allow the assistant to use images for prompts, such as identifying and translating a sign in the real world.
  • This would be a direct threat against Google’s digital assistants, namely Google Assistant and the newer Gemini.

Over the past few weeks, the rumor mill has been churning, suggesting that OpenAI — the company responsible for ChatGPT — could soon launch an AI-powered search engine, which would be a direct threat to Google’s core business. Given how prominent ChatGPT has become in such a short time, this would represent the first real threat to Google Search in decades.

However, it’s looking less likely that OpenAI has a search engine on the way (via The Information). Instead, new rumors suggest that OpenAI’s scheduled event on Monday could see the company announcing a multimodal digital assistant. While not a traditional search engine, it would still allow people to search for things using the power of AI, so it would still be a significant threat to Google.

  • ✇Android Authority
  • Apple and OpenAI closing in on deal for ChatGPT in iOSC. Scott Brown
    Credit: Robert Triggs / Android Authority According to a trusted industry analyst, Apple and OpenAI could be finalizing a deal to bring ChatGPT features to iOS. It is unclear if Apple’s AI features based on its own LLM would debut on iOS alongside OpenAI features. Meanwhile, a separate negotiation with Google to bring Gemini features to iOS is still ongoing. Over the past six months, Google has been hitting Gemini hard. It seems Gemini is now in everything Google does, including the And
     

Apple and OpenAI closing in on deal for ChatGPT in iOS

11. Květen 2024 v 17:21
Apple Logo EOY 2020
Credit: Robert Triggs / Android Authority
  • According to a trusted industry analyst, Apple and OpenAI could be finalizing a deal to bring ChatGPT features to iOS.
  • It is unclear if Apple’s AI features based on its own LLM would debut on iOS alongside OpenAI features.
  • Meanwhile, a separate negotiation with Google to bring Gemini features to iOS is still ongoing.

Over the past six months, Google has been hitting Gemini hard. It seems Gemini is now in everything Google does, including the Android operating system, the most popular mobile OS in the world. Meanwhile, Apple hasn’t done that much at all with generative AI and large language models (LLM). All signs point to that changing very soon — just not through Apple itself.

Over the past few months, we’ve learned that Apple has been in discussions with both Google and OpenAI (which owns ChatGPT) about using their respective LLMs to power future features coming to iOS. Now, according to industry analyst Mark Gurman, Apple’s deal with OpenAI might be close to finalized.

  • ✇Techdirt
  • Ctrl-Alt-Speech: Between A Rock And A Hard PolicyLeigh Beadon
    Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw. Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed. In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover: Stack Overflow bans users en masse for rebelling against OpenAI partnership (Tom’s Hardware) T
     

Ctrl-Alt-Speech: Between A Rock And A Hard Policy

11. Květen 2024 v 00:25

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

OpenAI Vs Reddit: Why did OpenAI withdraw its lawsuit?

Od: Efe Udin
13. Květen 2024 v 08:08
OpenAI

In the realm of artificial intelligence and copyright law, a recent clash between OpenAI and the Reddit community r/ChatGPT has sparked significant controversy and discussions ...

The post OpenAI Vs Reddit: Why did OpenAI withdraw its lawsuit? appeared first on Gizchina.com.

  • ✇Boing Boing
  • OpenAI said to be launching search engine on Monday, directly targeting Google'sRob Beschizza
    OpenAI, the Microsoft-backed A.I. startup that's often in the news, is reportedly launching a search engine on Monday. Powered by its generative A.I. technology, the site takes direct aim at Google's cash cow. OpenAI declined to comment. The announcement could be timed a day before the Tuesday start of Google's annual I/O conference, where the tech giant is expected to unveil a slew of AI-related products. — Read the rest The post OpenAI said to be launching search engine on Monday, directly
     

OpenAI said to be launching search engine on Monday, directly targeting Google's

10. Květen 2024 v 16:57
hate computers

OpenAI, the Microsoft-backed A.I. startup that's often in the news, is reportedly launching a search engine on Monday. Powered by its generative A.I. technology, the site takes direct aim at Google's cash cow.

OpenAI declined to comment. The announcement could be timed a day before the Tuesday start of Google's annual I/O conference, where the tech giant is expected to unveil a slew of AI-related products.

Read the rest

The post OpenAI said to be launching search engine on Monday, directly targeting Google's appeared first on Boing Boing.

  • ✇Android Authority
  • ChatGPT’s alternative to Google Search might arrive on May 9Rushil Agrawal
    Credit: Calvin Wankhede / Android Authority Rumors suggest OpenAI may unveil a new search product on May 9. The domain name search.chatgpt.com was recently registered. This new search product might focus on providing AI-powered quick answers. With the AI war between Google Gemini and Microsoft’s ChatGPT escalating, rumors pointing to a May 9 unveiling of a new ChatGPT-powered web search offering have surfaced, setting the stage for a direct challenge to Google’s search dominance. A Redd
     

ChatGPT’s alternative to Google Search might arrive on May 9

3. Květen 2024 v 02:24
ChatGPT stock photo 71
Credit: Calvin Wankhede / Android Authority
  • Rumors suggest OpenAI may unveil a new search product on May 9.
  • The domain name search.chatgpt.com was recently registered.
  • This new search product might focus on providing AI-powered quick answers.

With the AI war between Google’s Gemini and OpenAI’s ChatGPT escalating, rumors pointing to a May 9 unveiling of a new ChatGPT-powered web search offering have surfaced, setting the stage for a direct challenge to Google’s search dominance.

A Reddit user spotted the creation of SSL certificates for the domain search.chatgpt.com. We also found a tweet by an AI podcast host reading, “Search (dot) ChatGPT (dot) com May 9th,” hinting at a potential release date. The subdomain mentioned currently displays a cryptic “Not found” message instead of throwing a 404 or domain error, further adding to the speculation.
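
As a side note for the curious, a certificate sighting like this can be reproduced with a generic TLS check; the short sketch below simply reads whatever certificate the subdomain presents, and is not necessarily the method the Reddit user used.

import socket
import ssl

# Connect to the subdomain and read the certificate the server presents.
# A valid certificate appearing before any public announcement is the
# kind of breadcrumb this speculation was built on.
def peek_certificate(host: str, port: int = 443) -> dict:
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()  # subject, issuer, validity dates

if __name__ == "__main__":
    cert = peek_certificate("search.chatgpt.com")
    print(cert.get("subject"))
    print(cert.get("notBefore"), "->", cert.get("notAfter"))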

  • ✇Android Authority
  • OpenAI makes it easier for ChatGPT to remember and forgetAamir Siddiqui
    Credit: Calvin Wankhede / Android Authority OpenAI is rolling out ChatGPT’s Memory function to ChatGPT Plus users, letting ChatGPT remember personal facts. ChatGPT users are also getting access to Temporary Chat for one-off conversations that won’t appear in chat history. ChatGPT has been a boon for a lot of people, making their lives easier in ways that previous non-AI digital assistants just didn’t. Once you figure out how to use ChatGPT effectively, it becomes an indispensable tool.
     

OpenAI makes it easier for ChatGPT to remember and forget

1. Květen 2024 v 12:25
ChatGPT stock photo 73
Credit: Calvin Wankhede / Android Authority
  • OpenAI is rolling out ChatGPT’s Memory function to ChatGPT Plus users, letting ChatGPT remember personal facts.
  • ChatGPT users are also getting access to Temporary Chat for one-off conversations that won’t appear in chat history.

ChatGPT has been a boon for a lot of people, making their lives easier in ways that previous non-AI digital assistants just didn’t. Once you figure out how to use ChatGPT effectively, it becomes an indispensable tool. OpenAI, the company behind ChatGPT, recently added a Memory function to ChatGPT, and this is now rolling out to all ChatGPT Plus users. Free users are getting the Temporary Chat feature, which doesn’t keep conversations in your chat history.

OpenAI says ChatGPT’s Memory function is very easy to use. First, switch it on in Settings, and then you can tell ChatGPT anything you want it to remember. All of its future responses will take those facts into account, saving you from repeating yourself.
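
ChatGPT’s built-in Memory lives inside the product, but the underlying idea is easy to picture. Below is a minimal, hypothetical sketch of a “remembered facts” layer built on OpenAI’s Chat Completions API; the stored facts, helper name, and model choice are invented for illustration and are not how OpenAI actually implements Memory.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Invented example: facts the user has asked the assistant to remember.
remembered_facts = [
    "The user's name is Alex.",
    "The user prefers metric units.",
]

def ask_with_memory(question: str) -> str:
    # Inject remembered facts into the system prompt so future responses
    # can take them into account without the user repeating themselves.
    system_prompt = "You are a helpful assistant.\nKnown facts:\n" + \
        "\n".join(f"- {fact}" for fact in remembered_facts)
    response = client.chat.completions.create(
        model="gpt-4-turbo",  # placeholder model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_with_memory("How far is 10 miles for me?"))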

  • ✇Android Police
  • ChatGPT's Google Search alternative could become a reality soonJay Bonggolto
    For a while now, Google has been the dominant player in search engines, handling almost every search worldwide. But lately, it's been seriously challenged by Microsoft's Bing, which got a boost from ChatGPT in its AI chatbot department. Now, with OpenAI stepping up, Google might have more competition than ever. Rumors about OpenAI working on its own search engine have been floating around, and it looks like it's about to become a reality in less than a week.
     

ChatGPT's Google Search alternative could become a reality soon

3. Květen 2024 v 11:47

For a while now, Google has been the dominant player in search engines, handling almost every search worldwide. But lately, it's been seriously challenged by Microsoft's Bing, which got a boost from ChatGPT in its AI chatbot department. Now, with OpenAI stepping up, Google might have more competition than ever. Rumors about OpenAI working on its own search engine have been floating around, and it looks like it's about to become a reality in less than a week.

  • ✇GAME PRESS
  • ChatGPT can now be used without an account, but one condition still appliesMobile Press
    Using ChatGPT previously required registering an account, which could feel like a tedious process; more than once we saw users having to sign up again even while using the same browser. Fortunately, OpenAI is bringing some welcome changes to how we use the chatbot, with the company saying that users no longer need to register an account to use the service. However, not everything OpenAI announced is account-free, as you will soon find out. The announcement
     

ChatGPT can now be used without an account, but one condition still applies

2. Duben 2024 v 07:44


Using ChatGPT previously required registering an account, which could feel like a tedious process; more than once we saw users having to sign up again even while using the same browser.

Fortunately, OpenAI is bringing some welcome changes to how we use the chatbot, with the company saying that users no longer need to register an account to use the service. However, not everything OpenAI announced is account-free, as you will soon find out.

The announcement was made through several channels, starting with the official blog on OpenAI’s website, followed shortly by a post on X revealing that ChatGPT users no longer need to sign up to use the service. In its blog post, the company said the following, while also noting that ChatGPT has boomed, with more than 100 million people using it every week across many countries.

It’s core to our mission to make tools like ChatGPT broadly available so that people can experience the benefits of AI. More than 100 million people across 185 countries use ChatGPT weekly to learn something new, find creative inspiration, and get answers to their questions. Starting today, you can use ChatGPT instantly, without needing to sign up. We’re rolling this out gradually, with the aim to make AI accessible to anyone curious about its capabilities.

We’re rolling out the ability to start using ChatGPT instantly, without needing to sign-up, so it’s even easier to experience the potential of AI. https://t.co/juhjKfQaoD pic.twitter.com/TIVoX8KFDB

— OpenAI (@OpenAI) April 1, 2024

Unfortunately, this easy access is limited to the ChatGPT chatbot itself, as OpenAI’s other products cannot be used without an account. Those products include DALL-E 3, which also requires a subscription. Other OpenAI services, such as the newly announced AI voice-cloning service Voice Engine and the video-generation platform Sora, likewise remain available to only a limited number of users. In fact, even those who regularly use an OpenAI account get no benefit with these other services until a proper rollout begins.

The article “ChatGPT can now be used without an account, but one condition still applies” first appeared on MOBILE PRESS and GAME PRESS.

  • ✇IEEE Spectrum
  • Figure Raises $675M for Its Humanoid Robot DevelopmentEvan Ackerman
    Today, Figure is announcing an astonishing US $675 million Series B raise, which values the company at an even more astonishing $2.6 billion. Figure is one of the companies working toward a multipurpose or general-purpose (depending on whom you ask) bipedal or humanoid (depending on whom you ask) robot. The astonishing thing about this valuation is that Figure’s robot is still very much in the development phase—although they’re making rapid progress, which they demonstrate in a new video posted
     

Figure Raises $675M for Its Humanoid Robot Development

29. Únor 2024 v 14:00


Today, Figure is announcing an astonishing US $675 million Series B raise, which values the company at an even more astonishing $2.6 billion. Figure is one of the companies working toward a multipurpose or general-purpose (depending on whom you ask) bipedal or humanoid (depending on whom you ask) robot. The astonishing thing about this valuation is that Figure’s robot is still very much in the development phase—although they’re making rapid progress, which they demonstrate in a new video posted this week.


This round of funding comes from Microsoft, OpenAI Startup Fund, Nvidia, Jeff Bezos (through Bezos Expeditions), Parkway Venture Capital, Intel Capital, Align Ventures, and ARK Invest. Figure says that they’re going to use this new capital “for scaling up AI training, robot manufacturing, expanding engineering head count, and advancing commercial deployment efforts.” In addition, Figure and OpenAI will be collaborating on the development of “next-generation AI models for humanoid robots” which will “help accelerate Figure’s commercial timeline by enhancing the capabilities of humanoid robots to process and reason from language.”

As far as that commercial timeline goes, here’s the most recent update:

[Embedded video: Figure]

And to understand everything that’s going on here, we sent a whole bunch of questions to Jenna Reher, senior robotics/AI engineer at Figure.

What does “fully autonomous” mean, exactly?

Jenna Reher: In this case, we simply put the robot on the ground and hit go on the task with no other user input. What you see is using a learned vision model for bin detection that allows us to localize the robot relative to the target bin and get the bin pose. The robot can then navigate itself to within reach of the bin, determine grasp points based on the bin pose, and detect grasp success through the measured forces on the hands. Once the robot turns and sees the conveyor, the rest of the task rolls out in a similar manner. By doing things in this way we can move the bins and conveyor around in the test space or start the robot from a different position and still complete the task successfully.
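
To make the pipeline Reher describes easier to follow, here is a heavily simplified, hypothetical sketch of that control flow (detect the bin pose, navigate within reach, grasp, verify via measured hand forces, then handle the conveyor). Every helper below is a stub with made-up values, standing in for Figure's learned vision model, navigation, and whole-body control; none of it is Figure's actual software.

GRASP_FORCE_THRESHOLD = 5.0  # newtons per hand, an invented value

def detect_bin_pose():            # stub for the learned vision model
    return (1.2, 0.3, 0.0)        # x, y, yaw of the bin, made up

def navigate_to(pose, standoff):  # stub for the locomotion layer
    print(f"walking to {pose} with {standoff} m standoff")

def plan_grasp(bin_pose):         # grasp points derived from the bin pose
    return [(0.1, 0.0), (-0.1, 0.0)]

def execute_grasp(points):
    print(f"grasping at {points}")

def hand_forces():                # measured forces used to confirm the grasp
    return [7.2, 6.8]             # made-up readings above the threshold

def run_tote_task():
    bin_pose = detect_bin_pose()
    navigate_to(bin_pose, standoff=0.4)
    execute_grasp(plan_grasp(bin_pose))
    # Grasp success is inferred from the measured forces on the hands.
    if min(hand_forces()) < GRASP_FORCE_THRESHOLD:
        print("grasp failed, aborting task")
        return
    conveyor_pose = (3.0, 1.0, 1.57)   # stub for detecting the conveyor
    navigate_to(conveyor_pose, standoff=0.4)
    print("placing tote on conveyor and returning to reset pose")

run_tote_task()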

How many takes did it take to get this take?

Reher: We’ve been running this use case consistently for some time now as part of our work in the lab, so we didn’t really have to change much for the filming here. We did two or three practice runs in the morning and then three filming takes. All of the takes were successful, so the extras were to make sure we got the cleanest one to show.

What’s back in the Advanced Actuator Lab?

Reher: We have an awesome team of folks working on some exciting custom actuator designs for our future robots, as well as supporting and characterizing the actuators that went into our current robots.

That’s a very specific number for “speed vs. human.” Which human did you measure the robot’s speed against?

Reher: We timed Brett [Adcock, founder of Figure] and a few poor engineers doing the task and took the average to get a rough baseline. If you are observant, that seemingly overspecific number is just saying we’re at 1/6 human speed. The main point that we’re trying to make here is that we are aware we are currently below human speed, and it’s an important metric to track as we improve.

What’s the tether for?

Reher: For this task we currently process the camera data off-robot while all of the behavior planning and control happens on board in the computer that’s in the torso. Our robots should be fully tetherless in the near future as we finish packaging all of that on board. We’ve been developing behaviors quickly in the lab here at Figure in parallel to all of the other systems engineering and integration efforts happening, so hopefully folks notice all of these subtle parallel threads converging as we try to release regular updates.

How the heck do you keep your robotics lab so clean?

Reher: Everything we’ve filmed so far is in our large robot test lab, so it’s a lot easier to keep the area clean when people’s desks aren’t intruding in the space. Definitely no guarantees on that level of cleanliness if the camera were pointed in the other direction!

Is the robot in the background doing okay?

Reher: Yes! The other robot was patiently standing there in the background, waiting for the filming to finish up so that our manipulation team could get back to training it to do more manipulation tasks. We hope we can share some more developments with that robot as the main star in the near future.

What would happen if I put a single bowling ball into that tote?

Reher: A bowling ball is particularly menacing to this task primarily due to the moving mass, in addition to the impact if you are throwing it in. The robot would in all likelihood end up dropping the tote, stay standing, and abort the task. With what you see here, we assume that the mass of the tote is known a priori so that our whole-body controller can compensate for the external forces while tracking the manipulation task. Reacting to and estimating larger unknown disturbances such as this is a challenging problem, but we’re definitely working on it.

Tell me more about that very Zen arm and hand pose that the robot adopts after putting the tote on the conveyor.

Reher: It does look kind of Zen! If you rewatch our coffee video, you’ll notice the same pose after the robot gets things brewing. This is a reset pose that our controller will go into between manipulation tasks while the robot is awaiting commands to execute either an engineered behavior or a learned policy.

Are the fingers less fragile than they look?

Reher: They are more robust than they look, but not impervious to damage by any means. The design is pretty modular, which is great, meaning that if we damage one or two fingers, there is a small number of parts to swap to get everything back up and running. The current fingers won’t necessarily survive a direct impact from a bad fall, but can pick up totes and do manipulation tasks all day without issues.

Is the Figure logo footsteps?

Reher: One of the reasons I really like the Figure logo is that it has a bunch of different interpretations depending on how you look at it. In some cases it’s just an F that looks like a footstep plan rollout, while some of the logo animations we have look like active stepping. One other possible interpretation could be an occupancy grid.

  • ✇Ars Technica - All content
  • Elon Musk sues OpenAI and Sam Altman, accusing them of chasing profitsFinancial Times
    Enlarge (credit: Anadolu Agency / Contributor | Anadolu) Elon Musk has sued OpenAI and its chief executive Sam Altman for breach of contract, alleging they have compromised the start-up’s original mission of building artificial intelligence systems for the benefit of humanity. In the lawsuit, filed to a San Francisco court on Thursday, Musk’s lawyers wrote that OpenAI’s multibillion-dollar alliance with Microsoft had broken an agreement to make a major breakthrough in AI “fre
     

Elon Musk sues OpenAI and Sam Altman, accusing them of chasing profits

1. Březen 2024 v 15:31
Elon Musk sues OpenAI and Sam Altman, accusing them of chasing profits

Enlarge (credit: Anadolu Agency / Contributor | Anadolu)

Elon Musk has sued OpenAI and its chief executive Sam Altman for breach of contract, alleging they have compromised the start-up’s original mission of building artificial intelligence systems for the benefit of humanity.

In the lawsuit, filed to a San Francisco court on Thursday, Musk’s lawyers wrote that OpenAI’s multibillion-dollar alliance with Microsoft had broken an agreement to make a major breakthrough in AI “freely available to the public.”

Instead, the lawsuit said, OpenAI was working on “proprietary technology to maximize profits for literally the largest company in the world.”

Read 19 remaining paragraphs | Comments

One Minute of Sora by OpenAI: Over an Hour of Generation Time

21. Únor 2024 v 13:47
OpenAI Sora

OpenAI, the renowned research organization behind GPT-3 and DALL-E 2, recently unveiled its latest innovation: Sora, a text-to-video model. It is capable of generating high-quality videos up to a ...

The post One Minute of Sora by OpenAI: Over an Hour of Generation Time appeared first on Gizchina.com.

  • ✇Ars Technica - All content
  • Google goes “open AI” with Gemma, a free, open-weights chatbot familyBenj Edwards
    Enlarge (credit: Google) On Wednesday, Google announced a new family of AI language models called Gemma, which are free, open-weights models built on technology similar to the more powerful but closed Gemini models. Unlike Gemini, Gemma models can run locally on a desktop or laptop computer. It's Google's first significant open large language model (LLM) release since OpenAI's ChatGPT started a frenzy for AI chatbots in 2022. Gemma models come in two sizes: Gemma 2B (2 billi
     

Google goes “open AI” with Gemma, a free, open-weights chatbot family

21. Únor 2024 v 23:01
The Google Gemma logo

Enlarge (credit: Google)

On Wednesday, Google announced a new family of AI language models called Gemma, which are free, open-weights models built on technology similar to the more powerful but closed Gemini models. Unlike Gemini, Gemma models can run locally on a desktop or laptop computer. It's Google's first significant open large language model (LLM) release since OpenAI's ChatGPT started a frenzy for AI chatbots in 2022.

Gemma models come in two sizes: Gemma 2B (2 billion parameters) and Gemma 7B (7 billion parameters), each available in pre-trained and instruction-tuned variants. In AI, parameters are values in a neural network that determine AI model behavior, and weights are a subset of these parameters stored in a file.
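
Because the weights are open, running Gemma locally takes only a few lines with the Hugging Face transformers library. The sketch below assumes a recent transformers release, that you have accepted Google's license for the model on the Hub and logged in, and enough RAM for the 2B instruction-tuned variant; it is a generic example, not Google's reference code.

# Sketch: run Gemma 2B locally via Hugging Face transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b-it"  # the instruction-tuned 2B variant

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Explain in one sentence what open-weights means."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))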

Developed by Google DeepMind and other Google AI teams, Gemma pulls from techniques learned during the development of Gemini, which is the family name for Google's most capable (public-facing) commercial LLMs, including the ones that power its Gemini AI assistant. Google says the name comes from the Latin gemma, which means "precious stone."

Read 5 remaining paragraphs | Comments

  • ✇Ars Technica - All content
  • ChatGPT goes temporarily “insane” with unexpected outputs, spooking usersBenj Edwards
    Enlarge (credit: Benj Edwards / Getty Images) On Tuesday, ChatGPT users began reporting unexpected outputs from OpenAI's AI assistant, flooding the r/ChatGPT Reddit sub with reports of the AI assistant "having a stroke," "going insane," "rambling," and "losing it." OpenAI has acknowledged the problem and is working on a fix, but the experience serves as a high-profile example of how some people perceive malfunctioning large language models, which are designed to mimic humanli
     

ChatGPT goes temporarily “insane” with unexpected outputs, spooking users

21. Únor 2024 v 17:57
Illustration of a broken toy robot.

Enlarge (credit: Benj Edwards / Getty Images)

On Tuesday, ChatGPT users began reporting unexpected outputs from OpenAI's AI assistant, flooding the r/ChatGPT Reddit sub with reports of the AI assistant "having a stroke," "going insane," "rambling," and "losing it." OpenAI has acknowledged the problem and is working on a fix, but the experience serves as a high-profile example of how some people perceive malfunctioning large language models, which are designed to mimic humanlike output.

ChatGPT is not alive and does not have a mind to lose, but tugging on human metaphors (called "anthropomorphization") seems to be the easiest way for most people to describe the unexpected outputs they have been seeing from the AI model. They're forced to use those terms because OpenAI doesn't share exactly how ChatGPT works under the hood; the underlying large language models function like a black box.

"It gave me the exact same feeling—like watching someone slowly lose their mind either from psychosis or dementia," wrote a Reddit user named z3ldafitzgerald in response to a post about ChatGPT bugging out. "It’s the first time anything AI related sincerely gave me the creeps."

Read 7 remaining paragraphs | Comments

  • ✇I, Cringely
  • AI and Moore’s Law: It’s the Chips, StupidRobert X. Cringely
    Sorry I’ve been away: time flies when you are not having fun. But now I’m back. Moore’s Law, which began with a random observation by the late Intel co-founder Gordon Moore that transistor densities on silicon substrates were doubling every 18 months, has over the intervening 60+ years been both borne-out yet also changed from a lithography technical feature to an economic law. It’s getting harder to etch ever-thinner lines, so we’ve taken as a culture to emphasizing the cost part of Moore’s Law
     

AI and Moore’s Law: It’s the Chips, Stupid

15. Červen 2023 v 16:20

Sorry I’ve been away: time flies when you are not having fun. But now I’m back.

Moore’s Law, which began with a random observation by the late Intel co-founder Gordon Moore that transistor densities on silicon substrates were doubling every 18 months, has over the intervening 60+ years been borne out, yet it has also changed from a lithography technical feature into an economic law. It’s getting harder to etch ever-thinner lines, so we’ve taken as a culture to emphasizing the cost part of Moore’s Law (chips drop in price by 50 percent on an area basis, in dollars per acre of silicon, every 18 months). We can accomplish this economic effect through a variety of techniques, including multiple cores, System-On-Chip design, and unified memory — anything to keep prices going down.
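
The economic version of the law is easy to write down: if cost per unit of silicon area halves every 18 months, then cost(t) = cost(0) x 0.5^(t / 1.5), with t in years. A quick check of that curve, purely as an illustration of the claim above:

# Cost per unit of silicon area under the "halves every 18 months" rule.
def silicon_cost(cost0: float, years: float) -> float:
    return cost0 * 0.5 ** (years / 1.5)

for years in (0, 3, 6, 9):
    print(years, "years:", round(silicon_cost(100.0, years), 2))
# prints 100.0, 25.0, 6.25, 1.56 -- a 50 percent drop every 18 months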

I predict that Generative Artificial Intelligence is going to go a long way toward keeping Moore’s Law in force and the way this is going to happen says a lot about the chip business, global economics, and Artificial Intelligence, itself.

Let’s take these points in reverse order. First, Generative AI products like ChatGPT are astoundingly expensive to build. GPT-4 reportedly cost $100+ million to build, mainly in cloud computing resources. Yes, this was primarily Microsoft paying itself and so maybe the economics are a bit suspect, but the actual calculations took tens of thousands of GPUs running for months and that can’t be denied. Nor can it be denied that building GPT-5 will cost even more.

Some people think this economic argument is wrong, that Large Language Models comparable to ChatGPT can be built using Open Source software for only a few hundred or a few thousand dollars. Yes and no.

Competitive-yet-inexpensive LLMs built at such low cost have nearly all started with Meta’s (Facebook’s) LLaMA (Large Language Model Meta AI), which has effectively become Open Source now that both the code and the associated parameter weights (a big deal in fine-tuning language models) have been released to the wild. It’s not clear how much of this Meta actually intended to do, but this genie is out of its bottle to great effect in the AI research community.

But GPT-5 will still cost $1+ billion, and even ChatGPT itself costs about $1 million per day just to run. That’s $300+ million per year to run old code.

So the current el cheapo AI research frenzy is likely to subside as LLaMA ages into obsolescence and has to be replaced by something more expensive, putting Google, Microsoft and OpenAI back in control.  Understand, too, that these big, established companies like the idea of LLMs costing so much to build because that makes it harder for startups to disrupt. It’s a form of restraint of trade, though not illegal.

But before then — and even after then in certain vertical markets — there is a lot to learn and a lot of business to be done using these smaller models, which can be used to build true professional language models, something GPT-4 and ChatGPT definitely are not.

GPT-4 and ChatGPT are general-purpose models — supposedly useful for pretty much anything. But that means that when you are asking ChatGPT for legal advice, for example, you are asking it to imitate a lawyer. While ChatGPT may be able to pass the bar exam, so did my cousin Chad, who, I assure you, is an idiot.

If you are reading this I’ll bet you are smarter than your lawyer.

This means there is an opportunity for vertical LLMs trained on different data — real data from industries like medicine and auto mechanics. Whoever owns this data will own these markets.

What will make these models both better and cheaper is that they can be built from a LLaMA base, because most of that data doesn’t have to change over time to still fix your car, and the added Machine Learning won’t come from crap found on the Internet but rather from the service manuals actually used to train mechanics and fix cars.

We are approaching a time when LLMs won’t have to imitate mechanics and nurses because they will be trained like mechanics and nurses.

Bloomberg has already done this for investment advice using its unique database of historical financial information.

With an average of 50 billion nodes, these vertical models will cost only five percent as much to run as OpenAI’s one trillion node GPT-4.

But what does this have to do with semiconductors and Moore’s Law? Chip design is very similar to fixing cars in that there is a very limited amount of Machine Learning data required (think of logic cells as language words). It’s a small vocabulary (the auto repair section at the public library is just a few shelves of books). And EVEN BETTER THAN AUTO REPAIR, the semiconductor industry has well-developed simulation tools for testing logic before it is actually built.

So it ought to be pretty simple to apply AI to chip design, building custom chip-design models that iterate against existing simulators and refine new designs that actually have a pretty good chance of being novel.
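
A hedged sketch of the loop being described: a design model proposes a revision and an existing simulator scores it, repeating until the design is good enough. Every helper below is a stub with an invented name, standing in for a real chip-design LLM and a real EDA simulator.

import random

def propose_revision(netlist: str, feedback: float) -> str:
    # Stub for the LLM step: in reality this would prompt a model trained
    # on design data with the current netlist and the simulator feedback.
    return netlist + f"  // tweak at score {feedback:.3f}"

def run_simulation(netlist: str) -> float:
    # Stub for the simulator step: score a candidate design.
    return random.random()

def optimize_design(initial: str, target: float, max_iters: int = 20) -> str:
    best, best_score = initial, run_simulation(initial)
    for _ in range(max_iters):
        candidate = propose_revision(best, feedback=best_score)  # model proposes
        score = run_simulation(candidate)                        # simulator verifies
        if score > best_score:
            best, best_score = candidate, score
        if best_score >= target:
            break
    return best

print(optimize_design("module top(); endmodule", target=0.95))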

And who will be the first to leverage this chip AI? China.

The USA is doing its best to freeze China out of semiconductor development, denying access to advanced manufacturing tools, for example. But China is arguably the world’s #2 country for AI research and can use that advantage to make up some of the difference.

Look for fabless AI chip startups to spring up around Chinese universities and for the Chinese Communist Party to put lots of money into this very cost-effective work. Because even if it’s used just to slim down and improve existing designs, that’s another generation of chips China might otherwise not have had at all.

The post AI and Moore’s Law: It’s the Chips, Stupid first appeared on I, Cringely.







  • ✇Ars Technica - All content
  • Will Smith parodies viral AI-generated video by actually eating spaghettiBenj Edwards
    Enlarge / The real Will Smith eating spaghetti, parodying an AI-generated video from 2023. (credit: Will Smith / Getty Images / Benj Edwards) On Monday, Will Smith posted a video on his official Instagram feed that parodied an AI-generated video of the actor eating spaghetti that went viral last year. With the recent announcement of OpenAI's Sora video synthesis model, many people have noted the dramatic jump in AI-video quality over the past year compared to the infamous spa
     

Will Smith parodies viral AI-generated video by actually eating spaghetti

20. Únor 2024 v 15:50
The real Will Smith eating spaghetti, parodying an AI-generated video from 2023.

Enlarge / The real Will Smith eating spaghetti, parodying an AI-generated video from 2023. (credit: Will Smith / Getty Images / Benj Edwards)

On Monday, Will Smith posted a video on his official Instagram feed that parodied an AI-generated video of the actor eating spaghetti that went viral last year. With the recent announcement of OpenAI's Sora video synthesis model, many people have noted the dramatic jump in AI-video quality over the past year compared to the infamous spaghetti video. Smith's new video plays on that comparison by showing the actual actor eating spaghetti in a comical fashion and claiming that it is AI-generated.

Captioned "This is getting out of hand!", the Instagram video uses a split screen layout to show the original AI-generated spaghetti video created by a Reddit user named "chaindrop" in March 2023 on the top, labeled with the subtitle "AI Video 1 year ago." Below that, in a box titled "AI Video Now," the real Smith shows 11 video segments of himself actually eating spaghetti by slurping it up while shaking his head, pouring it into his mouth with his fingers, and even nibbling on a friend's hair. 2006's Snap Yo Fingers by Lil Jon plays in the background.

In the Instagram comments section, some people expressed confusion about the new (non-AI) video, saying, "I'm still in doubt if second video was also made by AI or not." In a reply, someone else wrote, "Boomers are gonna loose [sic] this one. Second one is clearly him making a joke but I wouldn’t doubt it in a couple months time it will get like that."

Read 2 remaining paragraphs | Comments

  • ✇Ars Technica - All content
  • Why The New York Times might win its copyright lawsuit against OpenAITimothy B. Lee
    Enlarge (credit: Aurich Lawson | Getty Images) The day after The New York Times sued OpenAI for copyright infringement, the author and systems architect Daniel Jeffries wrote an essay-length tweet arguing that the Times “has a near zero probability of winning” its lawsuit. As we write this, it has been retweeted 288 times and received 885,000 views. “Trying to get everyone to license training data is not going to work because that's not what copyright is about,” Jeffries wrot
     

Why The New York Times might win its copyright lawsuit against OpenAI

20. Únor 2024 v 15:05
Why The New York Times might win its copyright lawsuit against OpenAI

Enlarge (credit: Aurich Lawson | Getty Images)

The day after The New York Times sued OpenAI for copyright infringement, the author and systems architect Daniel Jeffries wrote an essay-length tweet arguing that the Times “has a near zero probability of winning” its lawsuit. As we write this, it has been retweeted 288 times and received 885,000 views.

“Trying to get everyone to license training data is not going to work because that's not what copyright is about,” Jeffries wrote. “Copyright law is about preventing people from producing exact copies or near exact copies of content and posting it for commercial gain. Period. Anyone who tells you otherwise is lying or simply does not understand how copyright works.”

This article is written by two authors. One of us is a journalist who has been on the copyright beat for nearly 20 years. The other is a law professor who has taught dozens of courses on IP and Internet law. We’re pretty sure we understand how copyright works. And we’re here to warn the AI community that it needs to take these lawsuits seriously.

Read 67 remaining paragraphs | Comments
