
“Something has gone seriously wrong,” dual-boot systems warn after Microsoft update

(Image credit: Getty Images)

Last Tuesday, loads of Linux users—many running packages released as recently as this year—started reporting their devices were failing to boot. Instead, they received a cryptic error message that included the phrase: “Something has gone seriously wrong.”

The cause: an update Microsoft issued as part of its monthly patch release. It was intended to close a 2-year-old vulnerability in GRUB, an open source boot loader used to start up many Linux devices. The vulnerability, tracked as CVE-2022-2601 and carrying a severity rating of 8.6 out of 10, made it possible for hackers to bypass Secure Boot, the industry standard for ensuring that devices running Windows or other operating systems don’t load malicious firmware or software during the bootup process. The flaw was discovered in 2022, but for unclear reasons, Microsoft patched it only last Tuesday.

Multiple distros, both new and old, affected

Tuesday’s update left dual-boot devices—meaning those configured to run both Windows and Linux—no longer able to boot into the latter when Secure Boot was enforced. When users tried to load Linux, they received the message: “Verifying shim SBAT data failed: Security Policy Violation. Something has gone seriously wrong: SBAT self-check failed: Security Policy Violation.” Almost immediately, support and discussion forums lit up with reports of the failure.


Procreate defies AI trend, pledges “no generative AI” in its illustration app

Still of Procreate CEO James Cuda from a video posted to X. (Image credit: Procreate)

On Sunday, Procreate announced that it will not incorporate generative AI into its popular iPad illustration app. The decision comes in response to an ongoing backlash from some parts of the art community, which has raised concerns about the ethical implications and potential consequences of AI use in creative industries.

"Generative AI is ripping the humanity out of things," Procreate wrote on its website. "Built on a foundation of theft, the technology is steering us toward a barren future."

In a video posted on X, Procreate CEO James Cuda laid out his company's stance, saying, "We’re not going to be introducing any generative AI into our products. I don’t like what’s happening to the industry, and I don’t like what it’s doing to artists."


Windows 0-day was exploited by North Korea to install advanced rootkit

(Image credit: Getty Images)

A Windows zero-day vulnerability recently patched by Microsoft was exploited by hackers working on behalf of the North Korean government so they could install custom malware that’s exceptionally stealthy and advanced, researchers reported Monday.

The vulnerability, tracked as CVE-2024-38193, was one of six zero-days—meaning vulnerabilities known or actively exploited before the vendor has a patch—fixed in Microsoft’s monthly update release last Tuesday. Microsoft said the vulnerability—in a class known as a "use after free"—was located in AFD.sys, the binary file for what’s known as the ancillary function driver and the kernel entry point for the Winsock API. Microsoft warned that the zero-day could be exploited to give attackers SYSTEM privileges, the highest level of rights available in Windows and a prerequisite for executing the attackers’ untrusted code.

Lazarus gets access to the Windows kernel

Microsoft warned at the time that the vulnerability was being actively exploited but provided no details about who was behind the attacks or what their ultimate objective was. On Monday, researchers with Gen—the security firm that discovered the attacks and reported them privately to Microsoft—said the threat actors were part of Lazarus, the name researchers use to track a hacking outfit backed by the North Korean government.


Mac and Windows users infected by software updates delivered over hacked ISP

(Image credit: Marco Verch Professional Photographer and Speaker)

Hackers delivered malware to Windows and Mac users by compromising their Internet service provider and then tampering with software updates delivered over insecure connections, researchers said.

The attack, researchers from security firm Volexity said, worked by hacking routers or similar device infrastructure of an unnamed ISP. The attackers then used their control of the devices to poison domain name system responses for legitimate hostnames providing updates for at least six different apps written for Windows or macOS. The apps affected were 5KPlayer, Quick Heal, Rainmeter, Partition Wizard, and those from Corel and Sogou.

These aren’t the update servers you’re looking for

Because the update mechanisms didn’t use TLS or cryptographic signatures to authenticate the connections or downloaded software, the threat actors were able to use their control of the ISP infrastructure to successfully perform machine-in-the-middle (MitM) attacks that directed targeted users to hostile servers rather than the ones operated by the affected software makers. These redirections worked even when users employed non-encrypted public DNS services such as Google’s 8.8.8.8 or Cloudflare’s 1.1.1.1 rather than the authoritative DNS server provided by the ISP.
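
None of this would work against an update channel with end-to-end integrity checks. As a rough illustration of the missing control (not any of these vendors' actual mechanisms), here is a minimal Python sketch of a downloader that pins the expected SHA-256 digest of an update; the URL and digest are hypothetical placeholders:

```python
import hashlib
import urllib.request

# Hypothetical values for illustration only; a real updater would ship the
# expected digest (or, better, a signature and public key) with the client.
UPDATE_URL = "http://updates.example.com/app-1.2.3.bin"
EXPECTED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

def fetch_update(url: str, expected_sha256: str) -> bytes:
    """Download an update and refuse it unless its digest matches a pinned value."""
    data = urllib.request.urlopen(url).read()
    digest = hashlib.sha256(data).hexdigest()
    if digest != expected_sha256:
        # A machine-in-the-middle that swaps the payload (or a poisoned DNS
        # answer pointing at a hostile server) no longer goes unnoticed.
        raise ValueError(f"update rejected: got digest {digest}")
    return data
```

Even this is weaker than the full fix of TLS for the transport plus cryptographic signatures on the payload, because the pinned digest itself has to reach the client over a trusted channel.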


CrowdStrike claps back at Delta, says airline rejected offers for help

Travelers from France wait on their delayed flight on the check-in floor of the Delta Air Lines terminal at Los Angeles International Airport (LAX) on July 23, 2024. (Image credit: Mario Tama/Getty Images)

CrowdStrike has hit back at Delta Air Lines’ threat of litigation against the cybersecurity company over a botched software update that grounded thousands of flights, denying it was responsible for the carrier’s own IT decisions and days-long disruption.

In a letter on Sunday, lawyers for CrowdStrike argued that the US carrier had created a “misleading narrative” that the cybersecurity firm was “grossly negligent” in an incident that the airline has said will cost it $500 million.

Delta took days longer than its rivals to recover when CrowdStrike’s update brought down millions of Windows computers around the world last month. The airline has notified the cybersecurity company that it plans to seek damages for the disruptions and has hired litigation firm Boies Schiller Flexner.


FLUX: This new AI image generator is eerily good at creating human hands

AI-generated image by FLUX.1 dev: "A beautiful queen of the universe holding up her hands, face in the background." (Image credit: FLUX.1)

On Thursday, AI startup Black Forest Labs announced the launch of its company and the release of its first suite of text-to-image AI models, called FLUX.1. The Germany-based company, founded by researchers who developed the technology behind Stable Diffusion and invented the latent diffusion technique, aims to create advanced generative AI for images and videos.

The launch of FLUX.1 comes about seven weeks after Stability AI's troubled release of Stable Diffusion 3 Medium in mid-June. Stability AI's offering faced widespread criticism among image-synthesis hobbyists for its poor performance in generating human anatomy, with users sharing examples of distorted limbs and bodies across social media. That problematic launch followed the earlier departure of three key engineers from Stability AI—Robin Rombach, Andreas Blattmann, and Dominik Lorenz—who went on to found Black Forest Labs along with latent diffusion co-developer Patrick Esser and others.

Black Forest Labs launched with the release of three FLUX.1 text-to-image models: a high-end commercial "pro" version, a mid-range "dev" version with open weights for non-commercial use, and a faster open-weights "schnell" version ("schnell" means quick or fast in German). Black Forest Labs claims its models outperform existing options like Midjourney and DALL-E in areas such as image quality and adherence to text prompts.
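
The open-weights variants can be tried locally. A minimal sketch using Hugging Face's diffusers library, assuming a diffusers release with FluxPipeline support, access to the black-forest-labs/FLUX.1-schnell weights, and a GPU with enough memory:

```python
import torch
from diffusers import FluxPipeline  # assumes a diffusers release with FLUX support

# "schnell" is the faster open-weights variant, distilled for few-step sampling.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # lowers VRAM requirements at some speed cost

image = pipe(
    "A beautiful queen of the universe holding up her hands",
    num_inference_steps=4,  # schnell targets very few denoising steps
    guidance_scale=0.0,     # the distilled variant runs without classifier-free guidance
).images[0]
image.save("flux-schnell.png")
```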


Anthropic introduces Claude 3.5 Sonnet, matching GPT-4o on benchmarks

The Anthropic Claude 3 logo, jazzed up by Benj Edwards. (Image credit: Anthropic / Benj Edwards)

On Thursday, Anthropic announced Claude 3.5 Sonnet, its latest AI language model and the first in a new series of "3.5" models that build upon Claude 3, launched in March. Claude 3.5 Sonnet can compose text, analyze data, and write code. It features a 200,000-token context window and is available now on the Claude website and through an API. Anthropic also introduced Artifacts, a new feature in the Claude interface that shows related work documents in a dedicated window.
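
For the API route, a minimal sketch using Anthropic's official Python SDK; it assumes the anthropic package is installed, an ANTHROPIC_API_KEY environment variable is set, and uses the model ID Anthropic published at launch:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Summarize what a 200,000-token context window allows."},
    ],
)
print(message.content[0].text)
```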

So far, people outside of Anthropic seem impressed. "This model is really, really good," wrote independent AI researcher Simon Willison on X. "I think this is the new best overall model (and both faster and half the price of Opus, similar to the GPT-4 Turbo to GPT-4o jump)."

As we've written before, benchmarks for large language models (LLMs) are troublesome because they can be cherry-picked and often do not capture the feel and nuance of using a machine to generate outputs on almost any conceivable topic. But according to Anthropic, Claude 3.5 Sonnet matches or outperforms competitor models like GPT-4o and Gemini 1.5 Pro on certain benchmarks like MMLU (undergraduate level knowledge), GSM8K (grade school math), and HumanEval (coding).


Hackers steal “significant volume” of data from hundreds of Snowflake customers

(Image credit: Getty Images)

As many as 165 customers of cloud storage provider Snowflake have been compromised by a group that obtained login credentials through information-stealing malware, researchers said Monday.

On Friday, LendingTree subsidiary QuoteWizard confirmed it was among the customers notified by Snowflake that it was affected in the incident. LendingTree spokesperson Megan Greuling said the company is in the process of determining whether data stored on Snowflake has been stolen.

“That investigation is ongoing,” she wrote in an email. “As of this time, it does not appear that consumer financial account information was impacted, nor information of the parent entity, Lending Tree.”


Apple unveils “Apple Intelligence” AI features for iOS, iPadOS, and macOS

(Image credit: Apple)

On Monday, Apple debuted "Apple Intelligence," a new suite of free AI-powered features for iOS 18, iPadOS 18, and macOS Sequoia that includes creating email summaries, generating images and emoji, and allowing Siri to take actions on your behalf. These features are achieved through a combination of on-device and cloud processing, with a strong emphasis on privacy. Apple says that Apple Intelligence features will be widely available later this year, with a beta test for developers arriving this summer.

The announcements came during a livestream WWDC keynote and a simultaneous event attended by the press on Apple's campus in Cupertino, California. In an introduction, Apple CEO Tim Cook said the company has been using machine learning for years, but the introduction of large language models (LLMs) presents new opportunities to elevate the capabilities of Apple products. He emphasized the need for both personalization and privacy in Apple's approach.

At last year's WWDC, Apple avoided using the term "AI" completely, instead preferring terms like "machine learning" as Apple's way of avoiding buzzy hype while integrating applications of AI into apps in useful ways. This year, Apple figured out a new way to largely avoid the abbreviation "AI" by coining "Apple Intelligence," a catchall branding term that refers to a broad group of machine learning, LLM, and image generation technologies. By our count, the term "AI" was used sparingly in the keynote—most notably near the end of the presentation when Apple executive Craig Federighi said, "It's AI for the rest of us."


Journalists “deeply troubled” by OpenAI’s content deals with Vox, The Atlantic

A man covered in newspaper. (Image credit: Getty Images)

On Wednesday, Axios broke the news that OpenAI had signed deals with The Atlantic and Vox Media that will allow the ChatGPT maker to license their editorial content to further train its language models. But some of the publications' writers—and the unions that represent them—were surprised by the announcements and aren't happy about it. Already, two unions have released statements expressing "alarm" and "concern."

"The unionized members of The Atlantic Editorial and Business and Technology units are deeply troubled by the opaque agreement The Atlantic has made with OpenAI," reads a statement from the Atlantic union. "And especially by management's complete lack of transparency about what the agreement entails and how it will affect our work."

The Vox Union—which represents The Verge, SB Nation, and Vulture, among other publications—reacted in similar fashion, writing in a statement, "Today, members of the Vox Media Union ... were informed without warning that Vox Media entered into a 'strategic content and product partnership' with OpenAI. As both journalists and workers, we have serious concerns about this partnership, which we believe could adversely impact members of our union, not to mention the well-documented ethical and environmental concerns surrounding the use of generative AI."


Google’s AI Overview is flawed by design, and a new company blog post hints at why

The Google "G" logo surrounded by whimsical characters, all of which look stunned and surprised. (Image credit: Google)

On Thursday, Google capped off a rough week of providing inaccurate and sometimes dangerous answers through its experimental AI Overview feature by authoring a follow-up blog post titled "AI Overviews: About last week." In the post, attributed to Liz Reid, Google VP and head of Search, the company formally acknowledged issues with the feature and outlined steps taken to improve a system that appears flawed by design, even if the company doesn't seem to realize it's admitting as much.

To recap, the AI Overview feature—which the company showed off at Google I/O a few weeks ago—aims to provide search users with summarized answers to questions by using an AI model integrated with Google's web ranking systems. Right now, it's an experimental feature that is not active for everyone, but when a participating user searches for a topic, they might see an AI-generated answer at the top of the results, pulled from highly ranked web content and summarized by an AI model.

While Google claims this approach is "highly effective" and on par with its Featured Snippets in terms of accuracy, the past week has seen numerous examples of the AI system generating bizarre, incorrect, or even potentially harmful responses, as we detailed in a recent feature where Ars reporter Kyle Orland replicated many of the unusual outputs.


Federal agency warns critical Linux vulnerability being actively exploited

(Image credit: Getty Images)

The US Cybersecurity and Infrastructure Security Agency has added a critical security bug in Linux to its list of vulnerabilities known to be actively exploited in the wild.

The vulnerability, tracked as CVE-2024-1086 and carrying a severity rating of 7.8 out of a possible 10, allows people who have already gained a foothold inside an affected system to escalate their system privileges. It’s the result of a use-after-free error, a class of vulnerability that occurs in software written in the C and C++ languages when a process continues to access a memory location after it has been freed or deallocated. Use-after-free vulnerabilities can result in remote code execution or privilege escalation.

The vulnerability, which affects Linux kernel versions 5.14 through 6.6, resides in nf_tables, a kernel component that enables the Netfilter framework, which in turn facilitates a variety of network operations, including packet filtering, network address [and port] translation (NA[P]T), packet logging, userspace packet queueing, and other packet mangling. It was patched in January, but as the CISA advisory indicates, some production systems have yet to install it. At the time this Ars post went live, there were no known details about the active exploitation.
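
As a rough triage step, administrators can check whether a machine's kernel version falls in the affected span. A minimal sketch (version check only: distributions backport fixes without bumping version strings, so a match means "consult your vendor's advisory," not "definitely vulnerable"):

```python
import platform

AFFECTED = ((5, 14), (6, 6))  # inclusive major.minor range from the advisory

def kernel_in_affected_range(release=None):
    """Return True if the running kernel's major.minor falls within 5.14-6.6."""
    release = release or platform.release()  # e.g. "6.5.0-27-generic"
    major, minor = (int(part) for part in release.split(".")[:2])
    return AFFECTED[0] <= (major, minor) <= AFFECTED[1]

print(kernel_in_affected_range())
```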


Tech giants form AI group to counter Nvidia with new interconnect standard

Abstract image of data center with flowchart. (Image credit: Getty Images)

On Thursday, several major tech companies, including Google, Intel, Microsoft, Meta, AMD, Hewlett Packard Enterprise, Cisco, and Broadcom, announced the formation of the Ultra Accelerator Link (UALink) Promoter Group to develop a new interconnect standard for AI accelerator chips in data centers. The group aims to create an alternative to Nvidia's proprietary NVLink interconnect technology, which links together multiple servers that power today's AI applications like ChatGPT.

The beating heart of AI these days lies in GPUs, which can perform massive numbers of matrix multiplications—necessary for running neural network architectures—in parallel. But one GPU often isn't enough for complex AI systems. NVLink can connect multiple AI accelerator chips within a server or across multiple servers. These interconnects enable faster data transfer and communication between the accelerators, allowing them to work together more efficiently on complex tasks like training large AI models.

This linkage is a key part of any modern AI data center system, and whoever controls the link standard can effectively dictate which hardware the tech companies will use. Along those lines, the UALink group seeks to establish an open standard that allows multiple companies to contribute and develop AI hardware advancements instead of being locked into Nvidia's proprietary ecosystem. This approach is similar to other open standards, such as Compute Express Link (CXL)—created by Intel in 2019—which provides high-speed, high-capacity connections between CPUs and devices or memory in data centers.


Financial institutions have 30 days to disclose breaches under new rules

(Image credit: Brendan Smialowski / Getty Images)

The Securities and Exchange Commission (SEC) will require some financial institutions to disclose security breaches within 30 days of learning about them.

On Wednesday, the SEC adopted changes to Regulation S-P, which governs the treatment of the personal information of consumers. Under the amendments, institutions must notify individuals whose personal information was compromised “as soon as practicable, but not later than 30 days” after learning of unauthorized network access or use of customer data. The new requirements will be binding on broker-dealers (including funding portals), investment companies, registered investment advisers, and transfer agents.

"Over the last 24 years, the nature, scale, and impact of data breaches has transformed substantially," SEC Chair Gary Gensler said. "These amendments to Regulation S-P will make critical updates to a rule first adopted in 2000 and help protect the privacy of customers’ financial data. The basic idea for covered firms is if you’ve got a breach, then you’ve got to notify. That’s good for investors."


Arizona woman accused of helping North Koreans get remote IT jobs at 300 companies

Illustration of a judge's gavel on a digital background resembling a computer circuit board. (Image credit: Getty Images | the-lightwriter)

An Arizona woman has been accused of generating millions of dollars for North Korea’s ballistic missile program by helping citizens of that country land IT jobs at US-based Fortune 500 companies.

Christina Marie Chapman, 49, of Litchfield Park, Arizona, raised $6.8 million in the scheme, federal prosecutors said in an indictment unsealed Thursday. Chapman allegedly funneled the money to North Korea’s Munitions Industry Department, which is involved in key aspects of North Korea’s weapons program, including its development of ballistic missiles.

Part of the alleged scheme involved Chapman and co-conspirators compromising the identities of more than 60 people living in the US and using their personal information to get North Koreans IT jobs across more than 300 US companies.


Google patches its fifth zero-day vulnerability of the year in Chrome

Extreme close-up photograph of a finger above the Chrome icon on a smartphone. (Image credit: Getty Images)

Google has updated its Chrome browser to patch a high-severity zero-day vulnerability that allows attackers to execute malicious code on end user devices. The fix marks the fifth time this year the company has updated the browser to protect users from an existing malicious exploit.

The vulnerability, tracked as CVE-2024-4671, is a “use after free,” a class of bug that occurs in C-based programming languages. In these languages, developers must allocate memory space needed to run certain applications or operations. They do this by using “pointers” that store the memory addresses where the required data will reside. Because this space is finite, memory locations should be deallocated once the application or operation no longer needs them.

Use-after-free bugs occur when the app or process fails to clear the pointer after freeing the memory location. In some cases, the pointer to the freed memory is used again and points to a new memory location storing malicious shellcode planted by an attacker’s exploit, a condition that will result in the execution of this code.


Maximum-severity GitLab flaw allowing account hijacking under active exploitation

A maximum-severity vulnerability that allows hackers to hijack GitLab accounts with no user interaction required is now under active exploitation, federal government officials warned, as data showed that thousands of users had yet to install a patch released in January.

A change GitLab implemented in May 2023 made it possible for users to initiate password changes through links sent to secondary email addresses. The move was designed to permit resets when users didn’t have access to the email address used to establish the account. In January, GitLab disclosed that the feature allowed attackers to send reset emails to accounts they controlled and from there click on the embedded link and take over the account.

While exploits require no user interaction, hijackings work only against accounts that aren’t configured to use multifactor authentication. Even with MFA enabled, attackers could still trigger password resets, but they were ultimately unable to access the account, leaving the rightful owner able to change the reset password. The vulnerability, tracked as CVE-2023-7028, carries a severity rating of 10 out of 10.
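
GitLab's actual patch isn't shown here, but the bug class is easy to illustrate. A hypothetical sketch contrasting the vulnerable pattern with a stricter one; all names are invented for illustration:

```python
from dataclasses import dataclass, field
import secrets

@dataclass
class Account:
    username: str
    verified_emails: set = field(default_factory=set)

def make_reset_token() -> str:
    return secrets.token_urlsafe(32)  # single-use; a real system stores and expires it

def send_reset_flawed(account: Account, requested_email: str, send) -> None:
    # Vulnerable pattern: trust whatever address the reset request names.
    send(requested_email, make_reset_token())

def send_reset_strict(account: Account, requested_email: str, send) -> None:
    # Safer pattern: links go only to addresses already verified for this account.
    if requested_email not in account.verified_emails:
        return  # drop silently; don't reveal which addresses exist
    send(requested_email, make_reset_token())
```

MFA then acts as a second gate: even if a reset link is misdirected, an attacker without the second factor still can't finish logging in, which matches the behavior described above.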


Matrix multiplication breakthrough could lead to faster, more efficient AI models

When you do math on a computer, you fly through a numerical tunnel like this—figuratively, of course. (Image credit: Getty Images)

Computer scientists have discovered a new way to multiply large matrices faster than ever before by eliminating a previously unknown inefficiency, reports Quanta Magazine. This could eventually accelerate AI models like ChatGPT, which rely heavily on matrix multiplication to function. The findings, presented in two recent papers, have led to what is reported to be the biggest improvement in matrix multiplication efficiency in over a decade.

Multiplying two rectangular number arrays, known as matrix multiplication, plays a crucial role in today's AI models, including speech and image recognition, chatbots from every major vendor, AI image generators, and video synthesis models like Sora. Beyond AI, matrix math is so important to modern computing (think image processing and data compression) that even slight gains in efficiency could lead to computational and power savings.

Graphics processing units (GPUs) excel in handling matrix multiplication tasks because of their ability to process many calculations at once. They break large matrix problems into smaller tiles and solve those pieces concurrently.
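
For a sense of the baseline the theoretical work improves on: the textbook algorithm below performs roughly n³ scalar multiplications for n×n inputs. A minimal Python sketch; optimized libraries get their practical speed from tiling and parallelism rather than a lower exponent:

```python
import numpy as np

def matmul_naive(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Textbook matrix multiply: n * m * k scalar multiplications (O(n^3) when square)."""
    n, k = a.shape
    k2, m = b.shape
    assert k == k2, "inner dimensions must match"
    out = np.zeros((n, m), dtype=a.dtype)
    for i in range(n):
        for j in range(m):
            for p in range(k):
                out[i, j] += a[i, p] * b[p, j]
    return out

a, b = np.random.rand(32, 32), np.random.rand(32, 32)
assert np.allclose(matmul_naive(a, b), a @ b)  # "@" dispatches to optimized BLAS
```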


Microsoft says Kremlin-backed hackers accessed its source and internal systems

(Image credit: Getty Images)

Microsoft said that Kremlin-backed hackers who breached its corporate network in January have expanded their access since then in follow-on attacks that are targeting customers and have compromised the company's source code and internal systems.

The intrusion, which the software company disclosed in January, was carried out by Midnight Blizzard, the name used to track a hacking group widely attributed to the Federal Security Service, a Russian intelligence agency. Microsoft said at the time that Midnight Blizzard gained access to senior executives’ email accounts for months after first exploiting a weak password in a test device connected to the company’s network. Microsoft went on to say it had no indication any of its source code or production systems had been compromised.

Secrets sent in email

In an update published Friday, Microsoft said it uncovered evidence that Midnight Blizzard had used the information it gained initially to further push into its network and compromise both source code and internal systems. The hacking group—which is tracked under multiple other names, including APT29, Cozy Bear, CozyDuke, The Dukes, Dark Halo, and Nobelium—has been using the proprietary information in follow-on attacks, not only against Microsoft but also its customers.


Attack wrangles thousands of web users into a password-cracking botnet

(Image credit: Getty Images)

Attackers have transformed hundreds of hacked sites running WordPress software into command-and-control servers that force visitors’ browsers to perform password-cracking attacks.

A web search for the JavaScript that performs the attack showed it was hosted on 708 sites at the time this post went live on Ars, up from 500 two days earlier. Denis Sinegubko, the researcher who spotted the campaign, said at the time that he had seen thousands of visitor computers running the script, which caused them to reach out to thousands of domains in an attempt to guess the passwords of accounts on those sites.

Visitors unwittingly recruited

“This is how thousands of visitors across hundreds of infected websites unknowingly and simultaneously try to bruteforce thousands of other third-party WordPress sites,” Sinegubko wrote. “And since the requests come from the browsers of real visitors, you can imagine this is a challenge to filter and block such requests.”


US prescription market hamstrung for 9 days (so far) by ransomware attack

(Image credit: Getty Images)

Nine days after a Russian-speaking ransomware syndicate took down the biggest US health care payment processor, pharmacies, health care providers, and patients were still scrambling to fill prescriptions for medicines, many of which are lifesaving.

On Thursday, UnitedHealth Group accused a notorious ransomware gang known both as AlphV and Black Cat of hacking its subsidiary Optum. Optum provides a nationwide network called Change Healthcare, which allows health care providers to manage customer payments and insurance claims. With no easy way for pharmacies to calculate what costs were covered by insurance companies, many had to turn to alternative services or offline methods.

The most serious incident of its kind

Optum first disclosed on February 21 that its services were down as a result of a “cyber security issue.” Its service has been hamstrung ever since. Shortly before this post went live on Ars, Optum said it had restored Change Healthcare services.


Hugging Face, the GitHub of AI, hosted code that backdoored user devices

Photograph depicts a security scanner extracting a virus from a string of binary code. (Image credit: Getty Images)

Code uploaded to AI developer platform Hugging Face covertly installed backdoors and other types of malware on end-user machines, researchers from security firm JFrog said Thursday in a report that’s a likely harbinger of what’s to come.

In all, JFrog researchers said, they found roughly 100 submissions that performed hidden and unwanted actions when they were downloaded and loaded onto an end-user device. Most of the flagged machine learning models—all of which went undetected by Hugging Face—appeared to be benign proofs of concept uploaded by researchers or curious users. JFrog researchers said in an email that 10 of them were “truly malicious” in that they performed actions that actually compromised the users’ security when loaded.

Full control of user devices

One model drew particular concern because it opened a reverse shell that gave a remote device on the Internet full control of the end user’s device. When JFrog researchers loaded the model into a lab machine, the submission indeed loaded a reverse shell but took no further action.
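
The mechanics are worth spelling out: many model checkpoints are Python pickles, and unpickling can execute arbitrary code. A deliberately harmless sketch of the technique (a malicious file would name os.system or a reverse-shell payload instead of print):

```python
import pickle

class NotReallyAModel:
    # The unpickler calls __reduce__ to rebuild the object, so a crafted
    # file can name any callable plus its arguments.
    def __reduce__(self):
        return (print, ("this ran simply because the file was loaded",))

blob = pickle.dumps(NotReallyAModel())
pickle.loads(blob)  # prints the message; no method was ever called explicitly
```

This is one reason code-free weight formats such as safetensors are increasingly preferred for model distribution.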


HP wants you to pay up to $36/month to rent a printer that it monitors

The HP Envy 6020e is one of the printers available for rent. (Image credit: HP)

HP launched a subscription service today that rents people a printer, allots them a specific amount of printed pages, and sends them ink for a monthly fee. HP is framing its service as a way to simplify printing for families and small businesses, but the deal also comes with monitoring and a years-long commitment.

Prices start at $6.99 per month for a plan that includes an HP Envy printer (the current model is the 6020e) and 20 printed pages. The priciest plan includes an HP OfficeJet Pro rental and 700 printed pages for $35.99 per month.

HP says it will provide subscribers with ink deliveries when they're running low and 24/7 support via phone or chat (although it's dubious how much you'd want to rely on HP support). Support doesn't include on-site or off-site repairs or part replacements. The subscription's terms of service (TOS) note that the service doesn't cover damage or failure caused by, unsurprisingly, "use of non-HP media supplies and other products," or by using your printer more than your plan calls for.


AI-generated articles prompt Wikipedia to downgrade CNET’s reliability rating

The CNET logo on a smartphone screen. (Image credit: Jaap Arriens/NurPhoto/Getty Images)

Wikipedia has downgraded tech website CNET's reliability rating following extensive discussions among its editors regarding the impact of AI-generated content on the site's trustworthiness, as noted in a detailed report from Futurism. The decision reflects concerns over the reliability of articles found on the tech news outlet after it began publishing AI-generated stories in 2022.

Around November 2022, CNET began publishing articles written by an AI model under the byline "CNET Money Staff." In January 2023, Futurism brought widespread attention to the issue and discovered that the articles were full of plagiarism and mistakes. (Around that time, we covered plans to do similar automated publishing at BuzzFeed.) After the revelation, CNET management paused the experiment, but the reputational damage had already been done.

Wikipedia maintains a page called "Reliable sources/Perennial sources" that includes a chart featuring news publications and their reliability ratings as viewed from Wikipedia's perspective. Shortly after the CNET news broke in January 2023, Wikipedia editors began a discussion thread on the Reliable Sources project page about the publication.


iMessage gets a major makeover that puts it on equal footing with Signal

Stylized illustration of a key. (Image credit: Getty Images)

iMessage is getting a major makeover that makes it one of the two messaging platforms best prepared to withstand the coming advent of quantum computing, largely at parity with Signal, or arguably incrementally more hardened.

On Wednesday, Apple said messages sent through iMessage will now be protected by two forms of end-to-end encryption (E2EE), whereas before, it had only one. The encryption being added, known as PQ3, is an implementation of a new algorithm called Kyber that, unlike the algorithms iMessage has used until now, can’t be broken with quantum computing. Apple isn’t replacing the older quantum-vulnerable algorithm with PQ3—it's augmenting it. That means, for the encryption to be broken, an attacker will have to crack both.
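
Apple hasn't published PQ3 as code, but the hybrid principle is straightforward: derive the session key from both shared secrets so that breaking either algorithm alone yields nothing. A minimal, illustrative HKDF-style sketch with SHA-256 (real protocols use vetted constructions with proper labels and transcripts):

```python
import hashlib
import hmac

def hybrid_key(classical_secret: bytes, pq_secret: bytes, context: bytes) -> bytes:
    """Derive one 32-byte session key from two independent shared secrets.

    The key stays safe as long as either input secret does: ECDH against
    classical attackers, Kyber against a future quantum attacker.
    """
    ikm = classical_secret + pq_secret
    prk = hmac.new(b"hybrid-kdf-salt", ikm, hashlib.sha256).digest()   # HKDF-extract
    return hmac.new(prk, context + b"\x01", hashlib.sha256).digest()  # HKDF-expand (1 block)

key = hybrid_key(b"\x11" * 32, b"\x22" * 32, b"imessage-session-0")
```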

Making E2EE future-safe

The iMessage changes come five months after the Signal Foundation, maker of the Signal Protocol that encrypts messages sent by more than a billion people, updated the open standard so that it, too, is ready for post-quantum computing (PQC). Just like Apple, Signal added Kyber to X3DH, the algorithm it was using previously. Together, they’re known as PQXDH.


Google goes “open AI” with Gemma, a free, open-weights chatbot family

The Google Gemma logo. (Image credit: Google)

On Wednesday, Google announced a new family of AI language models called Gemma, which are free, open-weights models built on technology similar to the more powerful but closed Gemini models. Unlike Gemini, Gemma models can run locally on a desktop or laptop computer. It's Google's first significant open large language model (LLM) release since OpenAI's ChatGPT started a frenzy for AI chatbots in 2022.

Gemma models come in two sizes: Gemma 2B (2 billion parameters) and Gemma 7B (7 billion parameters), each available in pre-trained and instruction-tuned variants. In AI, parameters are values in a neural network that determine AI model behavior, and weights are a subset of these parameters stored in a file.
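
Because the weights are open, the models load like any other Hugging Face checkpoint. A minimal sketch using the transformers library, assuming you've accepted Gemma's license on Hugging Face and authenticated your environment:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b-it"  # instruction-tuned 2B checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("Why is the sky blue?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```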

Developed by Google DeepMind and other Google AI teams, Gemma pulls from techniques learned during the development of Gemini, which is the family name for Google's most capable (public-facing) commercial LLMs, including the ones that power its Gemini AI assistant. Google says the name comes from the Latin gemma, which means "precious stone."


ChatGPT goes temporarily “insane” with unexpected outputs, spooking users

Illustration of a broken toy robot. (Image credit: Benj Edwards / Getty Images)

On Tuesday, ChatGPT users began reporting unexpected outputs from OpenAI's AI assistant, flooding the r/ChatGPT Reddit sub with reports of the AI assistant "having a stroke," "going insane," "rambling," and "losing it." OpenAI has acknowledged the problem and is working on a fix, but the experience serves as a high-profile example of how some people perceive malfunctioning large language models, which are designed to mimic humanlike output.

ChatGPT is not alive and does not have a mind to lose, but tugging on human metaphors (called "anthropomorphization") seems to be the easiest way for most people to describe the unexpected outputs they have been seeing from the AI model. They're forced to use those terms because OpenAI doesn't share exactly how ChatGPT works under the hood; the underlying large language models function like a black box.

"It gave me the exact same feeling—like watching someone slowly lose their mind either from psychosis or dementia," wrote a Reddit user named z3ldafitzgerald in response to a post about ChatGPT bugging out. "It’s the first time anything AI related sincerely gave me the creeps."


After years of losing, it’s finally feds’ turn to troll ransomware group

(Image credit: Getty Images)

After years of being outmaneuvered by snarky ransomware criminals who tease and brag about each new victim they claim, international authorities finally got their chance to turn the tables, and they aren't squandering it.

The top-notch trolling came after authorities from the US, UK, and Europol took down most of the infrastructure belonging to LockBit, a ransomware syndicate that has extorted more than $120 million from thousands of victims around the world. On Tuesday, most of the sites LockBit uses to shame its victims for being hacked, pressure them into paying, and brag of their hacking prowess began displaying content announcing the takedown. The seized infrastructure also hosted decryptors victims could use to recover their data.

this_is_really_bad

Authorities didn’t use the seized name-and-shame site solely for informational purposes. One section that appeared prominently gloated over the extraordinary extent of the system access investigators gained. Several images indicated they had control of /etc/shadow, a Linux file that stores cryptographically hashed passwords. This file, among the most security-sensitive ones in Linux, can be accessed only by a user with root, the highest level of system privileges.
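
For context, each /etc/shadow line is nine colon-separated fields, with the crypt(3) password hash second. A short sketch that parses an invented entry (the salt and hash below are placeholders, not real values):

```python
# Hypothetical entry; the second field is the crypt(3) hash, where "$6$" marks SHA-512.
ENTRY = "root:$6$examplesalt$notarealhash:19750:0:99999:7:::"

FIELDS = [
    "login", "password_hash", "last_change_days", "min_days",
    "max_days", "warn_days", "inactive_days", "expire_date", "reserved",
]

record = dict(zip(FIELDS, ENTRY.split(":")))
scheme = record["password_hash"].split("$")[1]  # "6" identifies SHA-512 crypt
print(record["login"], scheme)
```

Anyone holding those hashes can mount offline guessing attacks against them, which is why the file is readable only by root, and why screenshots of it were such a pointed boast.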


Will Smith parodies viral AI-generated video by actually eating spaghetti

The real Will Smith eating spaghetti, parodying an AI-generated video from 2023. (Image credit: Will Smith / Getty Images / Benj Edwards)

On Monday, Will Smith posted a video on his official Instagram feed that parodied an AI-generated video of the actor eating spaghetti that went viral last year. With the recent announcement of OpenAI's Sora video synthesis model, many people have noted the dramatic jump in AI-video quality over the past year compared to the infamous spaghetti video. Smith's new video plays on that comparison by showing the actual actor eating spaghetti in a comical fashion and claiming that it is AI-generated.

Captioned "This is getting out of hand!", the Instagram video uses a split screen layout to show the original AI-generated spaghetti video created by a Reddit user named "chaindrop" in March 2023 on the top, labeled with the subtitle "AI Video 1 year ago." Below that, in a box titled "AI Video Now," the real Smith shows 11 video segments of himself actually eating spaghetti by slurping it up while shaking his head, pouring it into his mouth with his fingers, and even nibbling on a friend's hair. 2006's Snap Yo Fingers by Lil Jon plays in the background.

In the Instagram comments section, some people expressed confusion about the new (non-AI) video, saying, "I'm still in doubt if second video was also made by AI or not." In a reply, someone else wrote, "Boomers are gonna loose [sic] this one. Second one is clearly him making a joke but I wouldn’t doubt it in a couple months time it will get like that."


LockBit ransomware group taken down in multinational operation

A ransom message on a monochrome computer screen. (Image credit: Rob Engelaar | Getty Images)

Law enforcement agencies including the FBI and the UK’s National Crime Agency have dealt a crippling blow to LockBit, one of the world’s most prolific cybercrime gangs, whose victims include Royal Mail and Boeing.

The 11 international agencies behind “Operation Cronos” said on Tuesday that the ransomware group—many of whose members are based in Russia—had been “locked out” of its own systems. Several of the group’s key members have been arrested, indicted, or identified and its core technology seized, including hacking tools and its “dark web” homepage.

Graeme Biggar, NCA director-general, said law enforcement officers had “successfully infiltrated and fundamentally disrupted LockBit.”


Reddit sells training data to unnamed AI company ahead of IPO

(Image credit: Reddit)

On Friday, Bloomberg reported that Reddit has signed a contract allowing an unnamed AI company to train its models on the site's content, according to people familiar with the matter. The move comes as the social media platform nears the introduction of its initial public offering (IPO), which could happen as soon as next month.

Reddit initially revealed the deal, which is reported to be worth $60 million a year, earlier in 2024 to potential investors of an anticipated IPO, Bloomberg said. The Bloomberg source speculates that the contract could serve as a model for future agreements with other AI companies.

After an era in which AI companies used training data without expressly seeking rightsholders' permission, some tech firms have more recently begun striking deals that put content used to train AI models similar to GPT-4 (which runs the paid version of ChatGPT) under license. In December, for example, OpenAI signed an agreement with German publisher Axel Springer (publisher of Politico and Business Insider) for access to its articles. Previously, OpenAI struck deals with other organizations, including the Associated Press, and it is reportedly in licensing talks with CNN, Fox, and Time, among others.

