
Procreate defies AI trend, pledges “no generative AI” in its illustration app

By Benj Edwards, Ars Technica
20 August 2024, 18:52
Still of Procreate CEO James Cuda from a video posted to X. (credit: Procreate)

On Sunday, Procreate announced that it will not incorporate generative AI into its popular iPad illustration app. The decision comes in response to an ongoing backlash from some parts of the art community, which has raised concerns about the ethical implications and potential consequences of AI use in creative industries.

"Generative AI is ripping the humanity out of things," Procreate wrote on its website. "Built on a foundation of theft, the technology is steering us toward a barren future."

In a video posted on X, Procreate CEO James Cuda laid out his company's stance, saying, "We’re not going to be introducing any generative AI into our products. I don’t like what’s happening to the industry, and I don’t like what it’s doing to artists."



FLUX: This new AI image generator is eerily good at creating human hands

By Benj Edwards, Ars Technica
2 August 2024, 19:47
AI-generated image by FLUX.1 dev: "A beautiful queen of the universe holding up her hands, face in the background." (credit: FLUX.1)

On Thursday, AI-startup Black Forest Labs announced the launch of its company and the release of its first suite of text-to-image AI models, called FLUX.1. The German-based company, founded by researchers who developed the technology behind Stable Diffusion and invented the latent diffusion technique, aims to create advanced generative AI for images and videos.

The launch of FLUX.1 comes about seven weeks after Stability AI's troubled release of Stable Diffusion 3 Medium in mid-June. Stability AI's offering faced widespread criticism among image-synthesis hobbyists for its poor performance in generating human anatomy, with users sharing examples of distorted limbs and bodies across social media. That problematic launch followed the earlier departure of three key engineers from Stability AI—Robin Rombach, Andreas Blattmann, and Dominik Lorenz—who went on to found Black Forest Labs along with latent diffusion co-developer Patrick Esser and others.

Black Forest Labs launched with the release of three FLUX.1 text-to-image models: a high-end commercial "pro" version, a mid-range "dev" version with open weights for non-commercial use, and a faster open-weights "schnell" version ("schnell" means quick or fast in German). Black Forest Labs claims its models outperform existing options like Midjourney and DALL-E in areas such as image quality and adherence to text prompts.



Anthropic introduces Claude 3.5 Sonnet, matching GPT-4o on benchmarks

By Benj Edwards, Ars Technica
20 June 2024, 23:04
The Anthropic Claude 3 logo, jazzed up by Benj Edwards. (credit: Anthropic / Benj Edwards)

On Thursday, Anthropic announced Claude 3.5 Sonnet, its latest AI language model and the first in a new series of "3.5" models that build upon Claude 3, launched in March. Claude 3.5 can compose text, analyze data, and write code. It features a 200,000 token context window and is available now on the Claude website and through an API. Anthropic also introduced Artifacts, a new feature in the Claude interface that shows related work documents in a dedicated window.

So far, people outside of Anthropic seem impressed. "This model is really, really good," wrote independent AI researcher Simon Willison on X. "I think this is the new best overall model (and both faster and half the price of Opus, similar to the GPT-4 Turbo to GPT-4o jump)."

As we've written before, benchmarks for large language models (LLMs) are troublesome because they can be cherry-picked and often do not capture the feel and nuance of using a machine to generate outputs on almost any conceivable topic. But according to Anthropic, Claude 3.5 Sonnet matches or outperforms competitor models like GPT-4o and Gemini 1.5 Pro on certain benchmarks like MMLU (undergraduate level knowledge), GSM8K (grade school math), and HumanEval (coding).



Apple unveils “Apple Intelligence” AI features for iOS, iPadOS, and macOS

By Benj Edwards, Ars Technica
10 June 2024, 21:15
(credit: Apple)

On Monday, Apple debuted "Apple Intelligence," a new suite of free AI-powered features for iOS 18, iPadOS 18, and macOS Sequoia that includes creating email summaries, generating images and emoji, and allowing Siri to take actions on your behalf. These features are achieved through a combination of on-device and cloud processing, with a strong emphasis on privacy. Apple says that Apple Intelligence features will be widely available later this year and will be available as a beta test for developers this summer.

The announcements came during a livestream WWDC keynote and a simultaneous event attended by the press on Apple's campus in Cupertino, California. In an introduction, Apple CEO Tim Cook said the company has been using machine learning for years, but the introduction of large language models (LLMs) presents new opportunities to elevate the capabilities of Apple products. He emphasized the need for both personalization and privacy in Apple's approach.

At last year's WWDC, Apple avoided using the term "AI" completely, instead preferring terms like "machine learning" as Apple's way of avoiding buzzy hype while integrating applications of AI into apps in useful ways. This year, Apple figured out a new way to largely avoid the abbreviation "AI" by coining "Apple Intelligence," a catchall branding term that refers to a broad group of machine learning, LLM, and image generation technologies. By our count, the term "AI" was used sparingly in the keynote—most notably near the end of the presentation when Apple executive Craig Federighi said, "It's AI for the rest of us."



The Mythical Non-Roboticist

By Benjie Holson, IEEE Spectrum
9 June 2024, 15:00


The original version of this post by Benjie Holson was published on Substack here, and includes Benjie’s original comics as part of his series on robots and startups.

I worked on this idea for months before I decided it was a mistake. The second time I heard someone mention it, I thought, “That’s strange, these two groups had the same idea. Maybe I should tell them it didn’t work for us.” The third and fourth time I rolled my eyes and ignored it. The fifth time I heard about a group struggling with this mistake, I decided it was worth a blog post all on its own. I call this idea “The Mythical Non-Roboticist.”


The Mistake

The idea goes something like this: Programming robots is hard. And there are some people with really arcane skills and PhDs who are really expensive and seem to be required for some reason. Wouldn’t it be nice if we could do robotics without them?[1] What if everyone could do robotics? That would be great, right? We should make a software framework so that non-roboticists can program robots.

This idea is so close to a correct idea that it’s hard to tell why it doesn’t work out. On the surface, it’s not wrong: All else being equal, it would be good if programming robots was more accessible. The problem is that we don’t have a good recipe for making working robots, so we don’t know how to make that recipe easier to follow. In order to make things simple, people end up removing things that folks might need, because no one knows for sure what’s absolutely required. It’s like saying you want to invent an invisibility cloak and want to be able to make it from materials you can buy at Home Depot. Sure, that would be nice, but if you invented an invisibility cloak that required some mercury and neodymium to manufacture, would you toss the recipe?

In robotics, this mistake is based on a very true and very real observation: Programming robots is super hard. Famously hard. It would be super great if programming robots was easier. The issue is this: Programming robots has two different kinds of hard parts.

Robots are hard because the world is complicated

Illustration of a robot stepping down toward a banana peel. Moor Studio/Getty Images

The first kind of hard part is that robots deal with the real world, imperfectly sensed and imperfectly actuated. Global mutable state is bad programming style because it’s really hard to deal with, but to robot software the entire physical world is global mutable state, and you only get to unreliably observe it and hope your actions approximate what you wanted to achieve. Getting robotics to work at all is often at the very limit of what a person can reason about, and requires the flexibility to employ whatever heuristic might work for your special problem. This is the intrinsic complexity of the problem: Robots live in complex worlds, and for every working solution there are millions of solutions that don’t work, and finding the right one is hard, and often very dependent on the task, robot, sensors, and environment.

Folks look at that challenge, see that it is super hard, and decide that, sure, maybe some fancy roboticist could solve it in one particular scenario, but what about “normal” people? “We should make this possible for non-roboticists,” they say. I call these users “Mythical Non-Roboticists” because once they are programming a robot, I feel they become roboticists. Isn’t anyone programming a robot for a purpose a roboticist? Stop gatekeeping, people.

Don’t design for amorphous groups

I also call them “mythical” because usually the “non-roboticist” implied is a vague, amorphous group. Don’t design for amorphous groups. If you can’t name three real people (that you have talked to) that your API is for, then you are designing for an amorphous group and only amorphous people will like your API.

And with this hazy group of users in mind (and seeing how difficult everything is), folks think, “Surely we could make this easier for everyone else by papering over these things with simple APIs?”

No. No you can’t. Stop it.

You can’t paper over intrinsic complexity with simple APIs because if your APIs are simple they can’t cover the complexity of the problem. You will inevitably end up with a beautiful-looking API, with calls like “grasp_object” and “approach_person,” which demo nicely in a hackathon kickoff but survive about 15 minutes of someone actually trying to get some work done. It will turn out that, for their particular application, “grasp_object()” makes 3 or 4 wrong assumptions about “grasp” and “object” and doesn’t work for them at all.

Your users are just as smart as you

This is made worse by the pervasive assumption that these people are less savvy (read: less intelligent) than the creators of this magical framework.[2] That feeling of superiority will cause the designers to cling desperately to their beautiful, simple “grasp_object()”s and resist adding the knobs and arguments needed to cover more use cases and allow the users to customize what they get.

Ironically, this foists a bunch of complexity onto the poor users of the API, who have to come up with clever workarounds to get it to work at all.

Illustration of a human and robot hand fitting puzzle pieces together in front of a brain. Moor Studio/Getty Images

The sad, salty, bitter icing on this cake-of-frustration is that, even if done really well, the goal of this kind of framework would be to expand the group of people who can do the work. And to achieve that, it would sacrifice some performance you can only get by super-specializing your solution to your problem. If we lived in a world where expert roboticists could program robots that worked really well, but there was so much demand for robots that there just wasn’t enough time for those folks to do all the programming, this would be a great solution.[3]

The obvious truth is that (outside of really constrained environments like manufacturing cells) even the very best collection of real bona fide, card-carrying roboticists working at the best of their ability struggle to get close to a level of performance that makes the robots commercially viable, even with long timelines and mountains of funding.[4] We don’t have any headroom to sacrifice power and effectiveness for ease.

What problem are we solving?

So should we give up making it easier? Is robotic development available only to a small group of elites with fancy PhDs?[5] No to both! I have worked with tons of undergrad interns who have been completely able to do robotics.[6] I myself am mostly self-taught in robot programming.[7] While there is a lot of intrinsic complexity in making robots work, I don’t think there is any more than, say, video game development.

In robotics, like in all things, experience helps, some things are teachable, and as you master many areas you can see things start to connect together. These skills are not magical or unique to robotics. We are not as special as we like to think we are.

But what about making programming robots easier? Remember way back at the beginning of the post when I said that there were two different kinds of hard parts? One is the intrinsic complexity of the problem, and that one will be hard no matter what.[8] But the second is the incidental complexity, or as I like to call it, the stupid BS complexity.

Stupid BS Complexity

Robots are asynchronous, distributed, real-time systems with weird hardware. All of that will be hard to configure for stupid BS reasons. Those drivers need to work in the weird flavor of Linux you want for hard real-time for your controls, and getting that all set up will be hard for stupid BS reasons. You are abusing Wi-Fi so you can roam seamlessly without interruption, but Linux’s Wi-Fi will not want to do that. Your log files are huge and you have to upload them somewhere so they don’t fill up your robot. You’ll need to integrate with some cloud something or other and deal with its stupid BS.[9]

An illustration of a robot whose head has exploded off. Moor Studio/Getty Images

There is a ton of crap to deal with before you even get to the complexity of dealing with 3D rotation, moving reference frames, time synchronization, and messaging protocols. Those things have intrinsic complexity (you have to think about when something was observed and how to reason about it as other things have moved) and stupid BS complexity (there’s a weird bug because someone multiplied two transform matrices in the wrong order, and now you’re getting an error message that deep in some protocol a quaternion is not normalized. WTF does that mean?)[10]
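Both flavors show up constantly in practice. Here is a minimal Python sketch of those two gotchas; the frames and numbers are made up for illustration and come from no particular framework:

    import numpy as np

    def rot_z(theta):
        """Rotation matrix about the z-axis."""
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    def rot_x(theta):
        """Rotation matrix about the x-axis."""
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

    # Gotcha 1: transform composition is not commutative. world_T_cam must
    # be world_T_robot @ robot_T_cam; swapping the order is a silent bug.
    world_T_robot = rot_z(np.pi / 2)   # robot yawed 90 degrees in the world
    robot_T_cam = rot_x(np.pi / 4)     # camera pitched 45 degrees on the robot
    assert not np.allclose(world_T_robot @ robot_T_cam,
                           robot_T_cam @ world_T_robot)

    # Gotcha 2: quaternions drift off unit norm under floating-point math,
    # and downstream code may reject them. Normalize before handing one off.
    q = np.array([0.7071, 0.0, 0.0, 0.7071])  # ~90 degrees about z, |q| != 1
    q = q / np.linalg.norm(q)                 # now safe to pass along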

One of the biggest challenges of robot programming is wading through the sea of stupid BS you need to wrangle in order to start working on your interesting and challenging robotics problem.

So a simple heuristic to make good APIs is:

Design your APIs for someone as smart as you, but less tolerant of stupid BS.

That feels universal enough that I’m tempted to call it Holson’s Law of Tolerable API Design.

When you are using tools you’ve made, you know them well enough to know the rough edges and how to avoid them.

But rough edges are things that have to be held in a programmer’s memory while they are using your system. If you insist on making a robotics framework,[11] you should strive to make it as powerful as you can with the least amount of stupid BS. Eradicate incidental complexity everywhere you can. You want to make APIs that have maximum flexibility but good defaults. I like python’s default-argument syntax for this because it means you can write APIs that can be used like:

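A minimal, hypothetical sketch of the pattern (none of these names come from a real framework):

    def move_to(pose,
                speed=0.1,              # m/s; a safe default
                timeout_s=30.0,         # give up after this long
                collision_check=True):  # power users can opt out
        """Move the robot to `pose`, returning True on success."""
        # A real implementation would plan and execute a trajectory here.
        print(f"moving to {pose} at {speed} m/s")
        return True

    # Easy things stay simple:
    move_to((0.5, 0.0, 0.2))

    # Complex things stay possible:
    move_to((0.5, 0.0, 0.2), speed=0.5, timeout_s=5.0, collision_check=False)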

It is possible to have easy things be simple and allow complex things. And please, please, please don’t make condescending APIs. Thanks!

1. Ironically it is very often the expensive arcane-knowledge-having PhDs who are proposing this.

2. Why is it always a framework?

3. The exception that might prove the rule is things like traditional manufacturing-cell automation. That is a place where the solutions exist, but the limit to expanding is setup cost. I’m not an expert in this domain, but I’d worry that physical installation and safety compliance might still dwarf the software programming cost, though.

4. As I well know from personal experience.

5. Or non-fancy PhDs for that matter?

6. I suspect that many bright high schoolers would also be able to do the work. Though, as Google tends not to hire them, I don’t have good examples.

7. My schooling was in Mechanical Engineering and I never got a PhD, though my ME classwork did include some programming fundamentals.

8. Unless we create effective general purpose AI. It feels weird that I have to add that caveat, but the possibility that it’s actually coming for robotics in my lifetime feels much more possible than it did two years ago.

9. And if you are unlucky, its API was designed by someone who thought they were smarter than their customers.

10. This particular flavor of BS complexity is why I wrote posetree.py. If you do robotics, you should check it out.

11. Which, judging by the trail of dead robot-framework-companies, is a fraught thing to do.


Journalists “deeply troubled” by OpenAI’s content deals with Vox, The Atlantic

By Benj Edwards, Ars Technica
31 May 2024, 23:56
A man covered in newspaper. (credit: Getty Images)

On Wednesday, Axios broke the news that OpenAI had signed deals with The Atlantic and Vox Media that will allow the ChatGPT maker to license their editorial content to further train its language models. But some of the publications' writers—and the unions that represent them—were surprised by the announcements and aren't happy about it. Already, two unions have released statements expressing "alarm" and "concern."

"The unionized members of The Atlantic Editorial and Business and Technology units are deeply troubled by the opaque agreement The Atlantic has made with OpenAI," reads a statement from the Atlantic union. "And especially by management's complete lack of transparency about what the agreement entails and how it will affect our work."

The Vox Union—which represents The Verge, SB Nation, and Vulture, among other publications—reacted in similar fashion, writing in a statement, "Today, members of the Vox Media Union ... were informed without warning that Vox Media entered into a 'strategic content and product partnership' with OpenAI. As both journalists and workers, we have serious concerns about this partnership, which we believe could adversely impact members of our union, not to mention the well-documented ethical and environmental concerns surrounding the use of generative AI."



Google’s AI Overview is flawed by design, and a new company blog post hints at why

By Benj Edwards, Ars Technica
31 May 2024, 21:47
The Google "G" logo surrounded by whimsical characters, all of which look stunned and surprised. (credit: Google)

On Thursday, Google capped off a rough week of providing inaccurate and sometimes dangerous answers through its experimental AI Overview feature by authoring a follow-up blog post titled, "AI Overviews: About last week." In the post, attributed to Google VP Liz Reid, head of Google Search, the firm formally acknowledged issues with the feature and outlined steps taken to improve a system that appears flawed by design, even if it doesn't realize it is admitting it.

To recap, the AI Overview feature—which the company showed off at Google I/O a few weeks ago—aims to provide search users with summarized answers to questions by using an AI model integrated with Google's web ranking systems. Right now, it's an experimental feature that is not active for everyone, but when a participating user searches for a topic, they might see an AI-generated answer at the top of the results, pulled from highly ranked web content and summarized by an AI model.

While Google claims this approach is "highly effective" and on par with its Featured Snippets in terms of accuracy, the past week has seen numerous examples of the AI system generating bizarre, incorrect, or even potentially harmful responses, as we detailed in a recent feature where Ars reporter Kyle Orland replicated many of the unusual outputs.



Tech giants form AI group to counter Nvidia with new interconnect standard

By Benj Edwards, Ars Technica
30 May 2024, 22:42
Abstract image of data center with flowchart. (credit: Getty Images)

On Thursday, several major tech companies, including Google, Intel, Microsoft, Meta, AMD, Hewlett-Packard Enterprise, Cisco, and Broadcom, announced the formation of the Ultra Accelerator Link (UALink) Promoter Group to develop a new interconnect standard for AI accelerator chips in data centers. The group aims to create an alternative to Nvidia's proprietary NVLink interconnect technology, which links together multiple servers that power today's AI applications like ChatGPT.

The beating heart of AI these days lies in GPUs, which can perform massive numbers of matrix multiplications—necessary for running neural network architecture—in parallel. But one GPU often isn't enough for complex AI systems. NVLink can connect multiple AI accelerator chips within a server or across multiple servers. These interconnects enable faster data transfer and communication between the accelerators, allowing them to work together more efficiently on complex tasks like training large AI models.

This linkage is a key part of any modern AI data center system, and whoever controls the link standard can effectively dictate which hardware the tech companies will use. Along those lines, the UALink group seeks to establish an open standard that allows multiple companies to contribute and develop AI hardware advancements instead of being locked into Nvidia's proprietary ecosystem. This approach is similar to other open standards, such as Compute Express Link (CXL)—created by Intel in 2019—which provides high-speed, high-capacity connections between CPUs and devices or memory in data centers.



Xfinity: How to turn closed captions on or off

By Benjamin Zeman, Android Police
20 May 2024, 00:28

Xfinity TV box settings aren't always intuitive. Sometimes, we accidentally change a setting and have no idea how. It happens, even on our favorite TV boxes. Other times, we forget how we turned it on, and suddenly, it's gone. How do we get it back? When we're talking about closed captioning, not knowing how to turn it on or off can make a difference in your viewing experience.


After 48 years, Zilog is killing the classic standalone Z80 microprocessor chip

By Benj Edwards, Ars Technica
22 April 2024, 18:48
A cropped portion of a ca. 1980 ad for the Microsoft Z80 SoftCard, which allowed Apple II users to run the CP/M operating system. (credit: Microsoft)

Last week, chip manufacturer Zilog announced that after 48 years on the market, its line of standalone DIP (dual inline package) Z80 CPUs is coming to an end, ceasing sales on June 14, 2024. The 8-bit architecture debuted in 1976 and powered a small-business-PC revolution in conjunction with CP/M, also serving as the heart of the Sinclair ZX Spectrum, the Radio Shack TRS-80, the Pac-Man arcade game, and the TI-83 graphing calculator, and inspiring the Z80-derived processor at the heart of the Nintendo Game Boy.

In a letter to customers dated April 15, 2024, Zilog wrote, "Please be advised that our Wafer Foundry Manufacturer will be discontinuing support for the Z80 product and other product lines. Refer to the attached list of the Z84C00 Z80 products affected."

Designers typically use the Z84C00 chips because of familiarity with the Z80 architecture or to allow legacy system upgrades without needing significant system redesigns. And while many other embedded chip architectures have superseded these Z80 chips in speed, processing power, and capability, they remained go-to solutions for decades in products that didn't need any extra horsepower.



Getting the Grid to Net Zero

By Benjamin Kroposki, IEEE Spectrum
13 April 2024, 21:00


It’s late in the afternoon of 2 April 2023 on the island of Kauai. The sun is sinking over this beautiful and peaceful place, when, suddenly, at 4:25 pm, there’s a glitch: The largest generator on the island, a 26-megawatt oil-fired turbine, goes offline.

This is a more urgent problem than it might sound. The westernmost Hawaiian island of significant size, Kauai is home to around 70,000 residents and 30,000 tourists at any given time. Renewable energy accounts for 70 percent of the energy produced in a typical year—a proportion that’s among the highest in the world and that can be hard to sustain for such a small and isolated grid. During the day, the local system operator, the Kauai Island Utility Cooperative, sometimes reaches levels of 90 percent from solar alone. But on 2 April, the 26-MW generator was running near its peak output, to compensate for the drop in solar output as the sun set. At the moment when it failed, that single generator had been supplying 60 percent of the load for the entire island, with the rest being met by a mix of smaller generators and several utility-scale solar-and-battery systems.

Normally, such a sudden loss would spell disaster for a small, islanded grid. But the Kauai grid has a feature that many larger grids lack: a technology called grid-forming inverters. An inverter converts direct-current electricity to grid-compatible alternating current. The island’s grid-forming inverters are connected to those battery systems, and they are a special type—in fact, they had been installed with just such a contingency in mind. They improve the grid’s resilience and allow it to operate largely on resources like batteries, solar photovoltaics, and wind turbines, all of which connect to the grid through inverters. On that April day in 2023, Kauai had over 150 megawatt-hours’ worth of energy stored in batteries—and also the grid-forming inverters necessary to let those batteries respond rapidly and provide stable power to the grid. They worked exactly as intended and kept the grid going without any blackouts.

The photovoltaic panels at the Kapaia solar-plus-storage facility, operated by the Kauai Island Utility Cooperative in Hawaii, are capable of generating 13 megawatts under ideal conditions. TESLA

A solar-plus-storage facility at the U.S. Navy’s Pacific Missile Range Facility, in the southwestern part of Kauai, is one of two on the island equipped with grid-forming inverters. U.S. NAVY

That April event in Kauai offers a preview of the electrical future, especially for places where utilities are now, or soon will be, relying heavily on solar photovoltaic or wind power. Similar inverters have operated for years within smaller off-grid installations. However, using them in a multimegawatt power grid, such as Kauai’s, is a relatively new idea. And it’s catching on fast: At the time of this writing, at least eight major grid-forming projects are either under construction or in operation in Australia, along with others in Asia, Europe, North America, and the Middle East.

Reaching net-zero-carbon emissions by 2050, as many international organizations now insist is necessary to stave off dire climate consequences, will require a rapid and massive shift in electricity-generating infrastructures. The International Energy Agency has calculated that to have any hope of achieving this goal would require the addition, every year, of 630 gigawatts of solar photovoltaics and 390 GW of wind starting no later than 2030—figures that are around four times as great as any annual tally so far.

The only economical way to integrate such high levels of renewable energy into our grids is with grid-forming inverters, which can be implemented on any technology that uses an inverter, including wind, solar photovoltaics, batteries, fuel cells, microturbines, and even high-voltage direct-current transmission lines. Grid-forming inverters for utility-scale batteries are available today from Tesla, GPTech, SMA, GE Vernova, EPC Power, Dynapower, Hitachi, Enphase, CE+T, and others. Grid-forming converters for HVDC, which convert high-voltage direct current to alternating current and vice versa, are also commercially available, from companies including Hitachi, Siemens, and GE Vernova. For photovoltaics and wind, grid-forming inverters are not yet commercially available at the size and scale needed for large grids, but they are now being developed by GE Vernova, Enphase, and Solectria.

The Grid Depends on Inertia

To understand the promise of grid-forming inverters, you must first grasp how our present electrical grid functions, and why it’s inadequate for a future dominated by renewable resources such as solar and wind power.

Conventional power plants that run on natural gas, coal, nuclear fuel, or hydropower produce electricity with synchronous generators—large rotating machines that produce AC electricity at a specified frequency and voltage. These generators have some physical characteristics that make them ideal for operating power grids. Among other things, they have a natural tendency to synchronize with one another, which helps make it possible to restart a grid that’s completely blacked out. Most important, a generator has a large rotating mass, namely its rotor. When a synchronous generator is spinning, its rotor, which can weigh well over 100 tonnes, cannot stop quickly.

The Kauai electric transmission grid operates at 57.1 kilovolts, an unusual voltage that is a legacy from the island’s sugar-plantation era. The network has grid-forming inverters at the Pacific Missile Range Facility, in the southwest, and at Kapaia, in the southeast. CHRIS PHILPOT

This characteristic gives rise to a property called system inertia. It arises naturally from those large generators running in synchrony with one another. Over many years, engineers used the inertia characteristics of the grid to determine how fast a power grid will change its frequency when a failure occurs, and then developed mitigation procedures based on that information.

If one or more big generators disconnect from the grid, the sudden imbalance of load to generation creates torque that extracts rotational energy from the remaining synchronous machines, slowing them down and thereby reducing the grid frequency—the frequency is electromechanically linked to the rotational speed of the generators feeding the grid. Fortunately, the kinetic energy stored in all that rotating mass slows this frequency drop and typically allows the remaining generators enough time to ramp up their power output to meet the additional load.

Electricity grids are designed so that even if the network loses its largest generator, running at full output, the other generators can pick up the additional load and the frequency nadir never falls below a specific threshold. In the United States, where nominal grid frequency is 60 hertz, the threshold is generally between 59.3 and 59.5 Hz. As long as the frequency remains above this point, local blackouts are unlikely to occur.
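The arithmetic behind that design rule is captured by the classic swing equation. A small Python sketch with illustrative numbers (not taken from any specific grid):

    # Rate of change of frequency (RoCoF) right after losing a generator,
    # from the swing equation: df/dt = -dP * f0 / (2 * H * S).
    f0 = 60.0     # nominal frequency, Hz
    H = 4.0       # aggregate inertia constant of remaining machines, seconds
    S = 1000.0    # remaining online generation capacity, MVA
    dP = 100.0    # sudden generation deficit, MW

    rocof = -dP * f0 / (2 * H * S)
    print(f"RoCoF = {rocof:.2f} Hz/s")
    # -0.75 Hz/s here: at that rate the grid falls from 60 Hz to the
    # 59.3-59.5 Hz threshold in under a second, which is the window the
    # remaining generators' inertia buys them to ramp up output.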

Why We Need Grid-Forming Inverters

Wind turbines, photovoltaics, and battery-storage systems differ from conventional generators because they all produce direct current (DC) electricity—they don’t have a heartbeat like alternating current does. With the exception of wind turbines, these are not rotating machines. And most modern wind turbines aren’t synchronously rotating machines from a grid standpoint—the frequency of their AC output depends on the wind speed. So that variable-frequency AC is rectified to DC before being converted to an AC waveform that matches the grid’s.

As mentioned, inverters convert the DC electricity to grid-compatible AC. A conventional, or grid-following, inverter uses power transistors that repeatedly and rapidly switch the polarity applied to a load. By switching at high speed, under software control, the inverter produces a high-frequency AC signal that is filtered by capacitors and other components to produce a smooth AC current output. So in this scheme, the software shapes the output waveform. In contrast, with synchronous generators the output waveform is determined by the physical and electrical characteristics of the generator.
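A toy simulation of that switching scheme (sine-triangle PWM, with a crude moving-average stand-in for the LC output filter; all values are illustrative):

    import numpy as np

    fs = 1_000_000                      # simulation rate: 1 MHz
    t = np.arange(0, 1 / 60, 1 / fs)    # one 60 Hz line cycle
    ref = np.sin(2 * np.pi * 60 * t)    # desired AC waveform, per unit
    tri = 2 * np.abs(2 * ((10_000 * t) % 1) - 1) - 1  # 10 kHz triangle carrier
    switched = np.where(ref > tri, 1.0, -1.0)         # raw transistor output

    # Averaging over one carrier period stands in for the output filter.
    window = fs // 10_000
    smooth = np.convolve(switched, np.ones(window) / window, mode="same")
    # Away from the array edges, the filtered output tracks the 60 Hz sine
    # with only a small switching ripple:
    print(np.max(np.abs(smooth[window:-window] - ref[window:-window])))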

Grid-following inverters operate only if they can “see” an existing voltage and frequency on the grid that they can synchronize to. They rely on controls that sense the frequency of the voltage waveform and lock onto that signal, usually by means of a technology called a phase-locked loop. So if the grid goes down, these inverters will stop injecting power because there is no voltage to follow. A key point here is that grid-following inverters do not deliver any inertia.

Przemyslaw Koralewicz, David Corbus, Shahil Shah, and Robb Wallen, researchers at the National Renewable Energy Laboratory, evaluate a grid-forming inverter used on Kauai at the NREL Flatirons Campus. DENNIS SCHROEDER/NREL

Grid-following inverters work fine when inverter-based power sources are relatively scarce. But as the levels of inverter-based resources rise above 60 to 70 percent, things start to get challenging. That’s why system operators around the world are beginning to put the brakes on renewable deployment and to curtail the operation of existing renewable plants. For example, the Electric Reliability Council of Texas (ERCOT) regularly curtails the use of renewables in that state because of stability issues arising from too many grid-following inverters.

It doesn’t have to be this way. When the level of inverter-based power sources on a grid is high, the inverters themselves could support grid-frequency stability. And when the level is very high, they could form the voltage and frequency of the grid. In other words, they could collectively set the pulse, rather than follow it. That’s what grid-forming inverters do.

The Difference Between Grid Forming and Grid Following

Grid-forming (GFM) and grid-following (GFL) inverters share several key characteristics. Both can inject current into the grid during a disturbance. Also, both types of inverters can support the voltage on a grid by controlling their reactive power, which is the product of the voltage and the current that are out of phase with each other. Both kinds of inverters can also help prop up the frequency on the grid, by controlling their active power, which is the product of the voltage and current that are in phase with each other.
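In sinusoidal steady state those two products reduce to the familiar formulas P = VI cos(phi) and Q = VI sin(phi). A quick sketch with illustrative RMS values:

    import numpy as np

    V, I = 240.0, 10.0       # RMS voltage (V) and current (A), illustrative
    phi = np.radians(30)     # phase angle between voltage and current

    P = V * I * np.cos(phi)  # active power, watts (in-phase component)
    Q = V * I * np.sin(phi)  # reactive power, VARs (out-of-phase component)
    print(f"P = {P:.0f} W, Q = {Q:.0f} VAR")  # P ~ 2078 W, Q = 1200 VAR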

What makes grid-forming inverters different from grid-following inverters is mainly software. GFM inverters are controlled by code designed to maintain a stable output voltage waveform, but they also allow the magnitude and phase of that waveform to change over time. What does that mean in practice? The unifying characteristic of all GFM inverters is that they hold a constant voltage magnitude and frequency on short timescales—for example, a few dozen milliseconds—while allowing that waveform’s magnitude and frequency to change over several seconds to synchronize with other nearby sources, such as traditional generators and other GFM inverters.

Some GFM inverters, called virtual synchronous machines, achieve this response by mimicking the physical and electrical characteristics of a synchronous generator, using control equations that describe how it operates. Other GFM inverters are programmed to simply hold a constant target voltage and frequency, allowing that target voltage and frequency to change slowly over time to synchronize with the rest of the power grid following what is called a droop curve. A droop curve is a formula used by grid operators to indicate how a generator should respond to a deviation from nominal voltage or frequency on its grid. There are many variations of these two basic GFM control methods, and other methods have been proposed as well.
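A minimal sketch of such a droop response for frequency; the setpoint and droop numbers are illustrative, since real settings are grid-specific:

    def droop_power(f_meas, f_nom=60.0, p_set=0.5, droop=0.05, p_max=1.0):
        """Active-power command (per unit) from a frequency-droop curve.

        droop=0.05 means a 5 percent frequency deviation swings the
        output across its full range.
        """
        dp = (f_nom - f_meas) / (f_nom * droop)
        return min(max(p_set + dp, 0.0), p_max)

    print(droop_power(60.0))  # 0.5: holds its setpoint at nominal frequency
    print(droop_power(59.7))  # 0.6: picks up load as frequency sags
    print(droop_power(60.3))  # 0.4: backs off when frequency runs high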


To better understand this concept, imagine that a transmission line shorts to ground or a generator trips due to a lightning strike. (Such problems typically occur multiple times a week, even on the best-run grids.) The key advantage of a GFM inverter in such a situation is that it does not need to quickly sense frequency and voltage decline on the grid to respond. Instead, a GFM inverter just holds its own voltage and frequency relatively constant by injecting whatever current is needed to achieve that, subject to its physical limits. In other words, a GFM inverter is programmed to act like an AC voltage source behind some small impedance (impedance is the opposition to AC current arising from resistance, capacitance, and inductance). In response to an abrupt drop in grid voltage, its digital controller increases current output by allowing more current to pass through its power transistors, without even needing to measure the change it’s responding to. In response to falling grid frequency, the controller increases power.

GFL controls, on the other hand, need to first measure the change in voltage or frequency, and then take an appropriate control action before adjusting their output current to mitigate the change. This GFL strategy works if the response does not need to be superfast (as in microseconds). But as the grid becomes weaker (meaning there are fewer voltage sources nearby), GFL controls tend to become unstable. That’s because by the time they measure the voltage and adjust their output, the voltage has already changed significantly, and fast injection of current at that point can potentially lead to a dangerous positive feedback loop. Adding more GFL inverters also tends to reduce stability because it becomes more difficult for the remaining voltage sources to stabilize them all.

When a GFM inverter responds with a surge in current, it must do so within tightly prescribed limits. It must inject enough current to provide some stability but not enough to damage the power transistors that control the current flow.

Increasing the maximum current flow is possible, but it requires increasing the capacity of the power transistors and other components, which can significantly increase cost. So most inverters (both GFM and GFL) don’t provide current surges larger than about 10 to 30 percent above their rated steady-state current. For comparison, a synchronous generator can inject around 500 to 700 percent more than its rated current for several AC line cycles (around a tenth of a second, say) without sustaining any damage. For a large generator, this can amount to thousands of amperes. Because of this difference between inverters and synchronous generators, the protection technologies used in power grids will need to be adjusted to account for lower levels of fault current.
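The gap is stark in numbers. A back-of-the-envelope comparison, assuming a 1,000-ampere steady-state rating for both sources:

    rated = 1000.0                       # steady-state rating, amperes (assumed)

    inverter_fault = rated * 1.3         # 10-30% surge headroom -> up to ~1.3x
    generator_fault = rated * (1 + 6.0)  # 500-700% above rating -> ~6x-8x

    print(f"inverter:  {inverter_fault:.0f} A")   # ~1,300 A
    print(f"generator: {generator_fault:.0f} A")  # ~7,000 A, for a few AC cycles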

What the Kauai Episode Reveals

The 2 April event on Kauai offered an unusual opportunity to study the performance of GFM inverters during a disturbance. After the event, one of us (Andy Hoke), along with Jin Tan, Shuan Dong, and some coworkers at the National Renewable Energy Laboratory, collaborated with the Kauai Island Utility Cooperative (KIUC) to get a clear understanding of how the remaining system generators and inverter-based resources interacted with each other during the disturbance. What we determined will help power grids of the future operate at levels of inverter-based resources up to 100 percent.

NREL researchers started by creating a model of the Kauai grid. We then used a technique called electromagnetic transient (EMT) simulation, which yields information on the AC waveforms on a sub-millisecond basis. In addition, we conducted hardware tests at NREL’s Flatirons Campus on a scaled-down replica of one of Kauai’s solar-battery plants, to evaluate the grid-forming control algorithms for inverters deployed on the island.


A recording of the frequency responses to two different grid disruptions on Kauai shows the advantages of grid-forming inverters. The red trace shows the relatively contained response with two grid-forming inverter systems in operation. The blue trace shows the more extreme response to an earlier, comparable disruption, at a time when there was only one grid-forming plant online. NATIONAL RENEWABLE ENERGY LABORATORY

At 4:25 pm on 2 April, there were two large GFM solar-battery plants, one large GFL solar-battery plant, one large oil-fired turbine, one small diesel plant, two small hydro plants, one small biomass plant, and a handful of other solar generators online. Immediately after the oil-fired turbine failed, the AC frequency dropped quickly from 60 Hz to just above 59 Hz during the first 3 seconds [red trace in the figure above]. As the frequency dropped, the two GFM-equipped plants quickly ramped up power, with one plant quadrupling its output and the other doubling its output in less than 1/20 of a second.

In contrast, the remaining synchronous machines contributed some rapid but unsustained active power via their inertial responses, but took several seconds to produce sustained increases in their output. It is safe to say, and it has been confirmed through EMT simulation, that without the two GFM plants, the entire grid would have experienced a blackout.

Coincidentally, an almost identical generator failure had occurred a couple of years earlier, on 21 November 2021. In this case, only one solar-battery plant had grid-forming inverters. As in the 2023 event, the three large solar-battery plants quickly ramped up power and prevented a blackout. However, the frequency and voltage throughout the grid began to oscillate around 20 times per second [the blue trace in the figure above], indicating a major grid stability problem and causing some customers to be automatically disconnected. NREL’s EMT simulations, hardware tests, and controls analysis all confirmed that the severe oscillation was due to a combination of grid-following inverters tuned for extremely fast response and a lack of sufficient grid strength to support those GFL inverters.

In other words, the 2021 event illustrates how too many conventional GFL inverters can erode stability. Comparing the two events demonstrates the value of GFM inverter controls—not just to provide fast yet stable responses to grid events but also to stabilize nearby GFL inverters and allow the entire grid to maintain operations without a blackout.

Australia Commissions Big GFM Projects

In sunny South Australia, solar power now routinely supplies all or nearly all of the power needed during the middle of the day. Shown here is the chart for 31 December 2023, in which solar supplied slightly more power than the state needed at around 1:30 p.m. AUSTRALIAN ENERGY MARKET OPERATOR (AEMO)

The next step for inverter-dominated power grids is to go big. Some of the most important deployments are in South Australia. As in Kauai, the South Australian grid now has such high levels of solar generation that it regularly experiences days in which the solar generation can exceed the peak demand during the middle of the day [see figure above].

The most well-known of the GFM resources in Australia is the Hornsdale Power Reserve in South Australia. This 150-MW/194-MWh system, which uses Tesla’s Powerpack 2 lithium-ion batteries, was originally installed in 2017 and was upgraded to grid-forming capability in 2020.

Australia’s largest battery (500 MW/1,000 MWh) with grid-forming inverters is expected to start operating in Liddell, New South Wales, later this year. This battery, from AGL Energy, will be located at the site of a decommissioned coal plant. This and several other larger GFM systems are expected to start working on the South Australia grid over the next year.

The leap from power systems like Kauai’s, with a peak demand of roughly 80 MW, to ones like South Australia’s, at 3,000 MW, is a big one. But it’s nothing compared to what will come next: grids with peak demands of 85,000 MW (in Texas) and 742,000 MW (the rest of the continental United States).

Several challenges need to be solved before we can attempt such leaps. They include creating standard GFM specifications so that inverter vendors can create products. We also need accurate models that can be used to simulate the performance of GFM inverters, so we can understand their impact on the grid.

Some progress in standardization is already happening. In the United States, for example, the North American Electric Reliability Corporation (NERC) recently published a recommendation that all future large-scale battery-storage systems have grid-forming capability.

Standards for GFM performance and validation are also starting to emerge in some countries, including Australia, Finland, and Great Britain. In the United States, the Department of Energy recently backed a consortium to tackle building and integrating inverter-based resources into power grids. Led by the National Renewable Energy Laboratory, the University of Texas at Austin, and the Electric Power Research Institute, the Universal Interoperability for Grid-Forming Inverters (UNIFI) Consortium aims to address the fundamental challenges in integrating very high levels of inverter-based resources with synchronous generators in power grids. The consortium now has over 30 members from industry, academia, and research laboratories.

One of Australia’s major energy-storage facilities is the Hornsdale Power Reserve, at 150 megawatts and 194 megawatt-hours. Hornsdale and another facility, the Riverina Battery, are the country’s two largest grid-forming installations. NEOEN

In addition to specifications, we need computer models of GFM inverters to verify their performance in large-scale systems. Without such verification, grid operators won’t trust the performance of new GFM technologies. Using GFM models built by the UNIFI Consortium, system operators and utilities such as the Western Electricity Coordinating Council, American Electric Power, and ERCOT (the Texas grid-reliability organization) are conducting studies to understand how GFM technology can help their grids.

Getting to a Greener Grid

As we progress toward a future grid dominated by inverter-based generation, a question naturally arises: Will all inverters need to be grid-forming? No. Several studies and simulations have indicated that we’ll need just enough GFM inverters to strengthen each area of the grid so that nearby GFL inverters remain stable.

How many GFMs is that? The answer depends on the characteristics of the grid and other generators. Some initial studies have shown that a power system can operate with 100 percent inverter-based resources if around 30 percent are grid-forming. More research is needed to understand how that number depends on details such as the grid topology and the control details of both the GFLs and the GFMs.

Ultimately, though, electricity generation that is completely carbon free in its operation is within our grasp. Our challenge now is to make the leap from small to large to very large systems. We know what we have to do, and it will not require technologies that are far more advanced than what we already have. It will take testing, validation in real-world scenarios, and standardization so that synchronous generators and inverters can unify their operations to create a reliable and robust power grid. Manufacturers, utilities, and regulators will have to work together to make this happen rapidly and smoothly. Only then can we begin the next stage of the grid’s evolution, to large-scale systems that are truly carbon neutral.


Matrix multiplication breakthrough could lead to faster, more efficient AI models

By: Benj Edwards
March 8, 2024 at 22:07
When you do math on a computer, you fly through a numerical tunnel like this—figuratively, of course. (credit: Getty Images)

Computer scientists have discovered a new way to multiply large matrices faster than ever before by eliminating a previously unknown inefficiency, reports Quanta Magazine. This could eventually accelerate AI models like ChatGPT, which rely heavily on matrix multiplication to function. The findings, presented in two recent papers, have led to what is reported to be the biggest improvement in matrix multiplication efficiency in over a decade.

Multiplying two rectangular number arrays, known as matrix multiplication, plays a crucial role in today's AI models, including speech and image recognition, chatbots from every major vendor, AI image generators, and video synthesis models like Sora. Beyond AI, matrix math is so important to modern computing (think image processing and data compression) that even slight gains in efficiency could lead to computational and power savings.

Graphics processing units (GPUs) excel at matrix multiplication because they can perform many calculations at once. They break large matrix problems into smaller tiles and compute those tiles concurrently.
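
The tiling idea is easy to see in code. Below is a minimal sketch of blocked (tiled) matrix multiplication in Python with NumPy; the tile size and matrix dimensions are arbitrary, and real GPU kernels layer memory-hierarchy tricks on top of this basic decomposition.

```python
import numpy as np

def blocked_matmul(a: np.ndarray, b: np.ndarray, tile: int = 64) -> np.ndarray:
    """Multiply a @ b by splitting the work into tile x tile blocks."""
    n, k = a.shape
    k2, m = b.shape
    assert k == k2, "inner dimensions must match"
    c = np.zeros((n, m), dtype=a.dtype)
    for i in range(0, n, tile):
        for j in range(0, m, tile):
            # Output tiles at different (i, j) never touch the same block
            # of c, which is what lets hardware compute them in parallel.
            for p in range(0, k, tile):
                c[i:i+tile, j:j+tile] += (
                    a[i:i+tile, p:p+tile] @ b[p:p+tile, j:j+tile]
                )
    return c

a = np.random.rand(256, 256)
b = np.random.rand(256, 256)
assert np.allclose(blocked_matmul(a, b), a @ b)  # matches the direct product
```

The new theoretical results operate at a different level, improving the asymptotic cost of the multiplication itself rather than how the work is scheduled, but tiling is the workhorse that makes today’s matrix math fast in practice.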

AI-generated articles prompt Wikipedia to downgrade CNET’s reliability rating

By: Benj Edwards
February 29, 2024 at 23:00
The CNET logo on a smartphone screen. (credit: Jaap Arriens/NurPhoto/Getty Images)

Wikipedia has downgraded tech website CNET's reliability rating following extensive discussions among its editors regarding the impact of AI-generated content on the site's trustworthiness, as noted in a detailed report from Futurism. The decision reflects concerns over the reliability of articles found on the tech news outlet after it began publishing AI-generated stories in 2022.

Around November 2022, CNET began publishing articles written by an AI model under the byline "CNET Money Staff." In January 2023, Futurism brought widespread attention to the issue and discovered that the articles were full of plagiarism and mistakes. (Around that time, we covered plans to do similar automated publishing at BuzzFeed.) After the revelation, CNET management paused the experiment, but the reputational damage had already been done.

Wikipedia maintains a page called "Reliable sources/Perennial sources" that includes a chart featuring news publications and their reliability ratings as viewed from Wikipedia's perspective. Shortly after the CNET news broke in January 2023, Wikipedia editors began a discussion thread on the Reliable Sources project page about the publication.

X: How to see sensitive content on Twitter

By: Benjamin Zeman
February 22, 2024 at 11:00

If you use X (formerly known as Twitter), you've seen the "potentially sensitive content" warnings on some posts. It's not surprising on a platform used worldwide, where anyone can post anonymously from a budget mobile phone. X lets you choose what you want to see on the platform, whether it's adult content or nudity.

Google goes “open AI” with Gemma, a free, open-weights chatbot family

By: Benj Edwards
February 21, 2024 at 23:01
The Google Gemma logo. (credit: Google)

On Wednesday, Google announced a new family of AI language models called Gemma, which are free, open-weights models built on technology similar to the more powerful but closed Gemini models. Unlike Gemini, Gemma models can run locally on a desktop or laptop computer. It's Google's first significant open large language model (LLM) release since OpenAI's ChatGPT started a frenzy for AI chatbots in 2022.

Gemma models come in two sizes: Gemma 2B (2 billion parameters) and Gemma 7B (7 billion parameters), each available in pre-trained and instruction-tuned variants. In AI, parameters are values in a neural network that determine AI model behavior, and weights are a subset of these parameters stored in a file.
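
To make the parameters-versus-weights distinction concrete, here is a toy Python sketch that counts the learned values in a tiny two-layer network and saves them to a file the way open-weights releases ship them. The layer sizes are invented for the example and have nothing to do with Gemma’s actual architecture.

```python
import numpy as np

# Toy "model": the learned values (parameters) of a tiny two-layer network.
# Sizes are arbitrary assumptions, not Gemma's architecture.
rng = np.random.default_rng(0)
layers = {
    "w1": rng.normal(size=(768, 3072)),  # first projection matrix
    "b1": np.zeros(3072),                # its bias vector
    "w2": rng.normal(size=(3072, 768)),  # second projection matrix
    "b2": np.zeros(768),                 # its bias vector
}

n_params = sum(arr.size for arr in layers.values())
print(f"{n_params:,} parameters")  # 4,722,432 for these made-up sizes

# "Open weights" means arrays like these are published for anyone to load,
# inspect, and run locally -- here written to a single .npz file.
np.savez("tiny_model_weights.npz", **layers)
```

At Gemma 7B’s scale the same idea applies, just with billions of values spread across many more layers.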

Developed by Google DeepMind and other Google AI teams, Gemma pulls from techniques learned during the development of Gemini, which is the family name for Google's most capable (public-facing) commercial LLMs, including the ones that power its Gemini AI assistant. Google says the name comes from the Latin gemma, which means "precious stone."

ChatGPT goes temporarily “insane” with unexpected outputs, spooking users

By: Benj Edwards
February 21, 2024 at 17:57
Illustration of a broken toy robot. (credit: Benj Edwards / Getty Images)

On Tuesday, ChatGPT users began reporting unexpected outputs from OpenAI's AI assistant, flooding the r/ChatGPT Reddit sub with reports of the AI assistant "having a stroke," "going insane," "rambling," and "losing it." OpenAI has acknowledged the problem and is working on a fix, but the experience serves as a high-profile example of how some people perceive malfunctioning large language models, which are designed to mimic humanlike output.

ChatGPT is not alive and does not have a mind to lose, but tugging on human metaphors (called "anthropomorphization") seems to be the easiest way for most people to describe the unexpected outputs they have been seeing from the AI model. They're forced to use those terms because OpenAI doesn't share exactly how ChatGPT works under the hood; the underlying large language models function like a black box.

"It gave me the exact same feeling—like watching someone slowly lose their mind either from psychosis or dementia," wrote a Reddit user named z3ldafitzgerald in response to a post about ChatGPT bugging out. "It’s the first time anything AI related sincerely gave me the creeps."

Will Smith parodies viral AI-generated video by actually eating spaghetti

By: Benj Edwards
February 20, 2024 at 15:50
The real Will Smith eating spaghetti, parodying an AI-generated video from 2023. (credit: Will Smith / Getty Images / Benj Edwards)

On Monday, Will Smith posted a video on his official Instagram feed that parodied an AI-generated video of the actor eating spaghetti that went viral last year. With the recent announcement of OpenAI's Sora video synthesis model, many people have noted the dramatic jump in AI-video quality over the past year compared to the infamous spaghetti video. Smith's new video plays on that comparison by showing the actual actor eating spaghetti in a comical fashion and claiming that it is AI-generated.

Captioned "This is getting out of hand!", the Instagram video uses a split-screen layout to show the original AI-generated spaghetti video, created by a Reddit user named "chaindrop" in March 2023, on top, labeled with the subtitle "AI Video 1 year ago." Below that, in a box titled "AI Video Now," the real Smith shows 11 video segments of himself actually eating spaghetti: slurping it up while shaking his head, pouring it into his mouth with his fingers, and even nibbling on a friend’s hair. Lil Jon’s 2006 track "Snap Yo Fingers" plays in the background.

In the Instagram comments section, some people expressed confusion about the new (non-AI) video, saying, "I'm still in doubt if second video was also made by AI or not." In a reply, someone else wrote, "Boomers are gonna loose [sic] this one. Second one is clearly him making a joke but I wouldn’t doubt it in a couple months time it will get like that."

Reddit sells training data to unnamed AI company ahead of IPO

By: Benj Edwards
February 19, 2024 at 22:10
Photo illustration of the American social news site Reddit. (credit: Reddit)

On Friday, Bloomberg reported that Reddit has signed a contract allowing an unnamed AI company to train its models on the site's content, according to people familiar with the matter. The move comes as the social media platform nears the introduction of its initial public offering (IPO), which could happen as soon as next month.

Reddit initially revealed the deal, reportedly worth $60 million a year, to potential investors in its anticipated IPO earlier in 2024, Bloomberg said. The Bloomberg source speculates that the contract could serve as a model for future agreements with other AI companies.

After an era in which AI companies used training data without expressly seeking any rightsholder’s permission, some tech firms have recently begun licensing content for training AI models similar to GPT-4 (which powers the paid version of ChatGPT). In December, for example, OpenAI signed an agreement with German publisher Axel Springer (publisher of Politico and Business Insider) for access to its articles. Previously, OpenAI struck deals with other organizations, including the Associated Press, and it is reportedly in licensing talks with CNN, Fox, and Time, among others.

Resident Evil 4 Remake (PC) Review

By: Benj
March 27, 2023 at 16:00

Las Plagas, a missing presidential daughter, and roundhouse kicks: the long-awaited Resident Evil 4 remake is here and -yes- it’s fantastic.

Did I mention the kicks?

The ancient injunction “if it ain’t broke, don’t fix it” does not apply to the 2023 remake of Resident Evil 4.

Resident Evil 4 has been the definitive best game in the entire franchise for over 16 years now (maybe a lil opinion there -ed). At the time of its original release, it was a genre-defining video game, and it set the standard for many of the mechanics and elements that subsequent releases would build on.

Speaking of which: after a few hiccups, Capcom has been on a roll, delivering successes almost year after year with the remakes of Resident Evil 2 and 3 as well as the completely new sequels Resident Evil 7 and Village. But after that last new title, Capcom finally decided to pull out all the stops and start working on a full remake of Resident Evil 4.

The project was probably as alluring as it was daunting for the developers of Division 1 – the standard-bearer for everything Devil May Cry and Resident Evil. Many people considered RE4 a perfect game, one that didn’t need a remake.

I personally could not think of any way for Capcom to improve on the original, aside from updating its graphics and removing some dated mechanics such as the QTEs. But the remake annihilated all my preconceived notions in a very big way.

Here is my review of Resident Evil 4.

Graphics

Resident Evil 7 initially blew people’s collective minds with the full might of the RE Engine. It was a revolutionary game engine that rendered 3D-scanned materials (and people) with an incredible amount of detail while using as few resources as possible.

Thanks to that engine, the next-generation Resident Evil games all had one thing in common: they looked great even on the dated hardware of the eighth-generation consoles. And Resident Evil 4 looks as good as, if not better than, its predecessors.

Ashley and Leon hiding in the Village Church

The game looks so good, and the characters so lifelike, that it almost feels like a new engine. RE4 now looks the way I imagined it looking back when I was younger, playing it on the PlayStation 2.

The guns are all well crafted and the locales look amazing. One note, though: there are some reused assets from the previously mentioned Resident Evil Village. Even so, they don’t take away from the immersion, and that’s 100% a good thing.

But again, I can’t overstate how absolutely gorgeous everything looks. And the best part is that you can still run Resident Evil 4 relatively well even on an older gaming computer.

Ada Wong as she appears in Resident Evil 4 (2023)

All in all, Resident Evil 4 is arguably the best-looking game released in the year 2023… well thus far, at least.

Gameplay

Resident Evil 4 plays like a dream. The original release was already light-years ahead of its time in both gameplay and replayability, and the remake matches it, if not betters it, in every single way.

To recap, the original release had tank-like controls, an over-the-shoulder third-person perspective, and limited use of the knife. Even so, at the time it was considered an industry-defining title, and many of its successors, such as Dead Space, took inspiration from it and created their own legacies that people love to this day.

The remake, however, has changed the game – literally. Leon can now parry enemies by pressing the spacebar (on PC) right before an enemy’s attack hits him. Done right, Leon performs a “perfect parry” that lets him follow up with a melee attack of his own.

Bye-bye tank

The parry is a terrific new addition, but there’s also a key alteration: gone are the tank-like controls, and Leon can now move while aiming and shooting. His movement is also more fluid; Leon finally handles the way he is portrayed in the arguably pretty good Resident Evil CG animated movies.

The president’s daughter Ashley is also way better this time, despite still being an NPC whom Leon has to protect while moving around the game’s various locations. Capcom changed the original “follow me” and “wait here” commands to the more tactical commands “tight” and “loose.”

Using the “tight” command (the left control key on PC) will have Ashley stick close to Leon the way she did in the original. “Loose,” on the other hand, will have Ashley stay a certain distance away from Leon while he is in the middle of a fight.

It’s an innovative way to change Ashley’s positioning: not only is she more nuanced as an NPC companion, but she can also survive on her own with the “loose” command. To balance this out, Capcom removed the “hide” command except in a few scenarios (where I think it is still sorely needed).

One of Ashley’s biggest changes is that she now warns Leon when an unseen enemy is about to attack him. On numerous occasions Ashley saved me from a killing blow by calling it out before it landed. She will also alert the player when an enemy is preparing to fire a projectile at Leon.

All in all, I am very satisfied with Ashley’s changes. She made Resident Evil 4 even better, despite being someone I had to babysit to keep her safe from the Plaga-infested villagers.

That’s a lotta luggage

Returning to the game is the beloved attache inventory system.

The good old attache case is back – with a few changes

The attache functions similarly to the old system, but it’s learned a few new tricks. The case now has functional charms that give Leon various buffs, ranging from bonus ammunition crafting and additional healing from herbs (plus eggs and fish) to a legendary charm that can even increase Leon’s movement speed.

Each attache case can hold a maximum of three charms out of a possible thirty. Charms are unlocked through a simple gacha system at the returning shooting range, and it will take a while to unlock every charm in the game.

Like the case, guns are also upgradeable. And while most of the weapons from the original game make a comeback, the P.R.L. 412 (which basically looked like a handheld laser cannon) is not included as of this writing.

It takes a lot of Pesetas to unlock gun upgrades too, but they are absolutely essential to surviving the horrors of the Plaga-infested Spanish countryside. However, not everything in the game can be purchased with money. The spinel also makes a comeback but in a slightly different form.

Instead of being an easily found collectible, the spinel is now a form of currency that Leon can trade with the merchant for nifty items such as treasure maps and additional attache cases, which grant buffs such as an increased spawn rate for Red Herbs and crafting materials.

As you can hopefully see from all of this, the Resident Evil 4 remake plays exactly like the original game… but at the same time it’s almost unrecognizable, given how many of its systems have been changed for the better.

The hills have Plagas

Unlike the controls, the story of RE4’s remake plays out almost the same as the original’s. The US president’s daughter gets kidnapped, and information gathered from the augmented-reality mini-game (which launched a few weeks before the game did) suggests Ashley was smuggled into Spain.

Leon is tasked by the President to investigate Ashley’s disappearance, and to get her back alive. There are a few key differences here and there as you progress through the story, but the overall gist is still the same. Still, there’s a devil in some of those details.

That’s because some of the changes in the story could possibly alter the direction of Resident Evil 5 and 6, should they be next on the docket to be remade.

Further Changes

Along with those story details, some of the characters’ fates have also been altered. That includes Luis Sera, who in the original was destined to die in the middle of the game, during the castle section. (Minor spoilers ahead -ed)

Luis also becomes a companion NPC in the latter half of the game. But unlike Ashley, Luis can actually fight for himself, and the notorious double El Gigante fight is made a little easier with his help. Worth noting, too, is how much more depth Luis’s character gains through his voice lines and the various notes Leon finds throughout his journey. Another character with a few key changes is one Ada Wong.

She is still protecting Leon from the shadows while acting on her own agenda, but Ada is no longer a one-dimensional femme fatale. While remaining cool, she is made more human in her reactions and interactions with Leon. She still cares for the former rookie cop whose life she changed when she, uh, betrayed him… and seemingly died in front of him during the events of Resident Evil 2.

So Leon and Ada have clearly changed and been made more whole by the subtle storytelling in their encounters. What hasn’t changed is that they both still clearly care for each other, albeit filtered through Leon’s frustrations and jaded views on Ada and on life in general.

Changes have also been made in the presentation and characteristics of major players such as Chief Mendez and Jack Krauser. While not as cartoony as his original version, Ramon Salazar is also altered for the better. Instead of being goofy, he is presented as trying to seem noble but with darker undertones, almost like a psychopath pretending to be a posh noble.

All in all, I really enjoyed how much more grounded the characters seem to be in the new version of RE4. It’s definitely a welcome change.

Overall

Resident Evil 4 (2023) is the definitive version, above even the original release. It is better in every way and it’s the most fun I’ve had with a game in years. It took me somewhere around 17 hours to finish my first playthrough on hardcore mode, but it was worth every second.

Game Results Screen

This is now my favorite game of all time, full stop. And while it has easily dethroned my former number 1 (which is, unsurprisingly, RE4 2006), there are still DLCs to come in the form of the Mercenaries mode as well as the rumored Separate Ways DLC (which is still unconfirmed as of this writing).

If Resident Evil 3 is arguably the worst remade game out of the original trilogy, then Resident Evil 4 is the best. And actually, as far as I’m concerned it’s the best Resident Evil game period. Bar none.

The best is yet to come for sure.

Resident Evil 4 (2023)
Release date: March 24th, 2023
Platforms: PC (reviewed), Xbox Series X|S, PS5, PS4, Xbox One
Publisher: Capcom
Developer: Capcom
MSRP: $59.99 USD
