
More durable metals for fusion power reactors

For many decades, nuclear fusion power has been viewed as the ultimate energy source. A fusion power plant could generate carbon-free energy at a scale needed to address climate change. And it could be fueled by deuterium recovered from an essentially endless source — seawater.

Decades of work and billions of dollars in research funding have yielded many advances, but challenges remain. To Ju Li, the TEPCO Professor in Nuclear Science and Engineering and a professor of materials science and engineering at MIT, there are still two big challenges. The first is to build a fusion power plant that generates more energy than is put into it; in other words, one that produces a net output of power. Researchers worldwide are making progress toward meeting that goal.

The second challenge that Li cites sounds straightforward: “How do we get the heat out?” But understanding the problem and finding a solution are both far from obvious.

Research in the MIT Energy Initiative (MITEI) includes development and testing of advanced materials that may help address those challenges, as well as many other challenges of the energy transition. MITEI has multiple corporate members that have been supporting MIT’s efforts to advance technologies required to harness fusion energy.

The problem: An abundance of helium, a destructive force

Key to a fusion reactor is a superheated plasma — an ionized gas — that’s reacting inside a vacuum vessel. As light atoms in the plasma combine to form heavier ones, they release fast neutrons with high kinetic energy that shoot through the surrounding vacuum vessel into a coolant. During this process, those fast neutrons gradually lose their energy by causing radiation damage and generating heat. The heat that’s transferred to the coolant is eventually used to raise steam that drives an electricity-generating turbine.

The problem is finding a material for the vacuum vessel that remains strong enough to keep the reacting plasma and the coolant apart, while allowing the fast neutrons to pass through to the coolant. If one considers only the damage due to neutrons knocking atoms out of position in the metal structure, the vacuum vessel should last a full decade. However, depending on what materials are used in the fabrication of the vacuum vessel, some projections indicate that the vacuum vessel will last only six to 12 months. Why is that? Today’s nuclear fission reactors also generate neutrons, and those reactors last far longer than a year.

The difference is that fusion neutrons possess much higher kinetic energy than fission neutrons do, and as they penetrate the vacuum vessel walls, some of them interact with the nuclei of atoms in the structural material, giving off particles that rapidly turn into helium atoms. The result is hundreds of times more helium atoms than are present in a fission reactor. Those helium atoms look for somewhere to land — a place with low “embedding energy,” a measure that indicates how much energy it takes for a helium atom to be absorbed. As Li explains, “The helium atoms like to go to places with low helium embedding energy.” And in the metals used in fusion vacuum vessels, there are places with relatively low helium embedding energy — namely, naturally occurring openings called grain boundaries.

Metals are made up of individual grains inside which atoms are lined up in an orderly fashion. Where the grains come together there are gaps where the atoms don’t line up as well. That open space has relatively low helium embedding energy, so the helium atoms congregate there. Worse still, helium atoms have a repellent interaction with other atoms, so the helium atoms basically push open the grain boundary. Over time, the opening grows into a continuous crack, and the vacuum vessel breaks.

That congregation of helium atoms explains why the structure fails much sooner than expected based just on the number of helium atoms that are present. Li offers an analogy to illustrate. “Babylon is a city of a million people. But the claim is that 100 bad persons can destroy the whole city — if all those bad persons work at the city hall.” The solution? Give those bad persons other, more attractive places to go, ideally in their own villages.

To Li, the problem and possible solution are the same in a fusion reactor. If many helium atoms go to the grain boundary at once, they can destroy the metal wall. The solution? Add a small amount of a material that has a helium embedding energy even lower than that of the grain boundary. And over the past two years, Li and his team have demonstrated — both theoretically and experimentally — that their diversionary tactic works. By adding nanoscale particles of a carefully selected second material to the metal wall, they’ve found they can keep the helium atoms that form from congregating in the structurally vulnerable grain boundaries in the metal.

Looking for helium-absorbing compounds

To test their idea, So Yeon Kim ScD ’23 of the Department of Materials Science and Engineering and Haowei Xu PhD ’23 of the Department of Nuclear Science and Engineering acquired a sample composed of two materials, or “phases,” one with a lower helium embedding energy than the other. They and their collaborators then implanted helium ions into the sample at a temperature similar to that in a fusion reactor and watched as bubbles of helium formed. Transmission electron microscope images confirmed that the helium bubbles occurred predominantly in the phase with the lower helium embedding energy. As Li notes, “All the damage is in that phase — evidence that it protected the phase with the higher embedding energy.”

Having confirmed their approach, the researchers were ready to search for helium-absorbing compounds that would work well with iron, which is often the principal metal in vacuum vessel walls. “But calculating helium embedding energy for all sorts of different materials would be computationally demanding and expensive,” says Kim. “We wanted to find a metric that is easy to compute and a reliable indicator of helium embedding energy.”

They found such a metric: the “atomic-scale free volume,” which is basically the maximum size of the internal vacant space available for helium atoms to potentially settle. “This is just the radius of the largest sphere that can fit into a given crystal structure,” explains Kim. “It is a simple calculation.” Examination of a series of possible helium-absorbing ceramic materials confirmed that atomic free volume correlates well with helium embedding energy. Moreover, many of the ceramics they investigated have higher free volume, thus lower embedding energy, than the grain boundaries do.
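
As a rough illustration of that metric (my own sketch, not the team's code), the largest-fitting-sphere radius can be brute-forced for an ideal hard-sphere lattice by scanning points in the unit cell and keeping the one farthest from every atom:

```python
import numpy as np
from itertools import product

def largest_interstitial_radius(basis, a, r_atom, grid=24):
    """Radius of the largest sphere that fits between hard-sphere atoms:
    scan points in the unit cell and keep the one farthest from every atom."""
    # Replicate the cell 3x3x3 so each grid point sees its true nearest atom.
    atoms = a * np.array([np.array(b) + np.array(s)
                          for b in basis
                          for s in product((-1, 0, 1), repeat=3)])
    best = 0.0
    for p in product(np.linspace(0, 1, grid, endpoint=False), repeat=3):
        d = np.min(np.linalg.norm(atoms - a * np.array(p), axis=1))
        best = max(best, d - r_atom)  # gap left once the atom's own radius is spent
    return best

# FCC cell: hard spheres touch along the face diagonal, so r = a*sqrt(2)/4.
a = 1.0
fcc = [(0, 0, 0), (0.5, 0.5, 0), (0.5, 0, 0.5), (0, 0.5, 0.5)]
r = a * np.sqrt(2) / 4
print(largest_interstitial_radius(fcc, a, r))  # ~0.146*a, the octahedral hole
```

For body-centered-cubic iron the same scan yields a much smaller hole (the standard result is about 0.29 times the atomic radius, at the tetrahedral site), which hints at why the extra open space of a grain boundary is so attractive to helium.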

However, in order to identify options for the nuclear fusion application, the screening needed to include some other factors. For example, in addition to the atomic free volume, a good second phase must be mechanically robust (able to sustain a load); it must not get very radioactive with neutron exposure; and it must be compatible — but not too cozy — with the surrounding metal, so it disperses well but does not dissolve into the metal. “We want to disperse the ceramic phase uniformly in the bulk metal to ensure that all grain boundary regions are close to the dispersed ceramic phase so it can provide protection to those regions,” says Li. “The two phases need to coexist, so the ceramic won’t either clump together or totally dissolve in the iron.”

Using their analytical tools, Kim and Xu examined about 50,000 compounds and identified 750 potential candidates. Of those, a good option for inclusion in a vacuum vessel wall made mainly of iron was iron silicate.
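
The screening logic can be sketched as a simple filter. Everything below (field names, thresholds, and the example records) is illustrative only, not the team's actual criteria or data:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    free_volume: float         # radius of largest fitting sphere, angstroms
    yield_strength_gpa: float  # proxy for mechanical robustness
    activation_risk: bool      # becomes highly radioactive under neutrons?
    dissolves_in_fe: bool      # loses its identity as a second phase?

GB_FREE_VOLUME = 0.9  # assumed grain-boundary reference value, angstroms

def passes_screen(c: Candidate) -> bool:
    return (c.free_volume > GB_FREE_VOLUME    # lower He embedding energy than GB
            and c.yield_strength_gpa > 1.0    # able to sustain a load
            and not c.activation_risk
            and not c.dissolves_in_fe)        # coexists without dissolving

pool = [
    Candidate("Fe2SiO4 (iron silicate)", 1.2, 1.5, False, False),
    Candidate("hypothetical oxide A",    1.4, 0.4, False, False),  # too soft
    Candidate("hypothetical carbide B",  0.7, 2.0, False, False),  # low free volume
]
survivors = [c.name for c in pool if passes_screen(c)]
print(survivors)  # ['Fe2SiO4 (iron silicate)']
```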

Experimental testing

The researchers were ready to examine samples in the lab. To make the composite material for proof-of-concept demonstrations, Kim and collaborators dispersed nanoscale particles of iron silicate into iron and implanted helium into that composite material. She took X-ray diffraction (XRD) images before and after implanting the helium and also computed the XRD patterns. The ratio between the implanted helium and the dispersed iron silicate was carefully controlled to allow a direct comparison between the experimental and computed XRD patterns. The measured XRD intensity changed with the helium implantation exactly as the calculations had predicted. “That agreement confirms that atomic helium is being stored within the bulk lattice of the iron silicate,” says Kim.

To follow up, Kim directly counted the number of helium bubbles in the composite. In iron samples without the iron silicate added, grain boundaries were flanked by many helium bubbles. In contrast, in the iron samples with the iron silicate ceramic phase added, helium bubbles were spread throughout the material, with many fewer occurring along the grain boundaries. Thus, the iron silicate had provided sites with low helium-embedding energy that lured the helium atoms away from the grain boundaries, protecting those vulnerable openings and preventing cracks from opening up and causing the vacuum vessel to fail catastrophically.

The researchers conclude that adding just 1 percent (by volume) of iron silicate to the iron walls of the vacuum vessel will cut the number of helium bubbles in half and also reduce their diameter by 20 percent — “and having a lot of small bubbles is OK if they’re not in the grain boundaries,” explains Li.
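
A back-of-envelope check (my own estimate, assuming spherical bubbles and that the 20 percent reduction applies to the mean diameter) shows how much those two numbers compound:

```python
# Halving the bubble count and shrinking diameters by 20 percent cuts the
# total bubble volume to roughly a quarter of the untreated case.
count_factor = 0.5        # half as many bubbles
diameter_factor = 0.8     # diameters reduced by 20 percent
volume_factor = count_factor * diameter_factor ** 3  # volume scales as d^3
print(f"total bubble volume: {volume_factor:.0%} of the untreated case")  # 26%
```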

Next steps

Thus far, Li and his team have gone from computational studies of the problem and a possible solution to experimental demonstrations that confirm their approach. And they’re well on their way to commercial fabrication of components. “We’ve made powders that are compatible with existing commercial 3D printers and are preloaded with helium-absorbing ceramics,” says Li. The helium-absorbing nanoparticles are well dispersed and should provide sufficient helium uptake to protect the vulnerable grain boundaries in the structural metals of the vessel walls. While Li confirms that there’s more scientific and engineering work to be done, he, along with Alexander O'Brien PhD ’23 of the Department of Nuclear Science and Engineering and Kang Pyo So, a former postdoc in the same department, have already founded a startup company that’s ready to 3D print structural materials that can meet all the challenges faced by the vacuum vessel inside a fusion reactor.

This research was supported by Eni S.p.A. through the MIT Energy Initiative. Additional support was provided by a Kwajeong Scholarship; the U.S. Department of Energy (DOE) Laboratory Directed Research and Development program at Idaho National Laboratory; U.S. DOE Lawrence Livermore National Laboratory; and Creative Materials Discovery Program through the National Research Foundation of Korea.

© Photo: Gretchen Ertl

Based on theoretical and experimental studies, MIT engineers have shown that adding nanoparticles of certain ceramics to the metal walls of the vessel containing the reacting plasma inside a nuclear fusion reactor can protect the metal from damage, significantly extending its lifetime. Professor Ju Li (right) and postdoc So Yeon Kim (left) examine samples of the composite they have fabricated for their demonstrations.

Procreate defies AI trend, pledges “no generative AI” in its illustration app

20 August 2024, 18:52
Still of Procreate CEO James Cuda from a video posted to X. (credit: Procreate)

On Sunday, Procreate announced that it will not incorporate generative AI into its popular iPad illustration app. The decision comes in response to an ongoing backlash from some parts of the art community, which has raised concerns about the ethical implications and potential consequences of AI use in creative industries.

"Generative AI is ripping the humanity out of things," Procreate wrote on its website. "Built on a foundation of theft, the technology is steering us toward a barren future."

In a video posted on X, Procreate CEO James Cuda laid out his company's stance, saying, "We’re not going to be introducing any generative AI into our products. I don’t like what’s happening to the industry, and I don’t like what it’s doing to artists."



Mega Man Returns, But As A Digital Funko Pop

19 August 2024, 21:05

Funko Fusion, the upcoming third-person brand-blending action game, is out next month. And to help spread the word, the devs have released a demo for Funko Fusion, as well as confirmed that not even Mega Man is safe from this mish-mash of IP.



A Collectable Card Battler – Let’s Play SolForge Fusion

9 May 2024, 22:27
If you are not familiar, SolForge Fusion is a CCG (Collectable Card Game) created by Richard Garfield (Magic The Gathering) and Justin Gary (Ascension). It started as a physical card game and it is now making its way over to Steam. I have been having the best time in its single-player campaign mode. It reminds …


FLUX: This new AI image generator is eerily good at creating human hands

2 August 2024, 19:47
AI-generated image by FLUX.1 dev: "A beautiful queen of the universe holding up her hands, face in the background." (credit: FLUX.1)

On Thursday, AI-startup Black Forest Labs announced the launch of its company and the release of its first suite of text-to-image AI models, called FLUX.1. The German-based company, founded by researchers who developed the technology behind Stable Diffusion and invented the latent diffusion technique, aims to create advanced generative AI for images and videos.

The launch of FLUX.1 comes about seven weeks after Stability AI's troubled release of Stable Diffusion 3 Medium in mid-June. Stability AI's offering faced widespread criticism among image-synthesis hobbyists for its poor performance in generating human anatomy, with users sharing examples of distorted limbs and bodies across social media. That problematic launch followed the earlier departure of three key engineers from Stability AI—Robin Rombach, Andreas Blattmann, and Dominik Lorenz—who went on to found Black Forest Labs along with latent diffusion co-developer Patrick Esser and others.

Black Forest Labs launched with the release of three FLUX.1 text-to-image models: a high-end commercial "pro" version, a mid-range "dev" version with open weights for non-commercial use, and a faster open-weights "schnell" version ("schnell" means quick or fast in German). Black Forest Labs claims its models outperform existing options like Midjourney and DALL-E in areas such as image quality and adherence to text prompts.



Nvidia Conquers Latest AI Tests​

12 June 2024, 17:00


For years, Nvidia has dominated many machine learning benchmarks, and now there are two more notches in its belt.

MLPerf, the AI benchmarking suite sometimes called “the Olympics of machine learning,” has released a new set of training tests to help make more and better apples-to-apples comparisons between competing computer systems. One of MLPerf’s new tests concerns fine-tuning of large language models, a process that takes an existing trained model and trains it a bit more with specialized knowledge to make it fit for a particular purpose. The other is for graph neural networks, a type of machine learning behind some literature databases, fraud detection in financial systems, and social networks.

Even with the additions and the participation of computers using Google’s and Intel’s AI accelerators, systems powered by Nvidia’s Hopper architecture dominated the results once again. One system that included 11,616 Nvidia H100 GPUs—the largest collection yet—topped each of the nine benchmarks, setting records in five of them (including the two new benchmarks).

“If you just throw hardware at the problem, it’s not a given that you’re going to improve.” —Dave Salvator, Nvidia

The 11,616-H100 system is “the biggest we’ve ever done,” says Dave Salvator, director of accelerated computing products at Nvidia. It smashed through the GPT-3 training trial in less than 3.5 minutes. A 512-GPU system, for comparison, took about 51 minutes. (Note that the GPT-3 task is not a full training, which could take weeks and cost millions of dollars. Instead, the computers train on a representative portion of the data, at an agreed-upon point well before completion.)

Compared to Nvidia’s largest entrant on GPT-3 last year, a 3,584 H100 computer, the 3.5-minute result represents a 3.2-fold improvement. You might expect that just from the difference in the size of these systems, but in AI computing that isn’t always the case, explains Salvator. “If you just throw hardware at the problem, it’s not a given that you’re going to improve,” he says.

“We are getting essentially linear scaling,” says Salvator. By that he means that twice as many GPUs lead to a halved training time. “[That] represents a great achievement from our engineering teams,” he adds.

Competitors are also getting closer to linear scaling. This round Intel deployed a system using 1,024 GPUs that performed the GPT-3 task in 67 minutes versus a computer one-fourth the size that took 224 minutes six months ago. Google’s largest GPT-3 entry used 12 times the number of TPU v5p accelerators as its smallest entry and performed its task nine times as fast.
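
Scaling efficiency, the fraction of ideal linear speedup actually achieved, can be computed directly from the figures in the text:

```python
def scaling_efficiency(gpus_small, time_small, gpus_large, time_large):
    """Fraction of ideal linear scaling: with perfect scaling, training time
    falls by the same factor that the GPU count grows."""
    ideal = gpus_large / gpus_small
    actual = time_small / time_large
    return actual / ideal

# Figures from the text (times in minutes)
print(scaling_efficiency(512, 51, 11616, 3.5))  # Nvidia H100 GPT-3 runs
print(scaling_efficiency(256, 224, 1024, 67))   # Intel, quarter- vs full-size
```

By this measure the Intel pair of runs comes in at roughly 0.84 of ideal and the 512-to-11,616-GPU Nvidia comparison at roughly 0.64; Salvator's "essentially linear" claim refers to doublings between adjacent system sizes.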

Linear scaling is going to be particularly important for upcoming “AI factories” housing 100,000 GPUs or more, Salvator says. He says to expect one such data center to come online this year, and another, using Nvidia’s next architecture, Blackwell, to start up in 2025.

Nvidia’s streak continues

Nvidia continued to boost training times despite using the same architecture, Hopper, as it did in last year’s training results. That’s all down to software improvements, says Salvator. “Typically, we’ll get a 2-2.5x [boost] from software after a new architecture is released,” he says.

For GPT-3 training, Nvidia logged a 27 percent improvement from the June 2023 MLPerf benchmarks. Salvator says there were several software changes behind the boost. For example, Nvidia engineers tuned up Hopper’s use of less accurate, 8-bit floating point operations by trimming unnecessary conversions between 8-bit and 16-bit numbers and better targeting of which layers of a neural network could use the lower precision number format. They also found a more intelligent way to adjust the power budget of each chip’s compute engines, and sped communication among GPUs in a way that Salvator likened to “buttering your toast while it’s still in the toaster.”

Additionally, the company implemented a scheme called flash attention. Invented in the Stanford University laboratory of SambaNova founder Chris Ré, flash attention is an algorithm that speeds transformer networks by minimizing writes to memory. When it first showed up in MLPerf benchmarks, flash attention shaved as much as 10 percent from training times. (Intel, too, used a version of flash attention but not for GPT-3. It instead used the algorithm for one of the new benchmarks, fine-tuning.)
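
The core idea of flash attention, an online softmax over blocks of keys and values so the full attention matrix is never materialized, can be sketched in a few lines (a toy single-head NumPy illustration, not the optimized GPU kernel):

```python
import numpy as np

def naive_attention(Q, K, V):
    s = Q @ K.T / np.sqrt(Q.shape[-1])
    w = np.exp(s - s.max(axis=-1, keepdims=True))
    return (w / w.sum(axis=-1, keepdims=True)) @ V

def flash_attention(Q, K, V, block=2):
    """Process K/V in blocks, keeping a running max (m) and running softmax
    denominator (l) per query, so the full score matrix is never stored."""
    n, d = Q.shape
    out, m, l = np.zeros((n, d)), np.full(n, -np.inf), np.zeros(n)
    for i in range(0, K.shape[0], block):
        Kb, Vb = K[i:i + block], V[i:i + block]
        s = Q @ Kb.T / np.sqrt(d)              # scores for this block only
        m_new = np.maximum(m, s.max(axis=-1))
        scale = np.exp(m - m_new)              # rescale old accumulators
        p = np.exp(s - m_new[:, None])
        l = l * scale + p.sum(axis=-1)
        out = out * scale[:, None] + p @ Vb
        m = m_new
    return out / l[:, None]

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(6, 4)) for _ in range(3))
print(np.allclose(naive_attention(Q, K, V), flash_attention(Q, K, V)))  # True
```

The blocked result is mathematically identical to the naive one; the savings come from memory traffic, not from approximation.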

Using other software and network tricks, Nvidia delivered an 80 percent speedup in the text-to-image test, Stable Diffusion, versus its submission in November 2023.

New benchmarks

MLPerf adds new benchmarks and upgrades old ones to stay relevant to what’s happening in the AI industry. This year saw the addition of fine-tuning and graph neural networks.

Fine tuning takes an already trained LLM and specializes it for use in a particular field. Nvidia, for example, took a trained 43-billion-parameter model and trained it on the GPU-maker’s design files and documentation to create ChipNeMo, an AI intended to boost the productivity of its chip designers. At the time, the company’s chief technology officer Bill Dally said that training an LLM was like giving it a liberal arts education, and fine tuning was like sending it to graduate school.

The MLPerf benchmark takes a pretrained Llama-2-70B model and asks the system to fine tune it using a dataset of government documents with the goal of generating more accurate document summaries.

There are several ways to do fine-tuning. MLPerf chose one called low-rank adaptation (LoRA). The method winds up training only a small portion of the LLM’s parameters, leading to a 3-fold lower burden on hardware and reduced use of memory and storage versus other methods, according to the organization.
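
A minimal sketch of why LoRA is so much cheaper: the frozen weight W is augmented with a low-rank product B·A, and only A and B are trained (the dimensions below are illustrative, not MLPerf's exact configuration):

```python
import numpy as np

def lora_param_fraction(d_in, d_out, rank):
    """Trainable fraction when a frozen (d_out x d_in) weight W is adapted
    as W + B @ A, with only A (rank x d_in) and B (d_out x rank) trained."""
    return rank * (d_in + d_out) / (d_in * d_out)

def lora_forward(x, W, A, B, alpha=1.0):
    # Frozen base projection plus the learned low-rank correction.
    return x @ W.T + alpha * (x @ A.T @ B.T)

# Illustrative dimensions: a 4096x4096 projection, typical of the layer
# sizes found in 70B-class models, adapted at rank 8.
print(f"{lora_param_fraction(4096, 4096, 8):.2%}")  # 0.39% of the parameters
```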

The other new benchmark involved a graph neural network (GNN). These are for problems that can be represented by a very large set of interconnected nodes, such as a social network or a recommender system. Compared to other AI tasks, GNNs require a lot of communication between nodes in a computer.

The benchmark trained a GNN on a database that shows relationships about academic authors, papers, and institutes—a graph with 547 million nodes and 5.8 billion edges. The neural network was then trained to predict the right label for each node in the graph.
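
One message-passing step of a graph neural network can be sketched on a toy graph (a generic graph-convolution layer of my own, not the benchmark's actual model):

```python
import numpy as np

def gnn_layer(A, H, W):
    """One graph-convolution step: every node averages the features of its
    neighbors (and itself), then applies a shared linear map and a ReLU."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)  # node degrees for mean-pooling
    return np.maximum(0.0, (A_hat / deg) @ H @ W)

# Toy graph of 4 nodes in a chain (in the benchmark, the nodes would be
# authors, papers, and institutes, 547 million of them).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))   # initial node features
W = rng.normal(size=(8, 3))   # maps features to 3 candidate labels
print(gnn_layer(A, H, W).argmax(axis=1))  # one predicted label per node
```

The averaging step is where the heavy inter-node communication comes from: at benchmark scale, a node's neighbors usually live on other machines.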

Future fights

Training rounds in 2025 may see head-to-head contests comparing new accelerators from AMD, Intel, and Nvidia. AMD’s MI300 series was launched about six months ago, and a memory-boosted upgrade, the MI325x, is planned for the end of 2024, with the next-generation MI350 slated for 2025. Intel says its Gaudi 3, generally available to computer makers later this year, will appear in MLPerf’s upcoming inferencing benchmarks. Intel executives have said the new chip has the capacity to beat H100 at training LLMs. But the victory may be short-lived, as Nvidia has unveiled a new architecture, Blackwell, which is planned for late this year.


Funko Fusion is a co-op clash of big-headed pop culture brands

1 May 2024, 13:45

Remember a couple of years ago, when Funko announced a collaboration with Jon Burton's 10.10 Games to release new big budget video games? Well, the time has come to see exactly what this collaboration has produced, and it all looks rather chaotic.

The appropriately-named Funko Fusion is described as a co-op action "extravaganza" full of more brands than you can shake a stick at: Jurassic World, The Umbrella Academy, Battlestar Galactica and Nope. The Simon Pegg-fronted Hot Fuzz is even making a big-headed showing in the upcoming game, and that's far from all the properties players can expect to see on release.

Don't believe me? Well, you can see for yourself in Funko Fusion's reveal trailer, which features more giant head decapitations than I was expecting.



Sensor Fusion Challenges In Automotive

2 May 2024, 09:15

The number of sensors in automobiles is growing rapidly alongside new safety features and increasing levels of autonomy. The challenge is integrating them in a way that makes sense, because these sensors are optimized for different types of data, sometimes with different resolution requirements even for the same type of data, and frequently with very different latency, power consumption, and reliability requirements. Pulin Desai, group director for product marketing, management and business development at Cadence, talks about challenges with sensor fusion, the growing importance of four-dimensional sensing, what’s needed to future-proof sensor designs, and the difficulty of integrating one or more software stacks with conflicting requirements.


New Funko Fusion Trailer Cramming Even More Franchises Into This Nightmare

30 April 2024, 22:50

A new trailer is here for Funko Fusion, a game that looks more and more like a fever-dream mess of giant-headed characters from popular movies and shows. The game is out this September, so get excited, because our monoculture future is getting closer and closer!


Funko Pop's co-op shooter lets you blast the heads off dead-eyed dolls from Hot Fuzz, Battlestar Galactica and more this September

Your feelings on Funko Pop probably fall into one of two categories: you either hate the black-eyed, copy-paste figures modelled on pop-culture characters with a burning passion, or you own enough of them to construct a small fortress and defend your newfounded Funko nation from the government. Either way, it looks like the first video game starring the ubiquitous toy collectables might somehow scratch your itch.


This Company Will Give You A Fortnite Batman Skin If You Generate Enough AI Porn

18 April 2024, 19:00

Salad, a cloud computing and AI tech company, is renting high-end graphics cards found in gamers’ computers and using all that power to create AI-generated pornography. In return, the gamers who lend the company their GPUs are paid in Fortnite skins, Minecraft cosmetics, Roblox bux, and other gaming-related gift cards…


Lilbits: Limitless AI Pendant, Pixel Buds pro 2 leaked, and Motorola’s Edge 50 smartphone series launches (in Europe and Latin America)

17 April 2024, 00:15

The first major wearable built around AI hit the streets last week… and the Humane AI Pin was widely panned by reviewers as a buggy, overpriced mess that fails miserably to deliver on its promise. But a startup called Limitless is hoping to do better… by doing less. The upcoming Limitless Pendant is a $99 […]



Momentary Fusion Breakthroughs Face Hard Reality

By Edd Gent
6 February 2024, 22:43


The dream of fusion power inched closer to reality in December 2022, when researchers at Lawrence Livermore National Laboratory (LLNL) revealed that a fusion reaction had produced more energy than what was required to kick-start it. According to new research, the momentary fusion feat required exquisite choreography and extensive preparations, whose high degree of difficulty reveals a long road ahead before anyone dares hope a practicable power source could be at hand.

The groundbreaking result was achieved at the California lab’s National Ignition Facility (NIF), which uses an array of 192 high-power lasers to blast tiny pellets of deuterium and tritium fuel in a process known as inertial confinement fusion. This causes the fuel to implode, smashing its atoms together and generating higher temperatures and pressures than are found at the center of the sun. The atoms then fuse together, releasing huge amounts of energy.

“It showed there’s nothing fundamentally limiting us from being able to harness fusion in the laboratory.” —Annie Kritcher, Lawrence Livermore National Laboratory

The facility has been running since 2011, and for a long time the amount of energy produced by these reactions was significantly less than the amount of laser energy pumped into the fuel. But on 5 December 2022, researchers at NIF announced that they had finally achieved breakeven by generating 1.5 times more energy than was required to start the fusion reaction.
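
Using the widely reported figures for that shot (roughly 2.05 MJ of laser energy delivered to the target and 3.15 MJ of fusion yield; these numbers are not stated in this article), the target gain works out to about 1.5:

```python
# Widely reported figures for the 5 December 2022 NIF shot; they are an
# assumption here, not taken from this article.
laser_in_mj = 2.05     # laser energy delivered to the target
fusion_out_mj = 3.15   # fusion energy released
gain = fusion_out_mj / laser_in_mj
print(f"target gain Q = {gain:.2f}")  # Q = 1.54, the "1.5 times" in the text
```

Note that this gain counts only the laser energy reaching the target; the facility drew far more electricity from the grid to fire the lasers in the first place.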

A new paper published yesterday in Physical Review Letters confirms the team’s claims and details the complex engineering required to make it possible. While the results underscore the considerable work ahead, Annie Kritcher, a physicist at LLNL who led design of the experiment, says it still signals a major milestone in fusion science. “It showed there’s nothing fundamentally limiting us from being able to harness fusion in the laboratory,” she says.

While the experiment was characterized as a breakthrough, Kritcher says it was actually the result of painstaking incremental improvements to the facility’s equipment and processes. In particular, the team has spent years perfecting the design of the fuel pellet and the cylindrical gold container that houses it, known as a “hohlraum”.

Why is fusion so hard?

When lasers hit the outside of this capsule, their energy is converted into X-rays that then blast the fuel pellet, which consists of a diamond outer shell coated on the inside with deuterium and tritium fuel. It’s crucial that the hohlraum is as symmetrical as possible, says Kritcher, so it distributes X-rays evenly across the pellet. This ensures the fuel is compressed equally from all sides, allowing it to reach the temperatures and pressures required for fusion. “If you don’t do that, you can basically imagine your plasmas squirting out in one direction, and you can’t squeeze it and heat it enough,” she says.

Carefully tailoring the laser beams is also important, Kritcher says, because laser light can scatter off the hohlraum, reducing efficiency and potentially damaging laser optics. In addition, as soon as the laser starts to hit the capsule, it starts giving off a plume of plasma that interferes with the beam. “It’s a race against time,” says Kritcher. “We’re trying to get the laser pulse in there before this happens, because then you can’t get the laser energy to go where you want it to go.”

The design process is slow going because the facility is capable of carrying out only a few shots a year, limiting the team’s ability to iterate. And predicting how changes will pan out ahead of time is difficult, given our poor understanding of the extreme physics at play. “We’re blasting a tiny target with the biggest laser in the world, and a whole lot of crap is flying all over the place,” says Kritcher. “And we’re trying to control that to very, very precise levels.”

Nonetheless, by analyzing the results of previous experiments and using computer modeling, the team was able to crack the problem. They worked out that using a slightly higher power laser coupled with a thicker diamond shell around the fuel pellet could overcome the destabilizing effects of imperfections on the pellet’s surface. Moreover, they found these modifications could also help confine the fusion reaction for long enough for it to become self-sustaining. The resulting experiment ended up producing 3.15 megajoules, considerably more than the 2.05 MJ produced by the lasers.
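The gain quoted here is easy to verify from the reported figures. A minimal sketch in Python (the variable names are ours, not NIF's):

```python
# Target gain Q for the 5 December 2022 NIF shot, from the figures above.
laser_energy_mj = 2.05    # laser energy delivered to the target, in megajoules
fusion_yield_mj = 3.15    # fusion energy released, in megajoules

q_target = fusion_yield_mj / laser_energy_mj
print(f"target gain Q = {q_target:.2f}")  # about 1.5, i.e. scientific breakeven
```

Note that this "scientific" gain counts only the laser energy reaching the target, not the electricity consumed to fire the lasers, which is a far larger number.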

Since then, the team has carried out six more experiments—two that have generated roughly the same amount of energy as was put in and four that significantly exceeded it. Consistently achieving breakeven is a significant feat, says Kritcher. However, she adds that the significant variability in the amount of energy produced remains something the researchers need to address.

This kind of inconsistency is unsurprising, though, says Saskia Mordijck, an associate professor of physics at the College of William & Mary in Virginia. The amount of energy generated is strongly linked to how self-sustaining the reactions are, which can be impacted by very small changes in the setup, she says. She compares the challenge to landing on the moon—we know how to do it, but it’s such an enormous technical challenge that there’s no guarantee you’ll stick the landing.

Relatedly, researchers from the University of Rochester’s Laboratory for Laser Energetics today reported in the journal Nature Physics that they have developed an inertial confinement fusion system that’s one-hundredth the size of NIF’s. Their 28 kilojoule laser system, the team noted, can at least yield more fusion energy than what is contained in the central plasma—an accomplishment that’s on the road toward NIF’s success, but still a distance away. They’re calling what they’ve developed a “spark plug” toward more energetic reactions.

Both NIF’s and LLE’s newly reported results represent steps along a development path, and in both cases that path remains long and challenging if inertial confinement fusion is ever to become more than a research curiosity.

Plenty of other obstacles remain beyond those noted above, too. Current calculations compare the energy generated against the NIF laser’s output, but that glosses over the fact that the lasers draw more than 100 times as much power from the grid as any fusion reaction yields. That means either energy gains or laser efficiency would need to improve by two orders of magnitude to break even in any practical sense. NIF’s fuel pellets are also extremely expensive, says Kritcher, with each priced at an estimated $100,000. And producing a reasonable amount of power would mean dramatically increasing the frequency of NIF’s shots, a feat barely on the horizon for a facility that requires months to load up the next nanosecond-long burst.
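That wall-plug shortfall can be made concrete with a quick calculation. The 300 MJ grid-draw figure below is an assumed, illustrative number chosen to be consistent with the "more than 100 times" ratio cited above; it is not a value reported in the article:

```python
# Rough wall-plug energy balance for a single NIF shot.
# grid_energy_mj is an assumed figure, consistent with the article's
# ">100x the fusion yield" characterization of the lasers' grid draw.
fusion_yield_mj = 3.15   # fusion energy from the December 2022 shot
grid_energy_mj = 300.0   # assumed electricity drawn from the grid per shot

q_wall_plug = fusion_yield_mj / grid_energy_mj
shortfall = grid_energy_mj / fusion_yield_mj  # factor still to be closed

print(f"wall-plug gain ~ {q_wall_plug:.4f}")
print(f"gain x laser efficiency must improve ~{shortfall:.0f}x to truly break even")
```

Under these assumptions the wall-plug gain is about 0.01, which is where the "two orders of magnitude" figure comes from.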

“Those are the biggest challenges,” Kritcher says. “But I think if we overcome those, it’s really not that hard at that point.”


UPDATE: 8 Feb. 2024: The story was corrected to attribute the final quote to Annie Kritcher, not Saskia Mordijck, as the story originally stated.
6 Feb. 2024 6 p.m. ET: The story was updated to include news of the University of Rochester’s Laboratory for Laser Energetics new research findings.


Why The New York Times might win its copyright lawsuit against OpenAI

20 February 2024 at 15:05


The day after The New York Times sued OpenAI for copyright infringement, the author and systems architect Daniel Jeffries wrote an essay-length tweet arguing that the Times “has a near zero probability of winning” its lawsuit. As we write this, it has been retweeted 288 times and received 885,000 views.

“Trying to get everyone to license training data is not going to work because that's not what copyright is about,” Jeffries wrote. “Copyright law is about preventing people from producing exact copies or near exact copies of content and posting it for commercial gain. Period. Anyone who tells you otherwise is lying or simply does not understand how copyright works.”

This article is written by two authors. One of us is a journalist who has been on the copyright beat for nearly 20 years. The other is a law professor who has taught dozens of courses on IP and Internet law. We’re pretty sure we understand how copyright works. And we’re here to warn the AI community that it needs to take these lawsuits seriously.



Reddit sells training data to unnamed AI company ahead of IPO

19 February 2024 at 22:10


On Friday, Bloomberg reported that Reddit has signed a contract allowing an unnamed AI company to train its models on the site's content, according to people familiar with the matter. The move comes as the social media platform nears the introduction of its initial public offering (IPO), which could happen as soon as next month.

Reddit initially revealed the deal, which is reported to be worth $60 million a year, earlier in 2024 to potential investors of an anticipated IPO, Bloomberg said. The Bloomberg source speculates that the contract could serve as a model for future agreements with other AI companies.

After an era in which AI companies used training data without seeking any rightsholder's permission, some tech firms have more recently begun striking licensing deals for content used to train AI models similar to GPT-4 (which powers the paid version of ChatGPT). In December, for example, OpenAI signed an agreement with German publisher Axel Springer (publisher of Politico and Business Insider) for access to its articles. Previously, OpenAI struck deals with other organizations, including the Associated Press. Reportedly, OpenAI is also in licensing talks with CNN, Fox, and Time, among others.

