FreshRSS

IEEE Spectrum | Peter Fairley

Taiwan Reboots Its Solar-Power Fishponds

19 August 2024, 14:00


A maze of brackish and freshwater ponds covers Taiwan’s coastal plain, supporting aquaculture operations that produce roughly NT $30 billion (US $920 million) worth of seafood every year. Taiwan’s government is hoping that the more than 400 square kilometers of fishponds can simultaneously produce a second harvest: solar power.

What is aquavoltaics?

That’s the impetus behind the new 42.9-megawatt aquavoltaics facility in the southern city of Tainan. To build it, Taipei-based Hongde Renewable Energy bought 57.6 hectares of abandoned land in Tainan’s fishpond-rich Qigu district, created earthen berms to delineate the two dozen ponds, and installed solar panels along the berms and over six reservoir ponds.

Tony Chang, general manager of the Hongde subsidiary Star Aquaculture, says 18 of the ponds are stocked with mullet (prized for their roe) and shrimp, while milkfish help clean the water in the reservoir ponds. In 2023, the first full year of operation, Chang says his team harvested over 100,000 kilograms of seafood. This August, they began stocking a cavernous indoor facility, also festooned with photovoltaics, to cultivate white-legged shrimp.

A number of other countries have been experimenting with aquavoltaics, including China, Chile, Bangladesh, and Norway, extending the concept to large solar arrays floating on rivers and bays. But nowhere else is the pairing of aquaculture and solar power seen as so crucial to the economy. Taiwan is striving to massively expand renewable generation to sustain its semiconductor fabs, and solar is expected to play a large role. But on this densely populated island—slightly larger than Maryland, smaller than the Netherlands—there’s not a lot of open space to install solar panels. The fishponds are hard to ignore. By the end of 2025, the government is looking to install 4.4 gigawatts of aquavoltaics to help meet its goal of 20 GW of solar generation.

Is Taiwan’s aquavoltaics plan unrealistic?

Meanwhile, though, solar developers are struggling to deliver on Taiwan’s ambitious goals, even as some projections suggest Taiwan will need over eight times more solar by 2050. And aquavoltaics in particular have come under scrutiny from environmental groups. In 2020, for example, reporter Cai Jiashan visited 100 solar plants built on agricultural land, including fishponds, and found dozens of cases where solar developers built more solar capacity than the law intended, or secured permits based on promises of continued farming that weren’t kept.

Star Aquaculture grows milkfish to help clean water for its breeding ponds. (Photo: HDRenewables)

On 7 July 2020, Taiwan’s Ministry of Agriculture responded by restricting solar development on farmland, in what the solar industry called the “Double-Seven Incident.” Many aquavoltaic projects were canceled while others were delayed. The latter included a 10-MW facility in Tainan that Google had announced to great fanfare in 2019 as its first renewable-energy investment in Asia, to supply power for the company’s Taiwan data centers. The array finally started up in 2023, three years behind schedule.

Critics of Taiwan’s renewed aquavoltaic plans thus see the government’s goal as unrealistic. Yuping Chen, executive director of the Taiwan Environment and Planning Association, a Taipei-based nonprofit dedicated to resolving conflicts between solar energy and agriculture, says of aquavoltaics, “It is claimed to be crucial by the government, but it’s impossible to realize.”

How aquavoltaics could revive fishing, boost revenue

Solar developers and government officials who endorse aquavoltaics argue that such projects could revive the island’s traditional fishing community. Taiwan’s fishing villages are aging and shrinking as younger people take city jobs. Climate change has also taken a toll. Severe storms damage fishpond embankments, while extreme heat and rainfall stress the fish.

4.4 gigawatts: the amount of aquavoltaics Taiwan wants to install by the end of 2025

Solar development could help reverse these trends. Several recent studies examining fishponds in Taiwan found that adding solar improves profitability, providing an opportunity to reinvigorate communities if agrivoltaic investors share their returns. Alan Wu, deputy director of the Green Energy Initiative at Taiwan’s Industrial Technology Research Institute, says the Hsinchu-based lab has opened a research station in Tainan to connect solar and aquaculture firms. ITRI is helping aquavoltaics facilities boost their revenues by figuring out how they can raise “species of high economic value that are normally more difficult to raise,” Wu says.

Such high-value products include the 27,000 pieces of sun-dried mullet roe that Hongde Renewable Energy’s Tainan site produced last year. The new indoor facility, meanwhile, should boost yields of the relatively pricey whiteleg shrimp. Chang expects the indoor harvests to fetch $500,000 to $600,000 annually, compared to $800,000 to $900,000 from the larger outdoor ponds.

The solar roof over the 100,000-liter indoor growth tanks protects the 2.7 million shrimp against weather and bird droppings. Chang says a patent-pending drain mechanically removes waste from each tank, and also sucks out the shrimp when they’re ready for harvest.

Land that Star Aquaculture set aside for wildlife now attracts endangered birds like the black-faced spoonbill [left] and the oriental stork [right]. (Photos: iStock)

The company has also set aside 9 percent of the site for wildlife, in response to concerns from conservationists. “Egrets, endangered oriental storks, and black-faced spoonbills continue to use the site,” Chang says. “If it was all covered with PV, it could impact their habitat.”

Such measures may not satisfy environmentalists, though. In a review published last month, researchers at Fudan University in Shanghai and two Chinese power firms concluded that China’s floating aquavoltaic installations—some of which already span 5 square kilometers—will “inevitably” alter the marine environment.

Aquavoltaic facilities that are entirely indoors may be an even harder sell as they scale up. Toshiba is backing such a plant in Tainan, to generate 120 MW for an unspecified “semiconductor manufacturer,” with plans for a 360-MW expansion. The resulting buildings could exclude wildlife from 5 square kilometers of habitat. Indoor projects could compensate by protecting land elsewhere. But, as Chen of the Taiwan Environment and Planning Association notes, developers of such sites may not take such measures unless they’re required by law to do so.

Liliputing | Lee Mathews

Waveshare UPS HAT easily adds battery backup to a Raspberry Pi

3 August 2024, 19:00

The Waveshare UPS HAT (E) offers an inexpensive, easy way to add battery backup to a Raspberry Pi project. It’s compatible with Raspberry Pi 5, 4B and 3B+ boards and accepts four 21700 lithium-ion batteries (not included). Just like a traditional UPS from a company like APC or CyberPower, the UPS HAT (E) springs into […]

The post Waveshare UPS HAT easily adds battery backup to a Raspberry Pi appeared first on Liliputing.

Reason.com | John Stossel

RFK Jr. Pays Lip Service to the Debt While Pushing Policies That Would Increase It

1 August 2024, 00:30
Robert F. Kennedy Jr. and John Stossel | Stossel TV

Robert F. Kennedy Jr. won applause at the Libertarian National Convention by criticizing government lockdowns and deficit spending, and saying America shouldn't police the world.

It made me want to interview him. This month, I did.

He said intelligent things about America's growing debt:

"President Trump said that he was going to balance the budget and instead he (increased the debt more) than every president in United States history—$8 trillion. President Biden is on track now to beat him."

It's good to hear a candidate actually talk about our debt.

"When the debt is this large…you have to cut dramatically, and I'm going to do that," he says.

But looking at his campaign promises, I don't see it.

He promises "affordable" housing via a federal program backing 3 percent mortgages.

"Imagine that you had a rich uncle who was willing to cosign your mortgage!" gushes his campaign ad. "I'm going to make Uncle Sam that rich uncle!"

I point out that such giveaways won't reduce our debt.

"That's not a giveaway," Kennedy replies. "Every dollar that I spend as president is going to go toward building our economy."

That's big government nonsense, like his other claim: "Every million dollars we spend on child care creates 22 jobs!"

Give me a break.

When I press him about specific cuts, Kennedy says, "I'll cut the military in half…cut it to about $500 billion….We are not the policemen of the world."

"Stop giving any money to Ukraine?" I ask.

"Negotiate a peace," Kennedy replies. "Biden has never talked to Putin about this, and it's criminal."

He never answered whether he'd give money to Ukraine. He did answer about Israel.

"Yes, of course we should,"

"[Since] you don't want to cut this spending, what would you cut?"

"Israel spending is rather minor," he responds. "I'm going to pick the most wasteful programs, put them all in one bill, and send them to Congress with an up and down vote."

Of course, Congress would just vote it down.

Kennedy's proposed cuts would hardly slow down our path to bankruptcy. Especially since he also wants new spending that activists pretend will reduce climate change.

At a concert years ago, he smeared "crisis" skeptics like me, who believe we can adjust to climate change, screaming at the audience, "Next time you see John Stossel and [others]… these flat-earthers, these corporate toadies—lying to you. This is treason, and we need to start treating them now as traitors!"

Now, sitting with him, I ask, "You want to have me executed for treason?"

"That statement," he replies, "it's not a statement that I would make today….Climate is existential. I think it's human-caused climate change. But I don't insist other people believe that. I'm arguing for free markets and then the lowest cost providers should prevail in the marketplace….We should end all subsidies and let the market dictate."

That sounds good: "Let the market dictate."

But wait, Kennedy makes money from solar farms backed by government-guaranteed loans. He "leaned on his contacts in the Obama administration to secure a $1.6 billion loan guarantee," wrote The New York Times.

"Why should you get a government subsidy?" I ask.

"If you're creating a new industry," he replies, "you're competing with the Chinese. You want the United States to own pieces of that industry."

I suppose that means his government would subsidize every industry leftists like.

Yet when a wind farm company proposed building one near his family's home, he opposed it.

"Seems hypocritical," I say.

"We're exterminating the right whale in the North Atlantic through these wind farms!" he replies.

I think he was more honest years ago, when he complained that "turbines…would be seen from Cape Cod, Martha's Vineyard… Nantucket….[They] will steal the stars and nighttime views."

Kennedy was once a Democrat, but now Democrats sue to keep him off ballots. Former Clinton Labor Secretary Robert Reich calls him a "dangerous nutcase."

Kennedy complains that Reich won't debate him.

"Nobody will," he says. "They won't have me on any of their networks."

Well, obviously, I will.

I especially wanted to confront him about vaccines.

In a future column, Stossel TV will post more from our hourlong discussion.

COPYRIGHT 2024 BY JFS PRODUCTIONS INC.

The post RFK Jr. Pays Lip Service to the Debt While Pushing Policies That Would Increase It appeared first on Reason.com.

Ars Technica | John Timmer

Silicon plus perovskite solar reaches 34 percent efficiency

2 August 2024, 20:36
Some solar panels, along with a diagram of a perovskite's crystal structure. (credit: Subhakitnibhat Kewiko)

As the price of silicon panels has continued to come down, we've reached the point where they're a small and shrinking cost of building a solar farm. That means that it might be worth spending more to get a panel that converts more of the incoming sunlight to electricity, since it allows you to get more out of the price paid to get each panel installed. But silicon panels are already pushing up against physical limits on efficiency. Which means our best chance for a major boost in panel efficiency may be to combine silicon with an additional photovoltaic material.

Right now, most of the focus is on pairing silicon with a class of materials called perovskites. Perovskite crystals can be layered on top of silicon, creating a panel with two materials that absorb different areas of the spectrum—plus, perovskites can be made from relatively cheap raw materials. Unfortunately, it has been difficult to make perovskites that are both high-efficiency and last for the decades that the silicon portion will.

Lots of labs are attempting to change that, though. And two of them reported some progress this week, including a perovskite/silicon system that achieved 34 percent efficiency.


Slashdot | EditorDavid

Could AI Speed Up the Design of Nuclear Reactors?

3 August 2024, 16:34
A professor at Brigham Young University "has figured out a way to shave critical years off the complicated design and licensing processes for modern nuclear reactors," according to an announcement from the university. "AI is teaming up with nuclear power."

The typical time frame and cost to license a new nuclear reactor design in the United States is roughly 20 years and $1 billion. To then build that reactor requires an additional five years and between $5 and $30 billion. By using AI in the time-consuming computational design process, [chemical engineering professor Matt] Memmott estimates a decade or more could be cut off the overall timeline, saving millions and millions of dollars in the process — which should prove critical given the nation's looming energy needs.... "Being able to reduce the time and cost to produce and license nuclear reactors will make that power cheaper and a more viable option for environmentally friendly power to meet the future demand...."

Engineers deal with elements from neutrons on the quantum scale all the way up to coolant flow and heat transfer on the macro scale. [Memmott] also said there are multiple layers of physics that are "tightly coupled" in that process: the movement of neutrons is tightly coupled to the heat transfer, which is tightly coupled to materials, which is tightly coupled to the corrosion, which is coupled to the coolant flow. "A lot of these reactor design problems are so massive and involve so much data that it takes months of teams of people working together to resolve the issues," he said... Memmott is finding AI can reduce that heavy time burden and lead to more power production to not only meet rising demands, but to also keep power costs down for general consumers...

Technically speaking, Memmott's research proves the concept of replacing a portion of the required thermal hydraulic and neutronics simulations with a trained machine learning model that predicts temperature profiles based on variable geometric reactor parameters, and then optimizing those parameters. The result would create an optimal nuclear reactor design at a fraction of the computational expense required by traditional design methods.

For his research, he and BYU colleagues built a dozen machine learning algorithms to examine their ability to process the simulated data needed in designing a reactor. They identified the top three algorithms, then refined the parameters until they found one that worked really well and could handle a preliminary data set as a proof of concept. It worked (and they published a paper on it), so they took the model and (for a second paper) put it to the test on a very difficult nuclear design problem: optimal nuclear shield design. The resulting papers, recently published in the academic journal Nuclear Engineering and Design, showed that their refined model can geometrically optimize the design elements much faster than the traditional method. In two days, Memmott's AI algorithm determined an optimal nuclear-reactor shield design, a task that had taken a real-world molten salt reactor company six months.

"Of course, humans still ultimately make the final design decisions and carry out all the safety assessments," Memmott says in the announcement, "but it saves a significant amount of time at the front end.... Our demand for electricity is going to skyrocket in years to come and we need to figure out how to produce additional power quickly. The only baseload power we can make in the gigawatt quantities needed that is completely emissions free is nuclear power." Thanks to long-time Slashdot reader schwit1 for sharing the article.
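The passage above describes a surrogate-modeling workflow: run a limited number of expensive simulations, train a fast regression model on the results, and then search the cheap surrogate for good geometric parameters. The sketch below illustrates that general idea with a toy stand-in for the simulator; it is not BYU's model, data, or physics code.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

def expensive_simulation(params):
    """Toy stand-in for a thermal-hydraulic/neutronics run (lower is better)."""
    thickness, spacing = params
    return (thickness - 1.2) ** 2 + (spacing - 0.7) ** 2

# 1. Run a limited number of "expensive" simulations over the design space.
X_train = rng.uniform([0.5, 0.2], [2.0, 1.5], size=(200, 2))
y_train = np.array([expensive_simulation(x) for x in X_train])

# 2. Train a cheap surrogate to predict the figure of merit from geometry.
surrogate = GradientBoostingRegressor().fit(X_train, y_train)

# 3. Optimize over the surrogate instead of the simulator.
candidates = rng.uniform([0.5, 0.2], [2.0, 1.5], size=(100_000, 2))
best = candidates[np.argmin(surrogate.predict(candidates))]
print("surrogate-optimal parameters:", best)   # close to [1.2, 0.7]
```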


IEEE Spectrum

Powering Planes With Microwaves Is Not the Craziest Idea

By: Ian McKay
24 June 2024, 15:00


Imagine it’s 2050 and you’re on a cross-country flight on a new type of airliner, one with no fuel on board. The plane takes off, and you rise above the airport. Instead of climbing to cruising altitude, though, your plane levels out and the engines quiet to a low hum. Is this normal? No one seems to know. Anxious passengers crane their necks to get a better view out their windows. They’re all looking for one thing.

Then it appears: a massive antenna array on the horizon. It’s sending out a powerful beam of electromagnetic radiation pointed at the underside of the plane. After soaking in that energy, the engines power up, and the aircraft continues its climb. Over several minutes, the beam will deliver just enough energy to get you to the next ground antenna located another couple hundred kilometers ahead.

The person next to you audibly exhales. You sit back in your seat and wait for your drink. Old-school EV-range anxiety is nothing next to this.

Electromagnetic waves on the fly

Beamed power for aviation is, I admit, an outrageous notion. If physics doesn’t forbid it, federal regulators or nervous passengers probably will. But compared with other proposals for decarbonizing aviation, is it that crazy?

Batteries, hydrogen, alternative carbon-based fuels—nothing developed so far can store energy as cheaply and densely as fossil fuels, or fully meet the needs of commercial air travel as we know it. So, what if we forgo storing all the energy on board and instead beam it from the ground? Let me sketch what it would take to make this idea fly.

Beamed Power for Aviation


Fly by Microwave: Warm up to a new kind of air travel

For the wireless-power source, engineers would likely choose microwaves because this type of electromagnetic radiation can pass unruffled through clouds and because receivers on planes could absorb it completely, with nearly zero risk to passengers.

To power a moving aircraft, microwave radiation would need to be sent in a tight, steerable beam. This can be done using technology known as a phased array, which is commonly used to direct radar beams. With enough elements spread out sufficiently and all working together, phased arrays can also be configured to focus power on a point a certain distance away, such as the receiving antenna on a plane.

Phased arrays work on the principle of constructive and destructive interference. The radiation from the antenna elements will, of course, overlap. In some directions the radiated waves will interfere destructively and cancel out one another, and in other directions the waves will fall perfectly in phase, adding together constructively. Where the waves overlap constructively, energy radiates in that direction, creating a beam of power that can be steered electronically.
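As a rough illustration of that interference math, here is a minimal numerical sketch; the element count, frequency, and steering angle are illustrative assumptions, not figures from the article. It applies a progressive phase shift across a uniform linear array and confirms that the summed contributions peak in the chosen steering direction.

```python
import numpy as np

# Minimal sketch: steering a uniform linear phased array by choosing
# element phases so the radiated waves add constructively in one direction.
# All parameters below are illustrative.

c = 3e8                  # speed of light, m/s
freq = 6e9               # ~6 GHz carrier (5 cm wavelength)
lam = c / freq           # wavelength, m
d = lam / 2              # half-wavelength element spacing
n = np.arange(64)        # 64 elements
steer_deg = 20.0         # desired beam direction off broadside

k = 2 * np.pi / lam
# Progressive phase shift that puts every element in phase along steer_deg.
phase = -k * d * n * np.sin(np.radians(steer_deg))

angles = np.radians(np.linspace(-90, 90, 1801))
array_factor = np.array([
    abs(np.sum(np.exp(1j * (k * d * n * np.sin(a) + phase))))
    for a in angles
]) / n.size

peak = np.degrees(angles[np.argmax(array_factor)])
print(f"beam peaks at ~{peak:.1f} degrees (steered to {steer_deg})")
```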

How far we can send energy in a tight beam with a phased array is governed by physics—specifically, by something called the diffraction limit. There’s a simple way to calculate the optimal case for beamed power: D1 D2 > λ R. In this mathematical inequality, D1 and D2 are the diameters of the sending and receiving antennas, λ is the wavelength of the radiation, and R is the distance between those antennas.

Now, let me offer some ballpark numbers to figure out how big the transmitting antenna (D1) must be. The size of the receiving antenna on the aircraft is probably the biggest limiting factor. A medium-size airliner has a wing and body area of about 1,000 square meters, which should provide for the equivalent of a receiving antenna that’s 30 meters wide (D2) built into the underside of the plane.

If physics doesn’t forbid it, federal regulators or nervous passengers probably will.

Next, let’s guess how far we would need to beam the energy. The line of sight to the horizon for someone in an airliner at cruising altitude is about 360 kilometers long, assuming the terrain below is level. But mountains would interfere, plus nobody wants range anxiety, so let’s place our ground antennas every 200 km along the flight path, each beaming energy half of that distance. That is, set R to 100 km.

Finally, assume the microwave wavelength (λ) is 5 centimeters. This provides a happy medium between a wavelength that’s too small to penetrate clouds and one that’s too large to gather back together on a receiving dish. Plugging these numbers into the equation above shows that in this scenario the diameter of the ground antennas (D1) would need to be at least about 170 meters. That’s gigantic, but perhaps not unreasonable. Imagine a series of three or four of these antennas, each the size of a football stadium, spread along the route, say, between LAX and SFO or between AMS and BER.
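For readers who want to reproduce that arithmetic, here is a back-of-the-envelope check of the D1 D2 > λ R inequality using the same ballpark numbers (a 30-meter receiving aperture, a 100-kilometer range, and a 5-centimeter wavelength):

```python
# Back-of-the-envelope check of the diffraction-limit inequality
# D1 * D2 > lambda * R, using the article's ballpark numbers.

wavelength = 0.05    # 5 cm, in meters
R = 100e3            # 100 km between ground antenna and aircraft
D2 = 30.0            # ~30 m effective receiving aperture on the plane

D1_min = wavelength * R / D2
print(f"minimum ground-antenna diameter: about {D1_min:.0f} m")  # ~167 m
```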

Power beaming in the real world

While what I’ve described is theoretically possible, in practice engineers have beamed only a fraction of the amount of power needed for an airliner, and they’ve done that only over much shorter distances.

NASA holds the record from an experiment in 1975, when it beamed 30 kilowatts of power over 1.5 km with a dish the size of a house. To achieve this feat, the team used an analog device called a klystron. The geometry of a klystron causes electrons to oscillate in a way that amplifies microwaves of a particular frequency—kind of like how the geometry of a whistle causes air to oscillate and produce a particular pitch.

Klystrons and their cousins, cavity magnetrons (found in ordinary microwave ovens), are quite efficient because of their simplicity. But their properties depend on their precise geometry, so it’s challenging to coordinate many such devices to focus energy into a tight beam.

In more recent years, advances in semiconductor technology have allowed a single oscillator to drive a large number of solid-state amplifiers in near-perfect phase coordination. This has allowed microwaves to be focused much more tightly than was possible before, enabling more-precise energy transfer over longer distances.

In 2022, the Auckland-based startup Emrod showed just how promising this semiconductor-enabled approach could be. Inside a cavernous hangar in Germany owned by Airbus, the researchers beamed 550 watts across 36 meters and kept over 95 percent of the energy flowing in a tight beam—far better than could be achieved with analog systems. In 2021, the U.S. Naval Research Laboratory showed that these techniques could handle higher power levels when it sent more than a kilowatt between two ground antennas over a kilometer apart. Other researchers have energized drones in the air, and a few groups even intend to use phased arrays to beam solar power from satellites to Earth.

A rectenna for the ages

So beaming energy to airliners might not be entirely crazy. But please remain seated with your seat belts fastened; there’s some turbulence ahead for this idea. A Boeing 737 aircraft at takeoff requires about 30 megawatts—a thousand times as much power as any power-beaming experiment has demonstrated. Scaling up to this level while keeping our airplanes aerodynamic (and flyable) won’t be easy.

Consider the design of the antenna on the plane, which receives and converts the microwaves to an electric current to power the aircraft. This rectifying antenna, or rectenna, would need to be built onto the underside surfaces of the aircraft with aerodynamics in mind. Power transmission will be maximized when the plane is right above the ground station, but it would be far more limited the rest of the time, when ground stations are far ahead or behind the plane. At those angles, the beam would activate only either the front or rear surfaces of the aircraft, making it especially hard to receive enough power.

With 30 MW blasting onto that small of an area, power density will be an issue. If the aircraft is the size of a Boeing 737, the rectenna would have to cram about 25 W into each square centimeter. Because the solid-state elements of the array would be spaced about a half-wavelength—or 2.5 cm—apart, this translates to about 150 W per element—perilously close to the maximum power density of any solid-state power-conversion device. The top mark in the 2016 IEEE/Google Little Box Challenge was about 150 W per cubic inch (less than 10 W per cubic centimeter).
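As a sanity check on those numbers, the short sketch below reproduces the arithmetic. The effective rectenna area of roughly 120 square meters is my assumption, chosen to be consistent with the article's figure of about 25 W per square centimeter; it is not stated in the article.

```python
# Rough check of the rectenna power-density figures.

P_beam = 30e6            # ~30 MW needed at takeoff, W
area_m2 = 120.0          # assumed effective rectenna area for a 737-class plane
element_pitch_cm = 2.5   # half-wavelength element spacing at 5 cm wavelength

density_w_per_cm2 = P_beam / (area_m2 * 1e4)
power_per_element = density_w_per_cm2 * element_pitch_cm ** 2

print(f"power density: ~{density_w_per_cm2:.0f} W/cm^2")    # ~25 W/cm^2
print(f"power per element: ~{power_per_element:.0f} W")     # ~156 W, i.e. "about 150 W"
```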

The rectenna will also have to weigh very little and minimize the disturbance to the airflow over the plane. Compromising the geometry of the rectenna for aerodynamic reasons might lower its efficiency. State-of-the art power-transfer efficiencies are only about 30 percent, so the rectenna can’t afford to compromise too much.

A Boeing 737 aircraft at takeoff requires about 30 megawatts—a thousand times as much power as any power-beaming experiment has demonstrated.

And all of this equipment will have to work in an electric field of about 7,000 volts per meter—the strength of the power beam. The electric field inside a microwave oven, which is only about a third as strong, can create a corona discharge, or electric arc, between the tines of a metal fork, so just imagine what might happen inside the electronics of the rectenna.

And speaking of microwave ovens, I should mention that, to keep passengers from cooking in their seats, the windows on any beamed-power airplane would surely need the same wire mesh that’s on the doors of microwave ovens—to keep those sizzling fields outside the plane. Birds, however, won’t have that protection.

Fowl flying through our power beam near the ground might encounter a heating of more than 1,000 watts per square meter—stronger than the sun on a hot day. Up higher, the beam will narrow to a focal point with much more heat. But because that focal point would be moving awfully fast and located higher than birds typically fly, any roasted ducks falling from the sky would be rare in both senses of the word. Ray Simpkin, chief science officer at Emrod, told me it’d take “more than 10 minutes to cook a bird” with Emrod’s relatively low-power system.

Legal challenges would surely come, though, and not just from the National Audubon Society. Thirty megawatts beamed through the air would be about 10 billion times as strong as typical signals at 5-cm wavelengths (a band currently reserved for amateur radio and satellite communications). Even if the transmitter could successfully put 99 percent of the waves into a tight beam, the 1 percent that’s leaked would still be a hundred million times as strong as approved transmissions today.

And remember that aviation regulators make us turn off our cellphones during takeoff to quiet radio noise, so imagine what they’ll say about subjecting an entire plane to electromagnetic radiation that’s substantially stronger than that of a microwave oven. All these problems are surmountable, perhaps, but only with some very good engineers (and lawyers).

Compared with the legal obstacles and the engineering hurdles we’d need to overcome in the air, the challenges of building transmitting arrays on the ground, huge as they would have to be, seem modest. The rub is the staggering number of them that would have to be built. Many flights occur over mountainous terrain, producing a line of sight to the horizon that is less than 100 km. So in real-world terrain we’d need more closely spaced transmitters. And for the one-third of airline miles that occur over oceans, we would presumably have to build floating arrays. Clearly, building out the infrastructure would be an undertaking on the scale of the Eisenhower-era U.S. interstate highway system.

Decarbonizing with the world’s largest microwave

People might be able to find workarounds for many of these issues. If the rectenna is too hard to engineer, for example, perhaps designers will find that they don’t have to turn the microwaves back into electricity—there are precedents for using heat to propel airplanes. A sawtooth flight path—with the plane climbing up as it approaches each emitter station and gliding down after it passes by—could help with the power-density and field-of-view issues, as could flying-wing designs, which have much more room for large rectennas. Perhaps using existing municipal airports or putting ground antennas near solar farms could reduce some of the infrastructure cost. And perhaps researchers will find shortcuts to radically streamline phased-array transmitters. Perhaps, perhaps.

To be sure, beamed power for aviation faces many challenges. But less-fanciful options for decarbonizing aviation have their own problems. Battery-powered planes don’t even come close to meeting the needs of commercial airlines. The best rechargeable batteries have about 5 percent of the effective energy density of jet fuel. At that figure, an all-electric airliner would have to fill its entire fuselage with batteries—no room for passengers, sorry—and it’d still barely make it a tenth as far as an ordinary jet. Given that the best batteries have improved by only threefold in the past three decades, it’s safe to say that batteries won’t power commercial air travel as we know it anytime soon.

Any roasted ducks falling from the sky would be rare in both senses of the word.

Hydrogen isn’t much further along, despite early hydrogen-powered flights occurring nearly 40 years ago. And it’s potentially dangerous—enough that some designs for hydrogen planes have included two separate fuselages: one for fuel and one for people to give them more time to get away if the stuff gets explode-y. The same factors that have kept hydrogen cars off the road will probably keep hydrogen planes out of the sky.

Synthetic and biobased jet fuels are probably the most reasonable proposal. They’ll give us aviation just as we know it today, just at a higher cost—perhaps 20 to 50 percent more expensive per ticket. But fuels produced from food crops can be worse for the environment than the fossil fuels they replace, and fuels produced from CO2 and electricity are even less economical. Plus, all combustion fuels could still contribute to contrail formation, which makes up more than half of aviation’s climate impact.

The big problem with the “sane” approach for decarbonizing aviation is that it doesn’t present us with a vision of the future at all. At the very best, we’ll get a more expensive version of the same air travel experience the world has had since the 1970s.

True, beamed power is far less likely to work. But it’s good to examine crazy stuff like this from time to time. Airplanes themselves were a crazy idea when they were first proposed. If we want to clean up the environment and produce a future that actually looks like a future, we might have to take fliers on some unlikely sounding schemes.

IEEE Spectrum | Willie D. Jones

Tsunenobu Kimoto Leads the Charge in Power Devices

23 June 2024, 20:00


Tsunenobu Kimoto, a professor of electronic science and engineering at Kyoto University, literally wrote the book on silicon carbide technology. Fundamentals of Silicon Carbide Technology, published in 2014, covers properties of SiC materials, processing technology, theory, and analysis of practical devices.

Kimoto, whose silicon carbide research has led to better fabrication techniques, improved the quality of wafers and reduced their defects. His innovations, which made silicon carbide semiconductor devices more efficient and more reliable and thus helped make them commercially viable, have had a significant impact on modern technology.

Tsunenobu Kimoto

Employer: Kyoto University
Title: Professor of electronic science and engineering
Member grade: Fellow
Alma mater: Kyoto University

For his contributions to silicon carbide material and power devices, the IEEE Fellow was honored with this year’s IEEE Andrew S. Grove Award, sponsored by the IEEE Electron Devices Society.

Silicon carbide’s humble beginnings

Decades before a Tesla Model 3 rolled off the assembly line with an SiC inverter, a small cadre of researchers, including Kimoto, foresaw the promise of silicon carbide technology. In obscurity they studied it and refined the techniques for fabricating power transistors with characteristics superior to those of the silicon devices then in mainstream use.

Today MOSFETs and other silicon carbide transistors greatly reduce on-state loss and switching losses in power-conversion systems, such as the inverters in an electric vehicle used to convert the battery’s direct current to the alternating current that drives the motor. Lower switching losses make the vehicles more efficient, reducing the size and weight of their power electronics and improving power-train performance. Silicon carbide–based chargers, which convert alternating current to direct current, provide similar improvements in efficiency.

But those tools didn’t just appear. “We had to first develop basic techniques such as how to dope the material to make n-type and p-type semiconductor crystals,” Kimoto says. N-type crystals’ atomic structures are arranged so that electrons, with their negative charges, move freely through the material’s lattice. Conversely, the atomic arrangement of p-type crystals contains positively charged holes.

Kimoto’s interest in silicon carbide began when he was working on his Ph.D. at Kyoto University in 1990.

“At that time, few people were working on silicon carbide devices,” he says. “And for those who were, the main target for silicon carbide was blue LED.

“There was hardly any interest in silicon carbide power devices, like MOSFETs and Schottky barrier diodes.”

Kimoto began by studying how SiC might be used as the basis of a blue LED. But then he read B. Jayant Baliga’s 1989 paper “Power Semiconductor Device Figure of Merit for High-Frequency Applications” in IEEE Electron Device Letters, and he attended a presentation by Baliga, the 2014 IEEE Medal of Honor recipient, on the topic.

“I was convinced that silicon carbide was very promising for power devices,” Kimoto says. “The problem was that we had no wafers and no substrate material,” without which it was impossible to fabricate the devices commercially.

In order to get silicon carbide power devices, “researchers like myself had to develop basic technology such as how to dope the material to make p-type and n-type crystals,” he says. “There was also the matter of forming high-quality oxides on silicon carbide.” Silicon dioxide is used in a MOSFET to isolate the gate and prevent electrons from flowing into it.

The first challenge Kimoto tackled was producing pure silicon carbide crystals. He decided to start with carborundum, a form of silicon carbide commonly used as an abrasive. Kimoto took some factory waste materials—small crystals of silicon carbide measuring roughly 5 millimeters by 8 mm­—and polished them.

He found he had highly doped n-type crystals. But he realized having only highly doped n-type SiC would be of little use in power applications unless he also could produce lightly doped (high purity) n-type and p-type SiC.

Connecting the two material types creates a depletion region straddling the junction where the n-type and p-type sides meet. In this region, the free, mobile charges are lost because of diffusion and recombination with their opposite charges, and an electric field is established that can be exploited to control the flow of charges across the boundary.

“Silicon carbide is a family with many, many brothers.”

By using an established technique, chemical vapor deposition, Kimoto was able to grow high-purity silicon carbide. The technique grows SiC as a layer on a substrate by introducing gasses into a reaction chamber.

At the time, silicon carbide, gallium nitride, and zinc selenide were all contenders in the race to produce a practical blue LED. Silicon carbide, Kimoto says, had only one advantage: It was relatively easy to make a silicon carbide p-n junction. Creating p-n junctions was still difficult to do with the other two options.

By the early 1990s, it was starting to become clear that SiC wasn’t going to win the blue-LED sweepstakes, however. The inescapable reality of the laws of physics trumped the SiC researchers’ belief that they could somehow overcome the material’s inherent properties. SiC has what is known as an indirect band gap structure, so when charge carriers are injected, the probability of the charges recombining and emitting photons is low, leading to poor efficiency as a light source.

While the blue-LED quest was making headlines, many low-profile advances were being made using SiC for power devices. By 1993, a team led by Kimoto and Hiroyuki Matsunami demonstrated the first 1,100-volt silicon carbide Schottky diodes, which they described in a paper in IEEE Electron Device Letters. The diodes produced by the team and others yielded fast switching that was not possible with silicon diodes.

“With silicon p-n diodes,” Kimoto says, “we need about a half microsecond for switching. But with a silicon carbide, it takes only 10 nanoseconds.”

The ability to switch devices on and off rapidly makes power supplies and inverters more efficient because they waste less energy as heat. Higher efficiency and less heat also permit designs that are smaller and lighter. That’s a big deal for electric vehicles, where less weight means less energy consumption.
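To see why those switching times matter, here is a minimal sketch using the common triangular-overlap approximation for switching loss, roughly E = ½·V·I·t per transition. The bus voltage, load current, and switching frequency are illustrative assumptions, not figures from the article.

```python
# Minimal sketch: how switching time drives switching loss in a converter,
# using the triangular-overlap approximation E ~ 0.5 * V * I * t_switch
# per transition. All operating-point numbers are illustrative.

def switching_loss_w(v_bus, i_load, t_switch, f_sw, transitions_per_cycle=2):
    """Approximate average switching loss in watts."""
    energy_per_transition = 0.5 * v_bus * i_load * t_switch
    return energy_per_transition * transitions_per_cycle * f_sw

V, I, F = 400.0, 100.0, 20e3   # assumed 400 V bus, 100 A load, 20 kHz switching
print("Si  (~0.5 us):", switching_loss_w(V, I, 0.5e-6, F), "W")   # ~400 W
print("SiC (~10 ns): ", switching_loss_w(V, I, 10e-9, F), "W")    # ~8 W
```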

Kimoto’s second breakthrough was identifying which form of the silicon carbide material would be most useful for electronics applications.

“Silicon carbide is a family with many, many brothers,” Kimoto says, noting that more than 100 variants with different silicon-carbon atomic structures exist.

The 6H-type silicon carbide was the default standard phase used by researchers targeting blue LEDs, but Kimoto discovered that the 4H-type has much better properties for power devices, including high electron mobility. Now all silicon carbide power devices and wafer products are made with the 4H-type.

Silicon carbide power devices in electric vehicles can improve energy efficiency by about 10 percent compared with silicon, Kimoto says. In electric trains, he says, the power required to propel the cars can be cut by 30 percent compared with those using silicon-based power devices.

Challenges remain, he acknowledges. Although silicon carbide power transistors are used in Teslas, other EVs, and electric trains, their performance is still far from ideal because of defects present at the silicon dioxide–SiC interface, he says. The interface defects lower the performance and reliability of MOS-based transistors, so Kimoto and others are working to reduce the defects.

A career sparked by semiconductors

Kimoto grew up an only child in Wakayama, Japan, near Osaka. His parents insisted he study medicine, and they expected him to live with them as an adult. His father was a garment factory worker; his mother was a homemaker. His move to Kyoto to study engineering “disappointed them on both counts,” he says.

His interest in engineering was sparked, he recalls, when he was in junior high school, and Japan and the United States were competing for semiconductor industry supremacy.

At Kyoto University, he earned bachelor’s and master’s degrees in electrical engineering, in 1986 and 1988. After graduating, he took a job at Sumitomo Electric Industries’ R&D center in Itami. He worked with silicon-based materials there but wasn’t satisfied with the center’s research opportunities.

He returned to Kyoto University in 1990 to pursue his doctorate. While studying power electronics and high-temperature devices, he also gained an understanding of material defects, breakdown, mobility, and luminescence.

“My experience working at the company was very valuable, but I didn’t want to go back to industry again,” he says. By the time he earned his doctorate in 1996, the university had hired him as a research associate.

He has been there ever since, turning out innovations that have helped make silicon carbide an indispensable part of modern life.

Growing the silicon carbide community at IEEE

Kimoto joined IEEE in the late 1990s. An active volunteer, he has helped grow the worldwide silicon carbide community.

He is an editor of IEEE Transactions on Electron Devices, and he has served on program committees for conferences including the International Symposium on Power Semiconductor Devices and ICs and the IEEE Workshop on Wide Bandgap Power Devices and Applications.

“Now when we hold a silicon carbide conference, more than 1,000 people gather,” he says. “At IEEE conferences like the International Electron Devices Meeting or ISPSD, we always see several well-attended sessions on silicon carbide power devices because more IEEE members pay attention to this field now.”

Reason.com | Ilya Somin

The Supreme Court's Dubious Use of History in Department of State v. Munoz

24 June 2024, 17:00
Justice Amy Coney Barrett. (Eric Lee/Pool via CNP/Polaris/Newscom)

In its important recent immigration decision in Department of State v. Munoz, the Supreme Court ruled there are virtually no constitutional limits on the federal government's power to bar non-citizen spouses of American citizens from entering the country. In the process, Justice Amy Coney Barrett's majority opinion (written on behalf of herself and the five other conservative justices) commits serious errors in historical analysis, and violates Justice Barrett's own well-taken strictures about the appropriate use of history in constitutional analysis.

Sandra Munoz is a US citizen whose husband, Luis Asencio-Cordero (a citizen of El Salvador), was barred from entering the US to come live with her, because US consular officials claimed he had ties to the MS-13 criminal drug gang (a connection Asencio-Cordero denies). Munoz filed suit, claiming that, given that the constitutional right to marriage was implicated, the State Department was at the very least required to reveal the evidence that supposedly proved her husband's connection to the gang.

In arguing that there is no originalist or historical justification for US citizens to claim a right to entry for their non-citizen spouses, Justice Barrett cites historical evidence from the 1790s:

From the beginning, the admission of noncitizens into the country was characterized as "of favor [and] not of right." J. Madison, Report of 1800 (Jan. 7, 1800)….  (emphasis added); see also 2 Records of the Federal Convention of 1787, p. 238 (M. Farrand ed. 1911) (recounting Gouverneur Morris's observation that "every Society from a great nation down to a club ha[s] the right of declaring the conditions on which new members should be admitted"); Debate on Virginia Resolutions, in The Virginia Report of 1799–1800, p. 31 (1850) ("[B]y the law of nations, it is left in the power of all states to take such measures about the admission of strangers as they think convenient"). Consistent with this view, the 1798 Act Concerning Aliens gave the President complete discretion to remove "all such aliens as he shall judge dangerous to the peace and safety of the United States." 1 Stat. 571 (emphasis deleted). The Act made no exception for spouses—or, for that matter, other family members.

Almost everything in this passage is either false or misleading. The quote from James Madison's Report of 1800, does not, in fact, indicate that Madison believed the federal government has blanket authority to exclude immigrants for whatever reason it wants. Far from it. Madison was arguing that the Alien Friends Act of 1798 (part of the notorious Alien and Sedition Acts) was unconstitutional because the federal government lacks such power. Here is the passage where the quote occurs:

One argument offered in justification of this power exercised over aliens, is, that the admission of them into the country being of favor not of right, the favor is at all times revokable.

To this argument it might be answered, that allowing the truth of the inference, it would be no proof of what is required. A question would still occur, whether the constitution had vested the discretionary power of admitting aliens in the federal government or in the state governments.

Note that Madison does not even admit that admission of immigrants is "a favor."  He just assumes it is for the sake of argument, then goes on to argue that the Alien Act is unconstitutional regardless, because the relevant power isn't given to the federal government (this is what he argues in the rest of the Alien Act section of his Report). The 1798 Act Concerning Aliens, also quoted by Justice Barrett, is the very same Alien Friends Act denounced as unconstitutional by Madison, Thomas Jefferson, and many others. Opposition to the Act was so widespread that no one was ever actually deported under it, before Thomas Jefferson allowed it to expire upon becoming president in 1801.

I think Jefferson and Madison were right to argue the Alien Friends Act was unconstitutional. But, at the very least, legislation whose constitutionality was so widely questioned at the time cannot be relied on as strong evidence of the original scope of federal power in this area.

The quote by Gouverneur Morris at the Constitutional Convention is not about immigration restrictions at all. It is part of a speech defending his proposal that people must be required to have been citizens for at least fourteen years before being eligible to become US senators. The proposal was rejected by the Convention (which eventually decided on a nine-year requirement). It was denounced by several other prominent members of the Convention, including James Madison and Benjamin Franklin. Madison argued it was "unnecessary, and improper" and would "give a tincture of illiberality to the Constitution" (see Records of the Federal Convention of 1787, Vol. 2, pp. 235-37 (Max Farrand, ed., 1911)).

Morris's speech in favor of this failed proposal is not a reliable guide to the sentiments of the Convention. Still less is it indicative of the original meaning understood by the general public at the time of ratification (which is the relevant criterion for most originalists, including Justice Barrett, who has said the original meaning of a constitutional provision is "the meaning that it had at the time people ratified it").

Finally, the Debate on the Virginia Resolutions in the Virginia Report of 1799-1800, also quoted by Justice Barrett, was a record of debates in the Virginia state legislature over the Virginia Resolution (drafted by Madison) a statement asserting that the Alien Friends Act is unconstitutional. The passage Barrett quotes is from a speech by a dissenting member of the Virginia state legislature opposing the Resolution. The majority, however, sided with Madison.

Given this history, the debate over the Resolution cannot be relied on to justify virtually unlimited federal power over immigration by spouses of citizens, or any other migrants. And because Madison and the majority in the state legislature argued that the entire Alien Friends Act was unconstitutional, they understandably did not bother to argue that there was a separate issue regarding exclusion of non-citizen spouses of citizens. To my knowledge, no such case involving spouses came up during the short time the Act was in force.

Justice Barrett also relies on dubious 19th century history:

The United States had relatively open borders until the late 19th century. But once Congress began to restrict immigration, "it enacted a complicated web of regulations that erected serious impediments to a person's ability to bring a spouse into the United States." Din, 576 U. S., at 96 (plurality opinion). One of the first federal immigration statutes, the Immigration Act of 1882, required executive officials to "examine" noncitizens and deny "permi[ssion] to land" to "any convict, lunatic, idiot, or any person unable to take care of himself or herself without becoming a public charge." 22 Stat. 214. The Act provided no exception for citizens' spouses. And when Congress drafted a successor statute that expanded the grounds of inadmissibility, it again gave no special treatment to the marital relationship….

This legislation was enacted almost a century after the Founding. So its relevance to original meaning is highly questionable, at best. Moreover, it was adopted in an era of widespread nativist and racist hostility to Chinese immigration, at a time when the Supreme Court also upheld a wide range of domestic racially discriminatory legislation. The Immigration Act of 1882 was enacted by the same Congress and in the same year as the deeply racist Chinese Exclusion Act. The latter legislation was upheld by the Supreme Court in a terrible 1889 decision that completely ignored the arguments Madison and other Founders had raised against a broad federal power over immigration. The immigration policies and legal decisions of this era were part and parcel of the same mentality that also led to Plessy v. Ferguson.

In her recent concurring opinion in United States v. Rahimi, an important Second Amendment case, Justice Barrett warned about careless reliance on post-ratification history in constitutional interpretation:

[F]or an originalist, the history that matters most is the history surrounding the ratification of the text; that backdrop illuminates the meaning of the enacted law. History (or tradition) that long postdates ratification does not serve that function. To be sure, postenactment history can be an important tool. For example, it can "reinforce our understanding of the Constitution's original meaning"; "liquidate ambiguous constitutional provisions"; provide persuasive evidence of the original meaning; and, if stare decisis applies, control the outcome…. But generally speaking, the use of postenactment history requires some justification other than originalism simpliciter….

As I have explained elsewhere, evidence of "tradition" unmoored from original meaning is not binding law… And scattered cases or regulations pulled from history may have little bearing on the meaning of the text.

Here, Barrett relies heavily on "evidence of 'tradition' unmoored from original meaning" and "scattered… regulations" enacted more than a century after ratification. In fairness, the nineteenth century laws in question were enacted closer in time to the ratification of the Fourteenth Amendment in 1868, which is where the Supreme Court has said the right to marry arises from (albeit, when it comes to the federal government, the right is read back into the Fifth Amendment).  But the 1880s was still a long time after ratification. Moreover, the laws in question were enacted at a time when racial and ethnic bigotry undermined enforcement of much of the original meaning of the Fourteenth Amendment, and such bigotry heavily influenced immigration legislation and jurisprudence.

Barrett also relies on the history in part because the Supreme Court's test for whether the Due Process Clauses of the Fifth and Fourteenth Amendments protect an unenumerated right (like the right to marry) requires the right to be "deeply rooted in this Nation's history and tradition." But a combination of badly misinterpreted 1790s history and 19th century history heavily tinged by racial and ethnic bigotry is a poor basis for applying that test.

As Justice Barrett recognizes later in her opinion, Congress did eventually enact legislation giving spouses of US citizens a presumptive right to enter the United States, though there are exceptions, such as the one for "unlawful activities" at issue in this case. That suggests there may in fact be a historically rooted right to spousal migration, even if not an absolute one (most other constitutional rights aren't completely absolute, either).

Overall, I think Amy Coney Barrett has been a pretty good justice since her controversial appointment just before the 2020 election. But Munoz is far from her finest hour.

The Court's badly flawed handling of history doesn't necessarily mean the bottom-line decision was wrong. Even the dissenting liberal justices agreed the government was justified in denying Asencio-Cordero a visa, reasoning that his possible ties to MS-13 were a sufficient justification to outweigh the right to marry in an immigration case (and, as Justice Gorsuch notes in a concurring opinion, the government did eventually reveal the evidence in question). Alternatively, one can argue that the right to marry doesn't necessarily include a broad right to have your spouse present in the same jurisdiction. There may be other possible justifications for the outcome, as well.

But the Supreme Court should not have relied on a badly flawed interpretation of post-enactment history to justify a sweeping power to run roughshod over marriage rights in immigration cases, even in situations where the right to marry might otherwise impose a constraint. That's especially true given that similar reasoning could potentially be applied to other constitutional rights. If the Alien Friends Act of 1798 and 1880s immigration legislation qualify as relevant evidence, they could be used to justify almost any immigration restriction.

Obviously, Munoz is far from the first Supreme Court decision in which the justices effectively exempted immigration restrictions from constitutional constraints that apply to other federal laws. Trump v. Hawaii, the 2018 travel ban decision, is another recent example, and there are other such cases going back to the 19th century. But Munoz is still notable for its particularly slipshod historical analysis.

The post The Supreme Court's Dubious Use of History in Department of State v. Munoz appeared first on Reason.com.

  • ✇Semiconductor Engineering
  • Efficient TNN Inference on RISC-V Processing Cores With Minimal HW Overhead – Technical Paper Link

Efficient TNN Inference on RISC-V Processing Cores With Minimal HW Overhead

11 June 2024, 02:28

A new technical paper titled “xTern: Energy-Efficient Ternary Neural Network Inference on RISC-V-Based Edge Systems” was published by researchers at ETH Zurich and Universita di Bologna.

Abstract
“Ternary neural networks (TNNs) offer a superior accuracy-energy trade-off compared to binary neural networks. However, until now, they have required specialized accelerators to realize their efficiency potential, which has hindered widespread adoption. To address this, we present xTern, a lightweight extension of the RISC-V instruction set architecture (ISA) targeted at accelerating TNN inference on general-purpose cores. To complement the ISA extension, we developed a set of optimized kernels leveraging xTern, achieving 67% higher throughput than their 2-bit equivalents. Power consumption is only marginally increased by 5.2%, resulting in an energy efficiency improvement by 57.1%. We demonstrate that the proposed xTern extension, integrated into an octa-core compute cluster, incurs a minimal silicon area overhead of 0.9% with no impact on timing. In end-to-end benchmarks, we demonstrate that xTern enables the deployment of TNNs achieving up to 1.6 percentage points higher CIFAR-10 classification accuracy than 2-bit networks at equal inference latency. Our results show that xTern enables RISC-V-based ultra-low-power edge AI platforms to benefit from the efficiency potential of TNNs.”

Find the technical paper here. Published May 2024.

Rutishauser, Georg, Joan Mihali, Moritz Scherer, and Luca Benini. “xTern: Energy-Efficient Ternary Neural Network Inference on RISC-V-Based Edge Systems.” arXiv preprint arXiv:2405.19065 (2024).
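To make the idea concrete, here is a minimal, illustrative Python/NumPy sketch of ternary quantization and 2-bit packing, the general technique behind TNN inference. It is not the xTern ISA extension or the authors' kernels; the threshold, bit encoding, and function names are assumptions chosen only for illustration.

```python
import numpy as np

def ternarize(w, threshold=0.05):
    """Quantize float weights to {-1, 0, +1} with a simple magnitude threshold.
    (Illustrative only; real TNN training uses learned thresholds and scales.)"""
    q = np.zeros_like(w, dtype=np.int8)
    q[w > threshold] = 1
    q[w < -threshold] = -1
    return q

def pack_2bit(q):
    """Pack ternary values two bits each, four values per byte.
    Encoding here: -1 -> 0b10, 0 -> 0b00, +1 -> 0b01 (an arbitrary choice)."""
    codes = np.where(q == -1, 0b10, np.where(q == 1, 0b01, 0b00)).astype(np.uint8)
    codes = np.pad(codes, (0, -len(codes) % 4))   # pad to a multiple of four values
    codes = codes.reshape(-1, 4)
    shifts = np.array([0, 2, 4, 6], dtype=np.uint8)
    return np.bitwise_or.reduce(codes << shifts, axis=1)

w = np.random.randn(16).astype(np.float32)
q = ternarize(w)                    # ternary weights in {-1, 0, +1}
packed = pack_2bit(q)               # 16 ternary weights stored in 4 bytes
y = int(np.dot(q, np.arange(16)))   # ternary dot products reduce to adds and subtracts
print(q, packed, y)
```

Dedicated instructions such as those the paper proposes aim to perform this kind of unpack-and-accumulate work directly in hardware rather than in scalar code.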

The post Efficient TNN Inference on RISC-V Processing Cores With Minimal HW Overhead appeared first on Semiconductor Engineering.

  • ✇Semiconductor Engineering
  • Chip Industry Week In Review – The SE Staff

Chip Industry Week In Review

7 June 2024, 09:01

Rapidus and IBM are jointly developing mass production capabilities for chiplet-based advanced packages. The collaboration builds on an existing agreement to develop 2nm process technology.

Vanguard and NXP will jointly establish VisionPower Semiconductor Manufacturing Company (VSMC) in Singapore to build a $7.8 billion, 12-inch wafer plant. This is part of a global supply chain shift “Out of China, Out of Taiwan,” according to TrendForce.

Alphawave joined forces with Arm to develop an advanced chiplet based on Arm’s Neoverse Compute Subsystems for AI/ML. The chiplet contains the Neoverse N3 CPU core cluster and Arm Coherent Mesh Network, and will be targeted at HPC in data centers, AI/ML applications, and 5G/6G infrastructure.

ElevATE Semiconductor and GlobalFoundries will partner for high-voltage chips to be produced at GF’s facility in Essex Junction, Vermont, which GF bought from IBM. The chips are essential for semiconductor testing equipment, aerospace, and defense systems.

NVIDIA, OpenAI, and Microsoft are under investigation by the U.S. Federal Trade Commission and Justice Department for violation of antitrust laws in the generative AI industry, according to the New York Times.

Quick links to more news:

Global
Markets and Money
In-Depth
Security
Education and Training
Product News
Research
Events and Further Reading


Global

Apollo Global Management will invest $11 billion in Intel’s Fab 34 in Ireland, thereby acquiring a 49% stake in Intel’s Irish manufacturing operations.

imec and ASML opened their jointly run High-NA EUV Lithography Lab in Veldhoven, the Netherlands. The lab will be used to prepare  the next-generation litho for high-volume manufacturing, expected to begin in 2025 or 2026.

Expedera opened a new semiconductor IP design center in India. The location, the sixth of its kind for the company, is aimed at helping to make up for a shortfall in trained technicians, researchers, and engineers in the semiconductor sector.

Foxconn will build an advanced computing center in Taiwan with NVIDIA’s Blackwell platform at its core. The site will feature GB200 servers in 64 racks with a total of 4,608 GPUs, and will be completed by 2026.

Intel and its 14 partner companies in Japan will use Sharp‘s LCD plants to research semiconductor production technology, a cost reduction move that should also produce income for Sharp, according to Nikkei Asia.

Japan is considering legislation to support the commercial production of advanced semiconductors, per Reuters.

Saudi Arabia aims to establish at least 50 semiconductor design companies as part of a new National Semiconductor Hub, funded with over $266 million.

Air Liquide is opening a new industrial gas production facility in Idaho, which will produce ultra-pure nitrogen and other gases for Micron’s new fab.

Microsoft will invest 33.7 billion Swedish crowns ($3.2 billion) to expand its cloud and AI infrastructure in Sweden over a two-year period, reports Bloomberg. The company also will invest $1 billion to establish a new data center in northwest Indiana.

AI data centers could consume as much as 9.1% of the electricity generated in the U.S. by 2030, according to a white paper published by the Electric Power Research Institute. That would more than double the electricity currently consumed by data centers, though EPRI notes this is a worst case scenario and advances in efficiency could be a mitigating factor.


Markets and Money

The Semiconductor Industry Association (SIA) announced global semiconductor sales increased 15.8% year-over-year in April, and the group projected market growth of 16% in 2024. Conversely, global semiconductor equipment billings contracted 2% year-over-year to US$26.4 billion in Q1 2024, while quarter-over-quarter billings dropped 6% during the same period, according to SEMI’s Worldwide Semiconductor Equipment Market Statistics (WWSEMS) Report.

Cadence completed its acquisition of BETA CAE Systems International, a provider of multi-domain, engineering simulation solutions.

Cisco‘s investment arm launched a $1 billion fund to aid AI startups as part of its AI innovation strategy. Nearly $200 million has already been earmarked.

The power and RF GaN markets will grow beyond US$2.45 billion and US$1.9 billion in 2029, respectively, according to Yole, which is offering a webinar on the topic.

The micro LED chip market is predicted to reach $580 million by 2028, driven by head-mounted devices and automotive applications, according to TrendForce. The cost of Micro LED chips may eventually come down due to size miniaturization.


In-Depth

Semiconductor Engineering published its Automotive, Security, and Pervasive Computing newsletter this week, featuring these top stories:

More reporting this week:


Security

Scott Best, Rambus senior director of Silicon Security Products, delivered a keynote at the Hardwear.io conference this week (below), detailing a $60 billion reverse engineering threat for hardware in just three markets — $30 billion for printer consumables, $20 billion for rechargeable batteries with some type of authentication, and $10 billion for medical devices such as sonogram probes.


Photo source: Ed Sperling/Semiconductor Engineering

wolfSSL debuted wolfHSM for automotive hardware security modules, with its cryptographic library ported to run in automotive HSMs like Infineon’s Aurix Tricore TC3XX.

Cisco integrated AMD Pensando data processing units (DPUs) with its Hypershield security architecture for defending AI-scale data centers.

OMNIVISION released an intelligent CMOS image sensor for human presence detection, infrared facial authentication, and always-on technology with a single sensing camera, as well as two new image sensors for industrial and consumer security surveillance cameras.

Digital Catapult announced a new cohort of companies will join Digital Security by Design’s Technology Access Program, gaining access to an Arm Morello prototype evaluation hardware kit based on Capability Hardware Enhanced RISC Instructions (CHERI), to find applications across critical UK sectors.

University of Southampton researchers used formal verification to evaluate the hardware reliability of a RISC-V ibex core in the presence of soft errors.

Several institutions published their students’ master’s and PhD work:

  • Virginia Tech published a dissertation proposing sPACtre, a defense mechanism that aims to prevent Spectre control-flow attacks on existing hardware.
  • Wright State University published a thesis proposing an approach that uses various machine learning models to improve hardware Trojan identification via power signal side-channel analysis.
  • Wright State University published a thesis examining the effect of aging on the reliability of SRAM PUFs used for secure and trusted microelectronics IC applications.
  • Nanyang Technological University published a Final Year Project proposing a novel SAT-based circuit preprocessing attack based on the concept of logic cones to enhance the efficacy of SAT attacks on complex circuits like multipliers.

The Cybersecurity and Infrastructure Security Agency (CISA) issued a number of alerts/advisories.


Education and Training

Renesas and the Indian Institute of Technology Hyderabad (IIT Hyderabad) signed a three-year MoU to collaborate on VLSI and embedded semiconductor systems, with a focus on R&D and academic interactions to advance the “Make in India” strategy.

Charlie Parker, senior machine learning engineer at Tignis, presented a talk on “Why Every Fab Should Be Using AI.”

Penn State and the National Sun Yat-Sen University (NSYSU) in Taiwan partnered to develop educational and research programs focused on semiconductors and photonics.

Rapidus and Hokkaido University partnered on education and research to enhance Japan’s scientific and technological capabilities and develop human resources for the semiconductor industry.

The University of Minnesota named Steve Koester its first “Chief Semiconductor Officer,” and launched a website devoted to semiconductor and microelectronics research and education.

The state of Michigan invested $10 million toward semiconductor workforce development.


Product News

Siemens reported breakthroughs in high-level C++ verification that will be used in conjunction with its Catapult software. Designers will be able to use formal property checking via the Catapult Formal Assert software and reachability coverage analysis through Catapult Formal CoverCheck.

Infineon released several products:

Augmental, an MIT Media Lab spinoff, released a tongue-based computer controller, dubbed the MouthPad.

NVIDIA revealed a new line of products that will form the basis of next-gen AI data centers. Along with partners ASRock Rack, ASUS, GIGABYTE, Ingrasys, and others, NVIDIA’s GPUs and networking tech will power cloud, on-premises, embedded, and edge AI systems. NVIDIA founder and CEO Jensen Huang showed off the company’s upcoming Rubin platform, which will succeed its current Blackwell platform. The new system will feature new GPUs, an Arm-based CPU, and advanced networking with NVLink 6, CX9 SuperNIC, and the X1600 converged InfiniBand/Ethernet switch.

Intel showed off its Xeon 6 processors at Computex 2024. The company also unveiled architectural details for its Lunar Lake client computing processor, which will use 40% less SoC power and includes a new NPU and X2 graphics processing unit cores for gaming.


Research

imec released a roadmap for superconducting digital technology to revolutionize AI/ML.

CEA-Leti reported breakthroughs in three projects it considers key to the next generation of CMOS image sensors. The projects involved embedding AI in the CIS and stacking multiple dies to create 3D architectures.

Researchers from MIT’s Computer Science & Artificial Intelligence Laboratory (MIT-CSAIL) used a type of generative AI, known as diffusion models, to train multi-purpose robots, and designed the Grasping Neural Process for more intelligent robotic grasping.

IBM and Pasqal partnered to develop a common approach to quantum-centric supercomputing and to promote application research in chemistry and materials science.

Stanford University and Q-NEXT researchers investigated diamond to find the source of its temperamental nature when it comes to emitting quantum signals.

TU Wien researchers investigated how AI categorizes images.

In Canada:

  • Simon Fraser University received funding of over $80 million from various sources to upgrade the supercomputing facility at the Cedar National Host Site.
  • The Digital Research Alliance of Canada announced $10.28 million to renew the University of Victoria’s Arbutus cloud infrastructure.
  • The Canadian government invested $18.4 million in quantum research at the University of Waterloo.

Events and Further Reading

Find upcoming chip industry events here, including:

Event Date Location
SNUG Europe: Synopsys User Group Jun 10 – 11 Munich
IEEE RAS in Data Centers Summit: Reliability, Availability and Serviceability Jun 11 – 12 Santa Clara, CA
AI for Semiconductors (MEPTEC) Jun 12 – 13 Online
3D & Systems Summit Jun 12 – 14 Dresden, Germany
PCI-SIG Developers Conference Jun 12 – 13 Santa Clara, CA
Standards for Chiplet Design with 3DIC Packaging (Part 1) Jun 14 Online
AI Hardware and Edge AI Summit: Europe Jun 18 – 19 London, UK
Standards for Chiplet Design with 3DIC Packaging (Part 2) Jun 21 Online
DAC 2024 Jun 23 – 27 San Francisco
RISC-V Summit Europe 2024 Jun 24 – 28 Munich
Leti Innovation Days 2024 Jun 25 – 27 Grenoble, France
Find All Upcoming Events Here

Upcoming webinars are here.


Semiconductor Engineering’s latest newsletters:

Automotive, Security and Pervasive Computing
Systems and Design
Low Power-High Performance
Test, Measurement and Analytics
Manufacturing, Packaging and Materials

 

The post Chip Industry Week In Review appeared first on Semiconductor Engineering.

  • ✇Latest
  • The Collective-Action Constitution in an Era of Polarization and Animosity: An Elegy? – Neil Siegel

The Collective-Action Constitution in an Era of Polarization and Animosity: An Elegy?

7 June 2024, 16:30
The Collective-Action Constitution (Oxford University Press)

The study conducted by The Collective-Action Constitution offers several lessons. First, when states disagree about the existence or severity of collective-action problems, such problems do not simply exist or not in a technical, scientific way. Cost-benefit collective-action problems have an objective structure, but their existence and significance require assessing the extent to which states are externalizing costs that are greater than the benefits they are internalizing, and such assessments may require normative judgments in addition to factfinding.

Second, the assessor that matters most is either the Constitution itself or the governmental institution with the most democratic legitimacy to make such judgment calls. This institution is Congress—the first branch of government—where all states and all Americans are represented, in contrast to individual state governments, where only one state and some Americans are represented. In McCulloch v. Maryland (1819), Chief Justice Marshall explained this key difference between the democratic legitimacy of the states and the people collectively in Congress and the democratic legitimacy of states individually outside it. Congress is also more broadly representative of all states and all people than is the presidency, which does not include both political parties at a given time, or balance interests to anywhere near the same extent that Congress does.

Third, if it is acting within its enumerated powers, Congress need only comply with the voting rules set forth in the Constitution; Congress need not first establish that all or most states agree that a collective-action problem exists and is sufficiently serious to warrant federal regulation. In other words, Congress need not poll states apart from establishing sufficient support in the federal legislative process. Because states are represented in Congress—because congressional majorities represent (albeit imperfectly) the constitutionally relevant views of the states collectively—proceeding otherwise would overrepresent the states, effectively letting them vote twice. One main reason that the Constitution is too difficult to amend, Chapter 10 argues, is that Article V essentially lets states vote twice.

Fourth, Congress's central role in deciding whether and how to solve collective-action problems for the states connects constitutional provisions, principles, and ideas that may otherwise seem to have little to do with one another. These include, for example, the Interstate Compacts Clause (see Chapter 3), the Interstate Commerce Clause (Chapter 5), the congressional approval exception to the dormant commerce principle (Chapter 5), Article III's opening clause, which lets Congress decide whether to create lower federal courts (Chapter 7), Article IV's provisions expressly or implicitly authorizing Congress to legislate (Chapter 8), and democratic-process rights and theory (Chapter 9).

Congress's paramount role in the constitutional scheme raises questions (explored in Part III of the book) about the operation of the Constitution's system of separated and  interrelated powers in contemporary times. It is one thing  to argue that the Constitution was originally intended, and designed, to render the federal government operating through (super)majority rule more likely to solve multistate collective-action problems than the states operating through unanimity rule. It is another thing to show that this is generally true in practice. As George Washington implied in his letter accompanying submission of the U.S. Constitution to the Confederation Congress that begins The Collective-Action Constitution, to protect state autonomy and individual liberty, the Framers created a bicameral legislature and a separation-of-powers system, both of which make it more difficult to legislate than in a unicameral legislature and a parliamentary system. But the Framers did not imagine that the availability of veto threats would come to dominate the policymaking process in situations having nothing to do with perceived encroachments on the presidency or bills that the president thinks unconstitutional. Nor were the Framers responsible for modern subconstitutional "veto gates" in Congress, especially the Senate filibuster, which makes it even harder to legislate. Finally, the Framers did not foresee the polarized, antagonistic nature of contemporary American politics.

These developments have meant that bicameralism and the separation (and interrelation) of powers often do not merely qualify Congress's ability to legislate. The horizontal structure and contemporary politics likely make it too hard for Congress to do so, particularly given the magnitude and geographic scope of the problems facing the nation and the extent to which Americans look mainly to the federal government, not the states, to solve them. The Constitution's greatest defect in modern times is probably that Congress often cannot execute its legislative responsibilities in the constitutional scheme. A result has been power shifts from Congress to the executive branch, the federal courts, and the states. The main workaround for congressional gridlock has been more frequent unilateral action by the executive. Other partial and potentially worrisome workarounds include efforts by federal courts to "update" the meaning of federal statutes and greater exercises of state regulatory authority. Sufficient solutions to the problem of gridlock may not exist any time soon given the practical impossibility of amending the Constitution, the unlikelihood that veto practice will become more restrained, and the long periods required for political realignments to occur. Ending the legislative filibuster in the Senate by majority vote would, however, have the likely salutary (but not cost-free) consequence of changing the typical voting threshold in this chamber from a three-fifths supermajority to majority rule.

Given the difficulty of legislating in the current era, it might be thought that The Collective-Action Constitution offers an elegy—an account of how the U.S. constitutional system was supposed to function or used to function but functions no longer. To readers who regard the book as an elegy during an era of presidential administration, judicial supremacy, and assertive state legislation, I offer the words of Richard Hooker, who long ago deemed his own book an elegy and justified writing it anyway: "Though for no other cause, yet for this; that posterity may know we have not loosely through silence permitted things to pass away as in a dream." (My learned colleague, H. Jefferson Powell, furnished this quote.)

In truth, however, The Collective-Action Constitution does not offer an elegy.  The constitutional structure still has much to commend it relative to relying on the states to act collectively outside Congress. When problems are national or international in scope, the relevant comparison is not between Congress's ability to combat a problem and one state's ability to do so, but between the ability of the political branches to act and the ability of the states to act collectively through unanimous agreement. Collective-action problems would almost certainly be more severe if the federal government were dissolved and states had to unanimously agree to protect the environment; regulate interstate and foreign commerce; build interstate infrastructure; conclude international agreements; contribute revenue to a common treasury and troops to a common military (or coordinate separate militaries); disburse funds held in common; respond to economic downturns; provide a minimum safety net; and handle pandemics, among many other problems. Congress still legislates today, and it does so much more frequently than most (let alone all) states form interstate compacts.

As for the executive branch, Presidents lack the ability to legislate in a formal and legitimate fashion, and so they cannot address the above problems and those with a similar structure to anywhere near the same extent that Congress can. Presidential action is less enduring and far reaching than legislation. Normatively, moreover, executive unilateralism poses risks of democratic deficits and backsliding that congressional power does not.

The federal judiciary's most important job is largely (although not entirely) to get out of the way. The Collective-Action Constitution cautions the U.S. Supreme Court and lower federal courts—both of which can be substantially more assertive than the Founders envisioned in reviewing the constitutionality of federal laws (see Chapter 7)—not to significantly restrict federal power in the years ahead, whether through constitutional-law holdings contracting congressional power or administrative-law decisions diminishing agency power. The nation will continue to face pressing problems that spill across state (or national) borders, so federal action will be needed to address them effectively. In general, the federal government has the authority to act. And given the horizontal structure and the era of partisan polarization and animosity in which Americans will continue to live, there are already major impediments to the ability and willingness of members of Congress to overcome their own collective-action problems and legislate. Especially in modern times, legal doctrine should facilitate, not impede, realization of the Constitution's main structural purpose—its commitment to collective action.

The post The Collective-Action Constitution in an Era of Polarization and Animosity: An Elegy? appeared first on Reason.com.

  • ✇Ars Technica - All content
  • DARPA’s planned nuclear rocket would use enough fuel to build a bomb – Jacek Krywko

DARPA’s planned nuclear rocket would use enough fuel to build a bomb

10 June 2024, 20:56
A lump of rock next to the periodic table entry for uranium, against a black background. (Credit: OLE-CNX)

High-assay low-enriched uranium (HALEU) has been touted as the go-to fuel for powering next-gen nuclear reactors, which include the sodium-cooled TerraPower or the space-borne system powering Demonstration Rocket for Agile Cislunar Operations (DRACO). That’s because it was supposed to offer higher efficiency while keeping uranium enrichment “well below the threshold needed for weapons-grade material,” according to the US Department of Energy.

This justified huge government investments in HALEU production in the US and UK, as well as relaxed security requirements for facilities using it as fuel. But now, a team of scientists has published an article in Science that argues that you can make a nuclear bomb using HALEU.

“I looked it up and DRACO space reactor will use around 300 kg of HALEU. This is marginal, but I would say you could make a weapon with that much,” says Edwin Lyman, the director of Nuclear Power Safety at the Union of Concerned Scientists and co-author of the paper.


  • ✇Slashdot
  • Battery-Powered California Faces Lower Blackout Risk This Summer – BeauHD

Battery-Powered California Faces Lower Blackout Risk This Summer

By: BeauHD
1 June 2024, 02:02
An anonymous reader quotes a report from Bloomberg: California expects to avoid rolling blackouts this summer as new solar plants and large batteries plug into the state's grid at a rapid clip. The state's electricity system has been strained by years of drought, wildfires that knock out transmission lines and record-setting heat waves. But officials forecast Wednesday new resources added to the grid in the last four years would give California ample supplies for typical summer weather.

Since 2020, California has added 18.5 gigawatts of new resources. Of that, 6.6 gigawatts were batteries, 6.3 gigawatts were solar and 1.4 gigawatts were a combination of solar and storage. One gigawatt can power about 750,000 homes. In addition, the state's hydropower plants will be a reliable source of electricity after two wet winters in a row ended California's most recent drought.

Those supplies would hold even if California experiences another heat wave as severe as the one that triggered rolling blackouts across the state in August 2020, officials said in a briefing Wednesday. In the most dire circumstances, the state now has backup resources that can supply an extra 5 gigawatts of electricity, including gas-fired power plants that only run during emergencies.


  • ✇Semiconductor Engineering
  • How Quickly Can You Take Your Idea To Chip Design? – Kira Jones

How Quickly Can You Take Your Idea To Chip Design?

16 May 2024, 09:04

Gone are the days of expensive tapeouts done only by commercial companies. Thanks to Tiny Tapeout, students, hobbyists, and others can create a simple ASIC or PCB design and actually send it to a foundry for a small fraction of the usual cost. Learners from all walks of life can use the resources to learn how to design a chip, without signing an NDA or installing licenses, faster than ever before. Whether you’re a digital, analog, or mixed-signal designer, there’s support for you.

We’re excited to support our academic network in participating in this initiative to gain more hands-on experience that will prepare them for a career in the semiconductor industry. We have professors incorporating it into the classroom, giving students the exciting opportunity to take their ideas from concept to reality.

“It gives people this joy when we design the chip in class. The 50 students that took the class last year, they designed a chip and Google funded it, and every time they got their design on the chip, their eyes got really big. I love being able to help students do that, and I want to do that all over the country,” said Matt Morrison, associate teaching professor in computer science and engineering, University of Notre Dame.

We also advise and encourage extracurricular design teams to challenge themselves outside the classroom. We partner with multiple design teams focused on creating an environment for students to explore the design flow process from RTL-to-GDS, and Tiny Tapeout provides an avenue for them.

“Just last year, SiliconJackets was founded by Zachary Ellis and me as a Georgia Tech club that takes ideas to SoC tapeout. Today, I am excited to share that we submitted the club’s first-ever design to Tiny Tapeout 6. This would not have been possible without the help from our advisors, and industry partners at Apple and Cadence,” said Nealson Li, SiliconJackets vice president and co-founder.

Tiny Tapeout also creates a culture of knowledge sharing, allowing participants to share their designs online, collaborate with one another, and build off an existing design. This creates a unique opportunity to learn from others’ experiences, enabling faster learning and more exposure.

“One of my favorite things about this project is that you’re not only going to get your design, but everybody else’s as well. You’ll be able to look through the chips’ data sheet and try out someone else’s design. In our previous runs, we’ve seen some really interesting designs, including RISC-V CPUs, FPGAs, ring oscillators, synthesizers, USB devices, and loads more,” said Matt Venn, science & technology communicator and electronic engineer.

Tiny Tapeout is on its seventh run, launched on April 22, 2024, and will remain open until June 1, 2024, or until all the slots fill up! With each run, more unique designs are created, more knowledge is shared, and more of the future workforce is developed. Check out the designs that were just submitted for Tiny Tapeout 6.

What can you expect when you participate?

  • Access to training materials
  • Ability to create your own design using one of the templates
  • Support from the FAQs or Tiny Tapeout community

Interested in learning more? Check out their webpage. Want to use Cadence tools for your design? Check out our University Program and what tools students can access for free. We can’t wait to see what you come up with and how it’ll help you launch a career in the electronics industry!

The post How Quickly Can You Take Your Idea To Chip Design? appeared first on Semiconductor Engineering.

  • ✇Slashdot
  • Could Atomically Thin Layers Bring A 19x Energy Jump In Battery Capacitors? – EditorDavid

Could Atomically Thin Layers Bring A 19x Energy Jump In Battery Capacitors?

12 May 2024, 20:34
Researchers believe they've discovered a new material structure that can improve the energy storage of capacitors. The structure allows for storage while improving the efficiency of ultrafast charging and discharging. The new find needs optimization but has the potential to help power electric vehicles.

An anonymous reader shared this report from Popular Mechanics: In a study published in Science, lead author Sang-Hoon Bae, an assistant professor of mechanical engineering and materials science, demonstrates a novel heterostructure that curbs energy loss, enabling capacitors to store more energy and charge rapidly without sacrificing durability... Within capacitors, ferroelectric materials offer high maximum polarization. That's useful for ultra-fast charging and discharging, but it can limit the effectiveness of energy storage or the "relaxation time" of a conductor. "This precise control over relaxation time holds promise for a wide array of applications and has the potential to accelerate the development of highly efficient energy storage systems," the study authors write. Bae makes the change — one he unearthed while working on something completely different — by sandwiching 2D and 3D materials in atomically thin layers, using chemical and nonchemical bonds between each layer. He says a thin 3D core inserts between two outer 2D layers to produce a stack that's only 30 nanometers thick, about 1/10th that of an average virus particle... The sandwich structure isn't quite fully conductive or nonconductive. This semiconducting material, then, allows the energy storage, with a density up to 19 times higher than commercially available ferroelectric capacitors, while still achieving 90 percent efficiency — also better than what's currently available.

Thanks to long-time Slashdot reader schwit1 for sharing the article.
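As background context (standard dielectric physics, not taken from the paper itself), the energy a capacitor stores per unit volume depends on how strongly the dielectric polarizes under an applied field, which is what the "up to 19 times higher" density figure refers to:

```latex
% Energy density stored in a dielectric while charging (general case), and its
% simplification for a linear dielectric with relative permittivity \varepsilon_r.
u = \int_{0}^{D_{\max}} E \, \mathrm{d}D
\qquad \text{(linear dielectric: } u = \tfrac{1}{2}\,\varepsilon_0 \varepsilon_r E^{2}\text{)}
```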


  • ✇Slashdot
  • Are Small Modular Nuclear Reactors Costly and Unviable? – EditorDavid

Are Small Modular Nuclear Reactors Costly and Unviable?

12 May 2024, 13:34
The Royal Institution of Australia is a national non-profit hub for science communication, publishing the science magazine Cosmos four times a year. This month they argued that small modular nuclear reactors "don't add up as a viable energy source."

Proponents assert that SMRs would cost less to build and thus be more affordable. However, when evaluated on the basis of cost per unit of power capacity, SMRs will actually be more expensive than large reactors. This 'diseconomy of scale' was demonstrated by the now-terminated proposal to build six NuScale Power SMRs (77 megawatts each) in Idaho in the United States. The final cost estimate of the project per megawatt was around 250 percent more than the initial per megawatt cost for the 2,200-megawatt Vogtle nuclear power plant being built in Georgia, US. Previous small reactors built in various parts of America also shut down because they were uneconomical. The cost was four to six times the cost of the same electricity from wind and solar photovoltaic plants, according to estimates from the Australian Commonwealth Scientific and Industrial Research Organisation and the Australian Energy Market Operator.

"The money invested in nuclear energy would save far more carbon dioxide if it were instead invested in renewables," the article argues: Small reactors also raise all of the usual concerns associated with nuclear power, including the risk of severe accidents, the linkage to nuclear weapons proliferation, and the production of radioactive waste that has no demonstrated solution because of technical and social challenges. One 2022 study calculated that various radioactive waste streams from SMRs would be larger than the corresponding waste streams from existing light water reactors...

Nuclear energy itself has been declining in importance as a source of power: the fraction of the world's electricity supplied by nuclear reactors has declined from a maximum of 17.5 percent in 1996 down to 9.2 percent in 2022. All indications suggest that the trend will continue if not accelerate. The decline in the global share of nuclear power is driven by poor economics: generating power with nuclear reactors is costly compared to other low-carbon, renewable sources of energy and the difference between these costs is widening.

Thanks to Slashdot reader ZipNada for sharing the article.


  • ✇Slashdot
  • 'Tungsten Wall' Leads To Nuclear Fusion Breakthrough – BeauHD

'Tungsten Wall' Leads To Nuclear Fusion Breakthrough

By: BeauHD
11 May 2024, 09:00
A tokamak in France achieved a new record in fusion plasma by using tungsten to encase its reaction, which enabled the sustainment of hotter and denser plasma for longer periods than previous carbon-based designs. Quartz reports:

A tokamak is a torus- (doughnut-) shaped fusion device that confines plasma using magnetic fields, allowing scientists to fiddle with the superheated material and induce fusion reactions. The recent achievement was made in WEST (tungsten (W) Environment in Steady-state Tokamak), a tokamak operated by the French Alternative Energies and Atomic Energy Commission (CEA). WEST was injected with 1.15 gigajoules of energy and sustained a plasma of about 50 million degrees Celsius for six minutes. It achieved this record after scientists encased the tokamak's interior in tungsten, a metal with an extraordinarily high melting point.

Researchers from Princeton Plasma Physics Laboratory used an X-ray detector inside the tokamak to measure aspects of the plasma and the conditions that made it possible. "These are beautiful results," said Xavier Litaudon, a scientist with CEA and chair of the Coordination on International Challenges on Long duration OPeration (CICLOP), in a PPPL release. "We have reached a stationary regime despite being in a challenging environment due to this tungsten wall."
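As a rough check on those numbers (my own arithmetic, not part of the Quartz report), dividing the injected energy by the pulse duration gives the average heating power:

```python
# Average injected heating power for the WEST record pulse, using only the
# figures quoted above (1.15 gigajoules delivered over six minutes).
injected_energy_j = 1.15e9       # joules
pulse_duration_s = 6 * 60        # six minutes, in seconds

average_power_w = injected_energy_j / pulse_duration_s
print(f"Average injected power: {average_power_w / 1e6:.1f} MW")  # ~3.2 MW
```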


  • ✇IEEE Spectrum
  • A Skeptic’s Take on Beaming Power to Earth from Space – Henri Barde

A Skeptic’s Take on Beaming Power to Earth from Space

9 May 2024, 17:00


The accelerating buildout of solar farms on Earth is already hitting speed bumps, including public pushback against the large tracts of land required and a ballooning backlog of requests for new transmission lines and grid connections. Energy experts have been warning that electricity is likely to get more expensive and less reliable unless renewable power that waxes and wanes under inconstant sunlight and wind is backed up by generators that can run whenever needed. To space enthusiasts, that raises an obvious question: Why not stick solar power plants where the sun always shines?

Space-based solar power is an idea so beautiful, so tantalizing that some argue it is a wish worth fulfilling. A constellation of gigantic satellites in geosynchronous orbit (GEO) nearly 36,000 kilometers above the equator could collect sunlight unfiltered by atmosphere and uninterrupted by night (except for up to 70 minutes a day around the spring and fall equinoxes). Each megasat could then convert gigawatts of power into a microwave beam aimed precisely at a big field of receiving antennas on Earth. These rectennas would then convert the signal to usable DC electricity.

The thousands of rocket launches needed to loft and maintain these space power stations would dump lots of soot, carbon dioxide, and other pollutants into the stratosphere, with uncertain climate impacts. But that might be mitigated, in theory, if space solar displaced fossil fuels and helped the world transition to clean electricity.

The glamorous vision has inspired numerous futuristic proposals. Japan’s space agency has presented a road map to deployment. Space authorities in China aim to put a small test satellite in low Earth orbit (LEO) later this decade. Ideas to put megawatt-scale systems in GEO sometime in the 2030s have been floated but not yet funded.

The U.S. Naval Research Laboratory has already beamed more than a kilowatt of power between two ground antennas about a kilometer apart. It also launched in 2023 a satellite that used a laser to transmit about 1.5 watts, although the beam traveled less than 2 meters and the system had just 11 percent efficiency. A team at Caltech earlier this year wrapped up a mission that used a small satellite in LEO to test thin-film solar cells, flexible microwave-power circuitry, and a small collapsible deployment mechanism. The energy sent Earthward by the craft was too meager to power a lightbulb, but it was progress nonetheless.

The European Space Agency (ESA) debuted in 2022 its space-based solar-power program, called Solaris, with an inspiring (but entirely fantastical) video animation. The program’s director, Sanjay Vijendran, told IEEE Spectrum that the goal of the effort is not to develop a power station for space. Instead, the program aims to spend three years and €60 million (US $65 million) to figure out whether solar cells, DC-to-RF converters, assembly robots, beam-steering antennas, and other must-have technologies will improve drastically enough over the next 10 to 20 years to make orbital solar power feasible and competitive. Low-cost, low-mass, and space-hardy versions of these technologies would be required, but engineers trying to draw up detailed plans for such satellites today find no parts that meet the tough requirements.

A chart showing efficiency of research and commercial solar cells. Not so fast: The real-world efficiency of commercial, space-qualified solar cells has progressed much more slowly than records set in highly controlled research experiments, which often use exotic materials or complex designs that cannot currently be mass-produced. Points plotted here show the highest efficiency reported in five-year intervals.HENRI BARDE; DATA FROM NATIONAL RENEWABLE ENERGY LABORATORY (RESEARCH CELLS) AND FROM MANUFACTURER DATA SHEETS AND PRESENTATIONS (COMMERCIAL CELLS)

With the flurry of renewed attention, you might wonder: Has extraterrestrial solar power finally found its moment? As the recently retired head of space power systems at ESA—with more than 30 years of experience working on power generation, energy storage, and electrical systems design for dozens of missions, including evaluation of a power-beaming experiment proposed for the International Space Station—I think the answer is almost certainly no.

Despite mounting buzz around the concept, I and many of my former colleagues at ESA are deeply skeptical that these large and complex power systems could be deployed quickly enough and widely enough to make a meaningful contribution to the global energy transition. Among the many challenges on the long and formidable list of technical and societal obstacles: antennas so big that we cannot even simulate their behavior.

Here I offer a road map of the potential chasms and dead ends that could doom a premature space solar project to failure. Such a misadventure would undermine the credibility of the responsible space agency and waste capital that could be better spent improving less risky ways to shore up renewable energy, such as batteries, hydrogen, and grid improvements. Champions of space solar power could look at this road map as a wish list that must be fulfilled before orbital solar power can become truly appealing to electrical utilities.

Space Solar Power at Peak Hype—Again

For decades, enthusiasm for the possibility of drawing limitless, mostly clean power from the one fusion reactor we know works reliably—the sun—has run hot and cold. A 1974 study that NASA commissioned from the consultancy Arthur D. Little bullishly recommended a 20-year federal R&D program, expected to lead to a commercial station launching in the mid-1990s. After five years of work, the agency delivered a reference architecture for up to 60 orbiting power stations, each delivering 5 to 10 gigawatts of baseload power to major cities. But officials gave up on the idea when they realized that it would cost over $1 trillion (adjusted for inflation) and require hundreds of astronauts working in space for decades, all before the first kilowatt could be sold.

NASA did not seriously reconsider space solar until 1995, when it ordered a “fresh look” at the possibility. That two-year study generated enough interest that the U.S. Congress funded a small R&D program, which published plans to put up a megawatt-scale orbiter in the early 2010s and a full-size power plant in the early 2020s. Funding was cut off a few years later, with no satellites developed.

An illustration of scale between buildings on earth and the satellites.  Because of the physics of power transmission from geosynchronous orbit, space power satellites must be enormous—hundreds of times larger than the International Space Station and even dwarfing the tallest skyscrapers—to generate electricity at a competitive price. The challenges for their engineering and assembly are equally gargantuan. Chris Philpot

Then, a decade ago, private-sector startups generated another flurry of media attention. One, Solaren, even signed a power-purchase agreement to deliver 200 megawatts to utility customers in California by 2016 and made bold predictions that space solar plants would enter mass production in the 2020s. But the contract and promises went unfulfilled.

The repeated hype cycles have ended the same way each time, with investors and governments balking at the huge investments that must be risked to build a system that cannot be guaranteed to work. Indeed, in what could presage the end of the current hype cycle, Solaris managers have had trouble drumming up interest among ESA’s 22 member states. So far only the United Kingdom has participated, and just 5 percent of the funds available have been committed to actual research work.

Even space-solar advocates have recognized that success clearly hinges on something that cannot be engineered: sustained political will to invest, and keep investing, in a multidecade R&D program that ultimately could yield machines that can’t put electricity on the grid. In that respect, beamed power from space is like nuclear fusion, except at least 25 years behind.

In the 1990s, the fusion community succeeded in tapping into national defense budgets and cobbled together the 35-nation, $25 billion megaproject ITER, which launched in 2006. The effort set records for delays and cost overruns, and yet a prototype is still years from completion. Nevertheless, dozens of startups are now testing new fusion-reactor concepts. Massive investments in space solar would likely proceed in the same way. Of course, if fusion succeeds, it would eclipse the rationale for solar-energy satellites.

Space Industry Experts Run the Numbers

The U.S. and European space agencies have recently released detailed technical analyses of several space-based solar-power proposals. [See diagrams.] These reports make for sobering reading.

SPS-ALPHA Mark-III



Proposed by: John C. Mankins, former NASA physicist

Features: Thin-film reflectors (conical array) track the sun and concentrate sunlight onto an Earth-facing energy-conversion array that has photovoltaic (PV) panels on one side, microwave antennas on the other, and power distribution and control electronics in the middle. Peripheral modules adjust the station’s orbit and orientation.

MR-SPS



Proposed by: China Academy of Space Technology

Features: Fifty PV solar arrays, each 200 meters wide and 600 meters long, track the sun and send power through rotating high-power joints and perpendicular trusses to a central microwave-conversion and transmission array that points 128,000 antenna modules at the receiving station on Earth.

CASSIOPeiA



Proposed by: Ian Cash, chief architect, Space Solar Group Holdings

Features: Circular thin-film reflectors track the sun and bounce light onto a helical array that includes myriad small PV cells covered by Fresnel-lens concentrators, power-conversion electronics, and microwave dipole antennas. The omnidirectional antennas must operate in sync to steer the beam as the station rotates relative to the Earth.

SPS (Solar power satellite)



Proposed by: Thales Alenia Space

Features: Nearly 8,000 flexible solar arrays, each 10 meters wide and 80 meters long, are unfurled from roll-out modules and linked together to form two wings. The solar array remains pointed at the sun, so the central transmitter must rotate and also operate with great precision as a phased-array antenna to continually steer the beam onto the ground station.

Electricity made this way, NASA reckoned in its 2024 report, would initially cost 12 to 80 times as much as power generated on the ground, and the first power station would require at least $275 billion in capital investment. Ten of the 13 crucial subsystems required to build such a satellite—including gigawatt-scale microwave beam transmission and robotic construction of kilometers-long, high-stiffness structures in space—rank as “high” or “very high” technical difficulty, according to a 2022 report to ESA by Frazer-Nash, a U.K. consultancy. Plus, there is no known way to safely dispose of such enormous structures, which would share an increasingly crowded GEO with crucial defense, navigation, and communications satellites, notes a 2023 ESA study by the French-Italian satellite maker Thales Alenia Space.

An alternative to microwave transmission would be to beam the energy down to Earth as reflected sunlight. Engineers at Arthur D. Little described the concept in a 2023 ESA study in which they proposed encircling the Earth with about 4,000 aimable mirrors in LEO. As each satellite zips overhead, it would shine an 8-km-wide spotlight onto participating solar farms, allowing the farms to operate a few extra hours each day (if skies are clear). In addition to the problems of clouds and light pollution, the report noted the thorny issue of orbital debris, estimating that each reflector would be penetrated about 75 billion times during its 10-year operating life.

My own assessment, presented at the 2023 European Space Power Conference and published by IEEE, pointed out dubious assumptions and inconsistencies in four space-solar designs that have received serious attention from government agencies. Indeed, the concepts detailed so far all seem to stand on shaky technical ground.

Massive Transmitters and Receiving Stations

The high costs and hard engineering problems that prevent us from building orbital solar-power systems today arise mainly from the enormity of these satellites and their distance from Earth, both of which are unavoidable consequences of the physics of this kind of energy transmission. Only in GEO can a satellite stay (almost) continuously connected to a single receiving station on the ground. The systems must beam down their energy at a frequency that passes relatively unimpeded through all kinds of weather and doesn’t interfere with critical radio systems on Earth. Most designs call for 2.45 or 5.8 gigahertz, within the range used for Wi-Fi. Diffraction will cause the beam to spread as it travels, by an amount that depends on the frequency.

Thales Alenia Space estimated that a transmitter in GEO must be at least 750 meters in diameter to train the bright center of a 5.8-GHz microwave beam onto a ground station of reasonable area over that tremendous distance—65 times the altitude of LEO satellites like Starlink. Even using a 750-meter transmitter, a receiver station in France or the northern United States would fill an elliptical field covering more than 34 square kilometers. That’s more than two-thirds the size of Bordeaux, France, where I live.
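Those numbers can be sanity-checked with a back-of-envelope diffraction estimate. The sketch below is illustrative only: it assumes a uniformly illuminated circular aperture, uses the first-null beamwidth (1.22 λ/D), takes the distance as the GEO altitude, and ignores beam tapering and the longer slant range to mid-latitude sites.

```python
import math

C = 299_792_458.0              # speed of light, m/s
GEO_ALTITUDE_M = 35_786_000.0  # geostationary altitude above the equator, m

def main_lobe_diameter_m(tx_diameter_m: float, freq_hz: float,
                         range_m: float = GEO_ALTITUDE_M) -> float:
    """Null-to-null width of the main beam on the ground, in meters.

    Uses the Airy-pattern relation for a uniformly illuminated circular
    aperture: the half-angle to the first null is 1.22 * wavelength / diameter.
    """
    wavelength_m = C / freq_hz
    half_angle_rad = 1.22 * wavelength_m / tx_diameter_m
    return 2.0 * half_angle_rad * range_m      # small-angle approximation

if __name__ == "__main__":
    spot_m = main_lobe_diameter_m(tx_diameter_m=750.0, freq_hz=5.8e9)
    print(f"Main lobe on the ground: roughly {spot_m / 1000:.1f} km across")
    # Prints about 6 km, in line with the 6-to-10-km-wide rectenna fields
    # described for these concepts (wider at higher-latitude sites, where
    # the beam arrives at a slant and stretches into an ellipse).
```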

“Success hinges on something that cannot be engineered: sustained political will to keep investing in a multidecade R&D program that ultimately could yield machines that can’t put electricity on the grid.”

Huge components come with huge masses, which lead to exorbitant launch costs. Thales Alenia Space estimated that the transmitter alone would weigh at least 250 tonnes and cost well over a billion dollars to build, launch, and ferry to GEO. That estimate, based on ideas from the Caltech group that have yet to be tested in space, seems wildly optimistic; previous detailed transmitter designs are about 30 times heavier.

Because the transmitter has to be big and expensive, any orbiting solar project will maximize the power it sends through the beam, within acceptable safety limits. That’s why the systems evaluated by NASA, ESA, China, and Japan are all scaled to deliver 1–2 GW, the maximum output that utilities and grid operators now say they are willing to handle. It would take two or three of these giant satellites to replace one large retiring coal or nuclear power station.

Energy is lost at each step in the conversion from sunlight to DC electricity, then to microwaves, then back to DC electricity and finally to a grid-compatible AC current. It will be hard to improve much on the 11 percent end-to-end efficiency seen in recent field trials. So the solar arrays and electrical gear must be big enough to collect, convert, and distribute around 9 GW of power in space just to deliver 1 GW to the grid. No electronic switches, relays, or transformers have been designed or demonstrated for spacecraft that can handle voltages and currents anywhere near the required magnitude.
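To see where that factor of roughly nine comes from, it helps to multiply out the conversion chain. The per-stage efficiencies below are illustrative assumptions made for this sketch, not figures from the NASA or ESA studies; they are chosen only so that their product lands near the 11 percent end-to-end value cited above.

```python
# Illustrative end-to-end chain for space-to-grid power beaming.
# Every stage efficiency here is an assumption for the sketch, not a
# measured or published figure.
stages = {
    "sunlight -> DC (solar cells)":       0.30,
    "DC collection and distribution":     0.90,
    "DC -> RF (microwave generation)":    0.70,
    "beam capture at the rectenna":       0.85,
    "RF -> DC (rectification)":           0.75,
    "DC -> grid-compatible AC":           0.93,
}

end_to_end = 1.0
for stage, efficiency in stages.items():
    end_to_end *= efficiency
    print(f"{stage:<34s} {efficiency:>4.0%}   running total: {end_to_end:5.1%}")

grid_power_gw = 1.0
print(f"\nEnd-to-end efficiency: {end_to_end:.1%}")
print(f"Sunlight to be collected in orbit for {grid_power_gw:.0f} GW on the grid: "
      f"about {grid_power_gw / end_to_end:.0f} GW")
```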

Some space solar designs, such as SPS-ALPHA and CASSIOPeiA, would suspend huge reflectors on kilometers-long booms to concentrate sunlight onto high-efficiency solar cells on the back side of the transmitter or intermingled with antennas. Other concepts, such as China’s MR-SPS and the design proposed by Thales Alenia Space, would send the currents through heavy, motorized rotating joints that allow the large solar arrays to face the sun while the transmitter pivots to stay fixed on the receiving station on Earth.

All space solar-power concepts that send energy to Earth via a microwave beam would need a large receiving station on the ground. An elliptical rectenna field 6 to 10 kilometers wide would be covered with antennas and electronics that rectify the microwaves into DC power. Additional inverters would then convert the electricity to grid-compatible AC current. Chris Philpot

The net result, regardless of approach, is an orbiting power station that spans several kilometers, totals many thousands of tonnes, sends gigawatts of continuous power through onboard electronics, and comprises up to a million modules that must be assembled in space—by robots. That is a gigantic leap from the largest satellite and solar array ever constructed in orbit: the 420-tonne, 109-meter International Space Station (ISS), whose 164 solar panels produce less than 100 kilowatts to power its 43 modules.

The ISS has been built and maintained by astronauts, drawing on 30 years of prior experience with the Salyut, Skylab, and Mir space stations. But there is no comparable incremental path to a robot-assembled power satellite in GEO. Successfully beaming down a few megawatts from LEO would be an impressive achievement, but it wouldn’t prove that a full-scale system is feasible, nor would the intermittent power be particularly interesting to commercial utilities.

T Minus...Decades?

NASA’s 2024 report used sensitivity analysis to look for advances, however implausible, that would enable orbital solar power to be commercially competitive with nuclear fission and other low-emissions power. To start, the price of sending a tonne of cargo to LEO on a large reusable rocket, which has fallen 36 percent over the past 10 years, would have to drop by another two-thirds, to $500,000. This assumes that all the pieces of the station could be dropped off in low orbit and then raised to GEO over a period of months by space tugs propelled by electrical ion thrusters rather than conventional rockets. The approach would slow the pace of construction and add to the overall mass and cost. New tugs would have to be developed that could tow up to 100 times as much cargo as the biggest electric tugs do today. And by my calculations, the world’s annual production of xenon—the go-to propellant for ion engines—is insufficient to carry even a single solar-power satellite to GEO.
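The xenon point is a straightforward rocket-equation estimate. The sketch below uses illustrative assumptions rather than figures from the NASA report: a 5,000-tonne satellite delivered to GEO, a low-thrust spiral of about 4.7 km/s from LEO, Hall-thruster-class specific impulse of 2,000 seconds, and global xenon output taken, very roughly, as a few tens of tonnes per year.

```python
import math

# Back-of-envelope xenon demand for electric tugs raising a solar-power
# satellite from LEO to GEO. All inputs are illustrative assumptions.
SATELLITE_MASS_T = 5_000.0       # "many thousands of tonnes": assume 5,000 t delivered
DELTA_V_M_S = 4_700.0            # typical low-thrust spiral, LEO to GEO
ISP_S = 2_000.0                  # Hall-thruster-class specific impulse
G0 = 9.80665                     # standard gravity, m/s^2
XENON_SUPPLY_T_PER_YEAR = 60.0   # rough order of global annual xenon production

exhaust_velocity = ISP_S * G0                            # ~19.6 km/s
mass_ratio = math.exp(DELTA_V_M_S / exhaust_velocity)    # Tsiolkovsky rocket equation
propellant_t = SATELLITE_MASS_T * (mass_ratio - 1.0)     # propellant for 5,000 t delivered

print(f"Xenon needed: about {propellant_t:,.0f} tonnes")
print(f"That is roughly {propellant_t / XENON_SUPPLY_T_PER_YEAR:.0f} years of "
      f"global xenon production for a single satellite")
```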

Thales Alenia Space looked at a slightly more realistic option: using a fleet of conventional rockets as big as SpaceX’s new Starship—the largest rocket ever built—to ferry loads from LEO to GEO, and then back to LEO for refueling from an orbiting fuel depot. Even if launch prices plummeted to $200,000 a tonne, they calculated, electricity from their system would be six times as expensive as NASA’s projected cost for a terrestrial solar farm outfitted with battery storage—one obvious alternative.

What else would have to go spectacularly right? In NASA’s cost-competitive scenario, the price of new, specialized spaceships that could maintain the satellite for 30 years—and then disassemble and dispose of it—would have to come down by 90 percent. The efficiency of commercially produced, space-qualified solar cells would have to soar from 32 percent today to 40 percent, while falling in cost. Yet over the past 30 years, big gains in the efficiency of research cells have not translated well to the commercial cells available at low cost [see chart, “Not So Fast”].

Is it possible for all these things to go right simultaneously? Perhaps. But wait—there’s more that can go wrong.

The Toll of Operating a Solar Plant in Space

Let’s start with temperature. Gigawatts of power coursing through the system will make heat removal essential because solar cells lose efficiency and microcircuits fry when they get too hot. A couple of dozen times a year, the satellite will pass suddenly into the utter darkness of Earth’s shadow, causing temperatures to swing by around 300 °C, well beyond the usual operating range of electronics. Thermal expansion and contraction may cause large structures on the station to warp or vibrate.

Then there's the physical toll of operating in space. Vibrations and torques exerted by attitude-control thrusters, plus the pressure of solar radiation on the massive sail-like arrays, will continually bend and twist the station this way and that. The sprawling arrays will suffer unavoidable strikes from man-made debris and micrometeorites, perhaps even a malfunctioning construction robot. As the number of space power stations increases, we could see a rapid rise in the threat of Kessler syndrome, a runaway cascade of collisions that is every space operator's nightmare.

Probably the toughest technical obstacle blocking space solar power is a basic one: shaping and aiming the beam. The transmitter is not a dish, like a radio telescope in reverse. It’s a phased array, a collection of millions of little antennas that must work in near-perfect synchrony, each contributing its piece to a collective waveform aimed at the ground station.

Like people in a stadium crowd raising their arms on cue to do "the wave," the elements of a phased array must act in tight coordination. The beam forms properly only if every element on the emitter syncs the phase of its transmission to align precisely with the transmissions of its neighbors and with an incoming beacon signal sent from the ground station. Timing errors of just a few picoseconds translate into phase errors that can cause the microwave beam to blur or drift off its target. How can the system synchronize elements separated by as much as a kilometer with such incredible accuracy? If you have the answer, please patent and publish it, because this problem currently has engineers stumped.
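To put numbers on those picoseconds: a timing offset between elements maps directly to a phase error at the beam frequency, with phase error in degrees equal to 360 × f × Δt. The tolerances in the sketch below are illustrative, not requirements taken from any of the designs discussed here.

```python
# Convert element-to-element timing error into phase error for a phased array.
# The jitter values are illustrative, not requirements from any cited design.
FREQ_HZ = 5.8e9  # one commonly proposed beaming frequency

def phase_error_degrees(timing_error_s: float, freq_hz: float = FREQ_HZ) -> float:
    """Phase error contributed by a timing offset, in degrees."""
    return 360.0 * freq_hz * timing_error_s

for jitter_ps in (1, 5, 20):
    error_deg = phase_error_degrees(jitter_ps * 1e-12)
    print(f"{jitter_ps:>3} ps of timing error -> {error_deg:5.1f} degrees of phase error")

# One RF cycle at 5.8 GHz lasts only about 172 ps, so a few picoseconds of
# uncorrected jitter is already a noticeable slice of a wavelength; spread
# across millions of elements, it defocuses the beam or steers it off target.
```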

There is no denying the beauty of the idea of turning to deep space for inexhaustible electricity. But nature gets a vote. As Lao Tzu observed long ago in the Tao Te Ching, “The truth is not always beautiful, nor beautiful words the truth.”

  • ✇IEEE Spectrum
  • Femtosecond Lasers Solve Solar Panels’ Recycling IssueEmily Waltz

Femtosecond Lasers Solve Solar Panels’ Recycling Issue

9 May 2024, 16:35


Solar panels are built to last 25 years or more in all kinds of weather. Key to this longevity is a tight seal of the photovoltaic materials. Manufacturers achieve the seal by laminating a panel’s silicon cells with polymer sheets between glass panes. But the sticky polymer is hard to separate from the silicon cells at the end of a solar panel’s life, making recycling the materials more difficult.

Researchers at the U.S. National Renewable Energy Lab (NREL) in Golden, Colo., say they’ve found a better way to seal solar modules. Using a femtosecond laser, the researchers welded together solar panel glass without the use of polymers such as ethylene vinyl acetate. These glass-to-glass precision welds are strong enough for outdoor solar panels, and are better at keeping out corrosive moisture, the researchers say.

A femtosecond laser welds a small piece of test glass. NREL

“Solar panels are not easily recycled,” says David Young, a senior scientist at NREL. “There are companies that are doing it now, but it’s a tricky play between cost and benefit, and most of the problem is with the polymers.” With no adhesive polymers involved, recycling facilities can more easily separate and reuse the valuable materials in solar panels such as silicon, silver, copper, and glass.

Because of the polymer problem, many recycling facilities just trash the polymer-covered silicon cells and recover only the aluminum frames and glass encasements, says Silvana Ovaitt, a photovoltaic (PV) analyst at NREL. This partial recycling wastes the most valuable materials in the modules.

“At some point there’s going to be a huge amount of spent panels out there, and we want to get it right, and make it easy to recycle.” —David Young, NREL

Finding cost-effective ways to recycle all the materials in solar panels will become increasingly important. Manufacturers globally are deploying enough solar panels to produce an additional 240 gigawatts each year. That rate is projected to increase to 3 terawatts by 2030, Ovaitt says. By 2050, anywhere from 54 to 160 million tonnes of PV panels will have reached the end of their life-spans, she says.

“At some point there’s going to be a huge amount of spent panels out there, and we want to get it right, and make it easy to recycle,” says Young. “There’s no reason not to.” A change in manufacturing could help alleviate the problem—although not for at least another 25 years, when panels constructed with the new technique would be due to be retired.

In NREL's technique, the glass that encases the solar cells in a PV panel is welded together by precision melting. The precision melting is accomplished with femtosecond lasers, which pack a tremendous number of photons into a very short time scale: about 1 millionth of 1 billionth of a second. The photon flux from the laser is so intense that it changes the optical absorption process in the glass, says Young. The process changes from linear (normal absorption) to nonlinear, which allows the glass to absorb energy from photons that it would normally not absorb, he says.
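For a sense of the intensities involved, peak power is simply pulse energy divided by pulse duration. The values in this sketch are generic ultrafast-laser numbers chosen for illustration; they are not the parameters reported by the NREL team.

```python
# Rough scale of a single femtosecond pulse: peak power and photon count.
# Pulse energy, duration, and wavelength are generic illustrative values,
# not the parameters used in the NREL experiments.
PULSE_ENERGY_J = 10e-6       # 10 microjoules
PULSE_DURATION_S = 200e-15   # 200 femtoseconds
WAVELENGTH_M = 1.03e-6       # ~1030 nm, common for ytterbium ultrafast lasers

PLANCK_J_S = 6.626e-34
LIGHT_SPEED_M_S = 2.998e8

peak_power_w = PULSE_ENERGY_J / PULSE_DURATION_S
photon_energy_j = PLANCK_J_S * LIGHT_SPEED_M_S / WAVELENGTH_M
photons_per_pulse = PULSE_ENERGY_J / photon_energy_j

print(f"Peak power:        {peak_power_w / 1e6:.0f} MW")
print(f"Photons per pulse: {photons_per_pulse:.1e}")
# Tens of megawatts focused into a spot a few micrometers across is what
# drives the nonlinear absorption described above, even though the laser's
# average power stays modest.
```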

The intense beam, focused near the interface of the two sheets of glass, generates a small plasma of ionized glass atoms. This plasma allows the glass to absorb most of the photons from the laser and locally melt the two glass sheets to form a weld. Because there’s no open surface, there is no evaporation of the molten glass during the welding process. The lack of evaporation from the molten pool allows the glass to cool in a stress-free state, leaving a very strong weld.

A femtosecond laser creates precision welds between two glass plates. David Young/NREL

In stress tests conducted by the NREL group, the welds proved almost as strong as the glass itself, as if there were no weld at all. Young and his colleagues described their proof-of-concept technique in a paper published 21 February in the IEEE Journal of Photovoltaics.

This is the first time a femtosecond laser has been used to test glass-to-glass welds for solar modules, the authors say. The cost of such lasers has declined over the last few years, so researchers are finding uses for them in a wide range of applications. For example, femtosecond lasers have been used to create 3D midair plasma displays and to turn tellurite glass into a semiconductor crystal. They’ve also been used to weld glass in medical devices.

Prior to femtosecond lasers, research groups attempted to weld solar panel glass with nanosecond lasers. But those lasers, with pulses a million times as long as those of a femtosecond laser, couldn’t create a glass-to-glass weld. Researchers tried using a filler material called glass frit in the weld, but the bonds of the dissimilar materials proved too brittle and weak for outdoor solar panel designs, Young says.

In addition to making recycling easier, NREL’s design may make solar panels last longer. Polymers are poor barriers to moisture compared with glass, and the material degrades over time. This lets moisture into the solar cells, and eventually leads to corrosion. “Current solar modules aren’t watertight,” says Young. That will be a problem for perovskite cells, a leading next-generation solar technology that is extremely sensitive to moisture and oxygen.

“If we can provide a different kind of seal where we can eliminate the polymers, not only do we get a better module that lasts longer, but also one that is much easier to recycle,” says Young.

  • ✇Slashdot
  • AI Needs So Much Electricity That Tech Companies Are Getting Into Energy Businessmsmash

AI Needs So Much Electricity That Tech Companies Are Getting Into Energy Business

By: msmash
22 April 2024, 18:01
An anonymous reader shares a report: To accommodate their pivots to artificial intelligence, tech companies are increasingly investing in ways to meet AI's immense electricity needs. Most recently, OpenAI CEO Sam Altman invested in Exowatt, a company using solar power to feed data centers, according to the Wall Street Journal. That's on the heels of OpenAI partner Microsoft working on getting approval for nuclear energy to help power its AI operations. Last year Amazon, which is a major investor in AI company Anthropic, said it invested in more than 100 renewable energy projects, making it the "world's largest corporate purchaser of renewable energy for the fourth year in a row."

Read more of this story at Slashdot.

  • ✇Slashdot
  • What Happened After Amazon Electrified Its Delivery Fleet?EditorDavid

What Happened After Amazon Electrified Its Delivery Fleet?

22 April 2024, 09:44
Bloomberg looks at America's biggest operator of private electrical vehicle charging infrastructure: Amazon. "In a little more than two years, Amazon has installed more than 17,000 chargers at about 120 warehouses around the U.S." — and had Rivian build 13,500 custom electric delivery vans.

Amazon has a long way to go. The Seattle-based company says its operations emitted about 71 million metric tons of carbon dioxide equivalent in 2022, up by almost 40% since Jeff Bezos's 2019 vow that his company would eventually stop contributing to the emissions warming the planet. Many of Amazon's emissions come from activities — air freight, ocean shipping, construction and electronics manufacturing, to name a few — that lack a clear, carbon-free alternative, today or any time soon. The company has not made much progress on decarbonization of long-haul trucking, whose emissions tend to be concentrated in industrial and outlying areas rather than the big cities that served as the backdrop for Amazon's electric delivery vehicle rollout...

Another lesson Amazon learned is one the company isn't keen to talk about: Going green can be expensive, at least initially. Based on the type of chargers Amazon deploys — almost entirely midtier chargers called Level 2 in the industry — the hardware likely cost between $50 million and $90 million, according to Bloomberg estimates based on cost estimates supplied by the National Renewable Energy Laboratory. Factoring in costs beyond the plugs and related hardware — like digging through a parking lot to lay wires or set up electrical panels and cabinets — could double that sum. Amazon declined to comment on how much it spent on its EV charging push.

In addition to the expense of the chargers, electric vehicle-fleet operators are typically on the hook for utility upgrades. When companies request the sort of increases to electrical capacity that Amazon has — the Maple Valley warehouse has three megawatts of power for its chargers — they tend to pay for them, making the utility whole for work done on behalf of a single customer. Amazon says it pays upgrade costs as determined by utilities, but that in some locations the upgrades fit within the standard service power companies will handle out of their own pocket.

The article also includes this quote from Kellen Schefter, transportation director at the Edison Electric Institute trade group (which worked with Amazon on its electricity needs). "Amazon's scale matters. If Amazon can show that it meets their climate goals while also meeting their package-delivery goals, we can show this all actually works."

Read more of this story at Slashdot.

  • ✇IEEE Spectrum
  • Caltech’s SSPD-1 Is a New Idea for Space-Based SolarW. Wayt Gibbs

Caltech’s SSPD-1 Is a New Idea for Space-Based Solar

11 April 2024, 23:29


The idea of powering civilization from gigantic solar plants in orbit is older than any space program, but despite seven decades of rocket science, the concept—to gather near-constant sunlight tens of thousands of kilometers above the equator, beam it to Earth as microwaves, and convert it to electricity—still remains tantalizingly over the horizon. Several recently published deep-dive analyses commissioned by NASA and the European Space Agency have thrown cold water on the hope that space solar power could affordably generate many gigawatts of clean energy in the near future. And yet the dream lives on.

The dream achieved a kind of lift-off in January 2023. That’s when SSPD-1, a solar space-power demonstrator satellite carrying a bevy of new technologies designed at the California Institute of Technology, blasted into low Earth orbit for a year-long mission. Mindful of concerns about the technical feasibility of robotic in-space assembly of satellites, each an order of magnitude larger than the International Space Station, the Caltech team has been looking at very different approaches to space solar power.

For an update on what the SSPD-1 mission achieved and how it will shape future concepts for space solar-power satellites, IEEE Spectrum spoke with Ali Hajimiri, an IEEE Fellow, professor of electrical engineering at Caltech, and codirector of the school’s space-based solar power project. The interview has been condensed and edited for length and clarity.

SSPD-1 flew with several different testbeds. Let’s start with the MAPLE (Microwave Array for Power-transfer Low-orbit Experiment) testbed for wireless power transmission: When you and your team went up on the roof of your building on campus in May 2023 and aimed your antennas to where the satellite was passing over, did your equipment pick up actual power being beamed down or just a diagnostic signal?

Ali Hajimiri is the codirector of Caltech's space-based solar power project. Caltech

Ali Hajimiri: I would call it a detection. The primary purpose of the MAPLE experiment was to demonstrate wireless energy transfer in space using flexible, lightweight structures and also standard CMOS integrated circuits. On one side are the antennas that transmit the power, and on the flip side are our custom CMOS chips that are part of the power-transfer electronics. The point of these things is to be very lightweight, to reduce the cost of launch into space, and to be very flexible for storage and deployment, because we want to wrap it and unwrap it like a sail.

I see—wrap them up to fit inside a rocket and then unwrap and stretch them flat once they are released into orbit.

Hajimiri: MAPLE’s primary objective was to demonstrate that these flimsy-looking arrays and CMOS integrated circuits can operate in space. And not only that, but that they can steer wireless energy transfer to different targets in space, different receivers. And by energy transfer I mean net power out at the receiver side. We did demonstrate power transfer in space, and we made a lot of measurements. We are writing up the details now and will publish those results.

The second part of this experiment—really a stretch goal—was to demonstrate that ability to point the beam to the right place on Earth and see whether we picked up the expected power levels. Now, the larger the transmission array is in space, the greater the ability to focus the energy to a smaller spot on the ground.

Right, because diffraction of the beam limits the size of the spot, as a function of the transmitter size and the frequency of the microwaves.

Hajimiri: Yes. The array we had in space for MAPLE was very small. As a result, the transmitter spread the power over a very large area. So we captured a very small fraction of the energy—that’s why I call it a detection; it was not net positive power. But we measured it. We wanted to see: Do we get what we predict from our calculations? And we found it was in the right range of power levels we expected from an experiment like that.

So, comparable in power to the signals that come down in standard communication satellite operations.

Hajimiri: But done using this flexible, lightweight system—that’s what makes it better. You can imagine developing the next generation of communication satellites or space-based sensors being built with these to make the system significantly cheaper and lighter and easier to deploy. The satellites used now for Starlink and Kuiper—they work great, but they are bulky and heavy. With this technology for the next generation, you could deploy hundreds of them with a very small and much cheaper launch. It could lead to a much more effective Internet in the sky.

Tell me about ALBA, the experiment on the mission that tested 32 different and novel kinds of photovoltaic solar cells to see how they perform in space. What were the key takeaways?

Hajimiri: My Caltech colleague Harry Atwater led that experiment. What works best on Earth is not necessarily what works best in space. In space there is a lot of radiation damage, and they were able to measure degradation rates over months. On the other hand, there is no water vapor in space, no air oxidation, which is good for materials like perovskites that have problems with those things. So Harry and his team are exploring the trade-offs and developing a lot of new cells that are much cheaper and lighter: Cells made with thin films of perovskites or semiconductors like gallium arsenide, cells that use quantum dots, or use waveguides or other optics to concentrate the light. Many of these cells show very large promise. Very thin layers of gallium arsenide, in particular, seem very conducive to making cells that are lightweight but very high performance and much lower in cost because they need very little semiconductor material.

Many of the design concepts for solar-power satellites, including one your group published in a 2022 preprint, incorporate concentrators to reduce the amount of photovoltaic area and mass needed.

Hajimiri: A challenge with that design is the rather narrow acceptance angle: Things have to be aligned just right so that the focused sunlight hits the cell properly. That’s one of the reasons we’ve pulled away from that approach and moved toward a flat design.

A view from inside MAPLE: On the right is the array of flexible microwave power transmitters, and on the left are receivers they transmit that power to. Caltech

There are some other major differences between the Caltech power satellite design and the other concepts out there. For example, the other designs I’ve seen would use microwaves in the Wi-Fi range, between 2 and 6 gigahertz, because cheap components are available for those frequencies. But yours is at 10 GHz?

Hajimiri: Exactly—and it's a major advantage because when you double the frequency, the sizes of the systems in space and on the ground go down by a factor of four. We can do that basically because we build our own microchips and have a lot of capabilities in millimeter-wave circuit design. We've actually demonstrated some of these flexible panels that work at 28 GHz.

And your design avoids the need for robots to do major assembly of components in space?

Hajimiri: Our idea is to deploy a fleet of these sail-like structures that then all fly in close formation. They are not attached to each other. That translates to a major cost reduction. Each one of them has little thrusters on the edges, and it contains internal sensors that let it measure its own shape as it flies and then correct the phase of its transmission accordingly. Each would also track its own position relative to the neighbors and its angle to the sun.

From your perspective as an electrical engineer, what are the really hard problems still to be solved?

Hajimiri: Time synchronization between all parts of the transmitter array is incredibly crucial and one of the most interesting challenges for the future.

Because the transmitter is a phased array, each of the million little antennas in the array has to synchronize precisely with the phase of its neighbors in order to steer the beam onto the receiver station on the ground.

Hajimiri: Right. To give you a sense of the level of timing precision that we need across an array like this: We have to reduce phase noise and timing jitter to just a few picoseconds across the entire kilometer-wide transmitter. In the lab, we do that with wires of precise length or optical fibers that feed into CMOS chips with photodiodes built into them. We have some ideas about how to do that wirelessly, but we have no delusions: This is a long journey.

What other challenges loom large?

Hajimiri: The enormous scale of the system and the new manufacturing infrastructure needed to make it is very different from anything humanity has ever built. If I were to rank the challenges, I would put getting the will, resources, and mindshare behind a project of this magnitude as number one.

  • ✇Slashdot
  • Data Centers Are Turning to an Old Source of Power: CoalEditorDavid

Data Centers Are Turning to an Old Source of Power: Coal

20 April 2024, 19:34
The Washington Post reports on a new situation in Virginia: There, massive data centers with computers processing nearly 70 percent of global digital traffic are gobbling up electricity at a rate officials overseeing the power grid say is unsustainable unless two things happen: Several hundred miles of new transmission lines must be built, slicing through neighborhoods and farms in Virginia and three neighboring states. And antiquated coal-powered electricity plants that had been scheduled to go offline will need to keep running to fuel the increasing need for more power, undermining clean energy goals...

The $5.2 billion effort has fueled a backlash against data centers through the region, prompting officials in Virginia to begin studying the deeper impacts of an industry they've long cultivated for the hundreds of millions of dollars in tax revenue it brings to their communities. Critics say it will force residents near the [West Virginia] coal plants to continue living with toxic pollution, ironically to help a state — Virginia — that has fully embraced clean energy. And utility ratepayers in the affected areas will be forced to pay for the plan in the form of higher bills, those critics say.

But PJM Interconnection, the regional grid operator, says the plan is necessary to maintain grid reliability amid a wave of fossil fuel plant closures in recent years, prompted by the nation's transition to cleaner power. Power lines will be built across four states in a $5.2 billion effort that, relying on coal plants that were meant to be shuttered, is designed to keep the electric grid from failing amid spiking energy demands. Cutting through farms and neighborhoods, the plan converges on Northern Virginia, where a growing data center industry will need enough extra energy to power 6 million homes by 2030...

There are nearly 300 data centers now in Virginia. With Amazon Web Services pursuing a $35 billion data center expansion in Virginia, rural portions of the state are the industry's newest target for development. The growth means big revenue for the localities that host the football-field-size buildings. Loudoun [County] collects $600 million in annual taxes on the computer equipment inside the buildings, making it easier to fund schools and other services. Prince William [County], the second-largest market, collects $100 million per year.

The article adds that one data center "can require 50 times the electricity of a typical office building," according to the U.S. Department of Energy. "Multiple-building data center complexes, which have become the norm, require as much as 14 to 20 times that amount." One small power company even told the grid operator that data centers were already consuming 59% of the power they produce...

Read more of this story at Slashdot.

  • ✇Semiconductor Engineering
  • Maximizing Energy Efficiency For Automotive ChipsWilliam Ruby

Maximizing Energy Efficiency For Automotive Chips

7 March 2024, 09:06

Silicon chips are central to today’s sophisticated advanced driver assistance systems, smart safety features, and immersive infotainment systems. Industry sources estimate that now there are over 1,000 integrated circuits (ICs), or chips, in an average ICE car, and twice as many in an average EV. Such a large amount of electronics translates into kilowatts of power being consumed – equivalent to a couple of dishwashers running continuously. For an ICE vehicle, this puts a lot of stress on the vehicle’s electrical and charging system, leading automotive manufacturers to consider moving to 48V systems (vs. today’s mainstream 12V systems). These 48V systems reduce the current levels in the vehicle’s wiring, enabling the use of lower cost smaller-gauge wire, as well as delivering higher reliability. For EVs, higher energy efficiency of on-board electronics translates directly into longer range – the primary consideration of many EV buyers (second only to price). Driver assistance and safety features often employ redundant component techniques to ensure reliability, further increasing vehicle energy consumption. Lack of energy efficiency for an EV also means more frequent charging, further stressing the power grid and producing a detrimental effect on the environment. All these considerations necessitate the need for a comprehensive energy-efficient design methodology for automotive ICs.
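The wiring benefit of a 48V bus follows directly from I = P/V: for the same load power, quadrupling the voltage cuts the current, and with it the required conductor cross-section, to one quarter. A quick illustration, using an assumed 3 kW electronics load rather than any published vehicle figure:

```python
# Current draw for the same electronics load on a 12 V vs. a 48 V vehicle bus.
# The 3 kW load is an illustrative assumption, not a measured vehicle number.
LOAD_W = 3_000.0
BASELINE_V = 12.0

baseline_current_a = LOAD_W / BASELINE_V
for bus_voltage_v in (12.0, 48.0):
    current_a = LOAD_W / bus_voltage_v                     # I = P / V
    loss_vs_12v = (current_a / baseline_current_a) ** 2    # I^2 * R loss, same wire
    print(f"{bus_voltage_v:>4.0f} V bus: {current_a:6.1f} A, "
          f"resistive loss in the same wire = {loss_vs_12v:.2f}x the 12 V case")
```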

What’s driving demand for compute power in cars?

Classification and processing of massive amounts of data from multiple sources in automotive applications – video, audio, radar, lidar – results in a high degree of complexity in automotive ICs as software algorithms require large amounts of compute power. Hardware architectural decisions, and even hardware-software partitioning, must be done with energy efficiency in mind. There is a plethora of tradeoffs at this stage:

  • Flexibility of a general-purpose CPU-based architecture vs. efficiency of a dedicated digital signal processor (DSP) vs. a hardware accelerator
  • Memory sub-system design: how much is required, how it will be partitioned, how much precision is really needed, just to name a few considerations

In order to enable reliable decisions, architects must have access to a system that models, in a robust manner, power, performance, and area (PPA) characteristics of the hardware, as well as use cases. The idea is to eliminate error-prone estimates and guesswork.

To improve energy efficiency, automotive IC designers also must adopt many of the power reduction techniques traditionally used by architects and engineers in the low-power application space (e.g. mobile or handheld devices), such as power domain shutoff, voltage and frequency scaling, and effective clock and data gating. These techniques can be best evaluated at the hardware design level (register transfer level, or RTL) – but with the realistic system workload. As a system workload – either a boot sequence or an application – is millions of clock cycles long, only an emulation-based solution delivers a practical turnaround time (TAT) for power analysis at this stage. This power analysis can reveal intervals of wasted power – power consumption bugs – whether due to active clocks when the data stream is not active, redundant memory access when the address for the read operation doesn’t change for many clock cycles (and/or when the address and data input don’t change for the write operation over many cycles), or unnecessary data toggles while clocks are gated off.
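As a concrete, if simplified, illustration of that kind of analysis, the sketch below scans a cycle-by-cycle activity trace for one obvious class of wasted power: cycles in which a register bank is clocked even though its data input has not changed. It is a toy stand-in written for this article, not part of any commercial emulation or power-analysis flow.

```python
# Toy power-bug scan: flag cycles where a register bank is clocked but its
# data input is unchanged -- a classic clock-gating opportunity. This is a
# simplified illustration, not part of any commercial tool flow.
from dataclasses import dataclass

@dataclass
class Cycle:
    clock_enabled: bool
    data_word: int

def wasted_clock_cycles(trace: list[Cycle]) -> list[int]:
    """Return indices of cycles that clock the register with unchanged data."""
    wasted = []
    previous_data = None
    for index, cycle in enumerate(trace):
        if (cycle.clock_enabled and previous_data is not None
                and cycle.data_word == previous_data):
            wasted.append(index)
        if cycle.clock_enabled:
            previous_data = cycle.data_word
    return wasted

if __name__ == "__main__":
    trace = [Cycle(True, 0xA5), Cycle(True, 0xA5), Cycle(True, 0xA5),
             Cycle(False, 0xA5), Cycle(True, 0x3C), Cycle(True, 0x3C)]
    idle = wasted_clock_cycles(trace)
    print(f"{len(idle)} of {len(trace)} cycles clock the register "
          f"with unchanged data: {idle}")
    # A real flow would weight each flagged interval by the register bank's
    # clock-tree and flip-flop power to rank the biggest savings opportunities.
```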

To cope with the huge amount of data and the requirement to process that data in real time (or near real time), automotive designers employ artificial intelligence (AI) algorithms, both in software and in hardware. Millions of multiply-accumulate (MAC) operations per second and other arithmetic-intensive computations to process these algorithms give rise to a significant amount of wasted power due to glitches – multiple signal transitions per clock cycle. At the RTL stage, with the advanced RTL power analysis tools available today, it is possible to measure the amount of wasted power due to glitches as well as to identify glitch sources. Equipped with this information, an RTL design engineer can modify their RTL source code to lower the glitch activity, reduce the size of the downstream logic, or both, to reduce power.
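In the same spirit, glitch power can be bounded by counting how many times a net toggles within a clock cycle beyond the single functional transition it needs. The sketch below works on a toy per-cycle toggle count; it illustrates the bookkeeping, not how any particular RTL power tool computes glitch energy.

```python
# Toy glitch accounting: toggles beyond what is needed to reach the net's
# final value in each cycle are wasted transitions caused by unequal signal
# arrival times through combinational logic. Illustrative bookkeeping only.
toggles_per_cycle = [1, 3, 1, 5, 2, 1, 4]   # toggles observed on one net, per cycle

# An odd toggle count means the net settles at a new value (1 functional
# transition); an even count means it returns to its old value (all glitch).
functional = sum(count % 2 for count in toggles_per_cycle)
total = sum(toggles_per_cycle)
glitch = total - functional

print(f"Total transitions:      {total}")
print(f"Functional transitions: {functional}")
print(f"Glitch transitions:     {glitch} "
      f"({glitch / total:.0%} of this net's switching energy is potentially wasted)")
```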

Working together with the RTL design engineer is another critical persona – the verification engineer. In order to verify the functional behavior of the design, the verification engineer is no longer dealing just with the RTL source: they also have to verify the proper functionality of the global power reduction techniques such as power shutoff and voltage/frequency scaling. Doing so requires a holistic approach that leverages a comprehensive description of power intent, such as the Unified Power Format (UPF). All verification technologies – static, formal, emulation, and simulation – can then correctly interpret this power intent to form an effective verification methodology.

Power intent also carries through to the implementation part of the flow, as well as signoff. During the implementation process, power can be further optimized through physical design techniques while conforming to timing and area constraints. Highly accurate power signoff is then used to check conformance to power specifications before tape-out.

Design and verification flow for more energy-efficient automotive SoCs

Synopsys delivers a complete end-to-end solution that allows IC architects and designers to drive energy efficiency in automotive designs. This solution spans the entire design flow from architecture to RTL design and verification, to emulation-driven power analysis, to implementation and, ultimately, to power signoff. Automotive IC design teams can now put in place a rigorous methodology that enables intelligent architectural decisions, RTL power analysis with consistent accuracy, power-aware physical design, and foundry-certified power signoff.

The post Maximizing Energy Efficiency For Automotive Chips appeared first on Semiconductor Engineering.

  • ✇GAME PRESS
  • New UPS backup power with a pure sine wave for sensitive electronicsMobile Press

New UPS backup power with a pure sine wave for sensitive electronics

13 February 2024, 07:40


CyberPower, a manufacturer of backup power solutions, introduces new uninterruptible power supply (UPS) models in its PFC Sinewave line – the CyberPower CP1350/1600EPFCLCD – with a pure sine wave output, which lets them support power supplies with active power factor correction (Active PFC).

They are primarily intended for office environments, small servers, graphics workstations, and home use.

Easy operation and new technologies

The PFC Sinewave line uses a line-interactive topology with automatic voltage regulation (AVR), which delivers a stabilized, pure sine wave output to connected devices. They are thus protected against overvoltage, undervoltage, and voltage surges during a power outage.

The models are also compatible with devices that use active PFC power supplies. The UPS adjusts its output to match the input requirements of the connected power supply, preventing overload and subsequent damage.

Source: CyberPower

In its tower form factor, the rear panel offers Schuko outlets for connecting a total of six devices. All outlets provide both surge protection and battery backup. The UPS also protects data lines at speeds of up to 1 Gb/s via RJ45 ports: simply route the cable from the modem or the wall through it. In the event of a surge, for example during a thunderstorm, the entire data network, routers, connected servers, computers, TVs, and game consoles are protected from a destructive discharge.

The UPS units are also fitted with a new color LCD panel that gives administrators a quick overview of the power system's status, backed by audible alarms for battery state changes, low voltage, overload, or faults. For real-time monitoring and configuration, however, the PowerPanel Business software is the better choice; it is available as a free download from the manufacturer's website. The UPS can optionally be extended with the RMCARD205 network card for full SNMP functionality, or with the RCCARD100 (RJ45) or RWCCARD100 (Wi-Fi) cloud-connectivity cards.

Because the UPS is expected to sit close to users, it also offers USB ports for charging everyday electronics, specifically one USB-A and one USB-C. The models differ in their charging standards: the lower model delivers at most 5 V/2.4 A, while the higher one manages up to 15 V/2 A over USB-C.

Source: CyberPower

Performance and runtime

The new UPS units come in two power ratings: the CP1350EPFCLCD has a capacity of 1,350 VA and an output of 810 W, while the CP1600EPFCLCD offers 1,600 VA and up to 1,000 W. At half load they can keep devices powered for another 9.6/9.7 minutes, and at full load for 2.1/2.6 minutes – enough time to save files safely so that data is not lost or corrupted.

Key features of the CyberPower CP1350/1600EPFCLCD:

  • Line-interactive topology
  • Energy-saving technology
  • Compatible with active PFC power supplies
  • Pure sine wave output
  • Automatic voltage regulation (AVR)
  • Tilting color LCD panel and status indicator
  • Optional SNMP/HTTP remote management and cloud connectivity
  • Free PowerPanel management software
  • Surge and overvoltage protection
  • USB-C and USB-A charging ports
  • Data-line protection (RJ45/RJ11 ports)

Both models are available in the Czech Republic from February 2024.

The article New UPS backup power with a pure sine wave for sensitive electronics first appeared on MOBILE PRESS.

The article New UPS backup power with a pure sine wave for sensitive electronics first appeared on GAME PRESS.

  • ✇Semiconductor Engineering
  • Chip Industry Week In ReviewThe SE Staff

Chip Industry Week In Review

1 March 2024, 09:01

By Adam Kovac, Karen Heyman, and Liz Allan.

India approved the construction of two fabs and a packaging house, for a total investment of about $15.2 billion, according to multiple sources. One fab will be jointly owned by Tata and Taiwan’s Powerchip. The second fab will be a joint investment between CG Power, Japan’s Renesas Electronics, and Thailand’s Stars Microelectronics. Tata will run the packaging facility, as well. India expects these efforts will add 20,000 advanced technology jobs and 60,000 indirect jobs, according to the Times of India. The country has been talking about building a fab for at least the past couple of decades, but funding never materialized.

The U.S. Department of Commerce (DoC) issued a CHIPS Act-based Notice of Funding Opportunity for R&D to establish and accelerate domestic capacity for advanced packaging substrates and substrate materials. The U.S. Secretary of Commerce said the government is prioritizing CHIPS Act funding for projects that will be operational by 2030 and anticipates America will produce 20% of the world’s leading-edge logic chips by the end of the decade.

The top three foundries plan to implement backside power delivery as soon as the 2nm node, setting the stage for faster and more efficient switching in chips, reduced routing congestion, and lower noise across multiple metal layers. But this novel approach to optimizing logic performance depends on advances in lithography, etching, polishing, and bonding processes.

Intel spun out Altera as a standalone FPGA company, the culmination of a rebranding and reorganization of its former Programmable Solutions Group. The move follows Intel’s decision to keep Intel Foundry at arm’s length, with a clean line between the foundry and the company’s processor business.

Multiple new hardware micro-architecture vulnerabilities were published in the latest Common Weakness Enumeration release this week, all related to transient execution (CWE 1420-1423).

The U.S. Office of the National Cyber Director (ONCD) published a technical report calling for the adoption of memory safe programming languages, aiming to reduce the attack surface in cyberspace and anticipate systemic security risk with better diagnostics. The DoC also is seeking information ahead of an inquiry into Chinese-made connected vehicles “to understand the extent of the technology in these cars that can capture wide swaths of data or remotely disable or manipulate connected vehicles.”

Quick links to more news:

Design and Power
Manufacturing and Test
Automotive
Security
Pervasive Computing and AI
Events

Design and Power

Micron began mass production of a new high-bandwidth chip for AI. The company said the HBM3E will be a key component in NVIDIA's H200 Tensor Core GPUs, set to begin shipping in the second quarter of 2024. HBM is a key component of 2.5D advanced packages.

Samsung developed a 36GB HBM3E 12H DRAM, saying it sets new records for bandwidth. The company achieved this by using advanced thermal compression non-conductive film, which allowed it to cram 12 layers into the area normally taken up by 8. This is a novel way of increasing DRAM density.

Keysight introduced QuantumPro, a design and simulation tool, plus workflow, for quantum computers. It combines five functionalities into the Advanced Design System (ADS) 2024 platform. Keysight also introduced its AI Data Center Test Platform, which includes pre-packaged benchmarking apps and dataset analysis tools.

Synopsys announced a 1.6T Ethernet IP solution, including 1.6T MAC and PCS Ethernet controllers, 224G Ethernet PHY IP, and verification IP.

Tenstorrent, Japan's Leading-Edge Semiconductor Technology Center (LSTC), and Rapidus are co-designing AI chips. LSTC will use Tenstorrent's RISC-V and Chiplet IP for its forthcoming edge 2nm AI accelerator.

This week’s Systems and Design newsletter features these top stories:

  • 2.5D Integration: Big Chip Or Small PCB: Defining whether a 2.5D device is a PCB shrunk to fit into a package or a chip that extends beyond the limits of a single die can have significant design consequences.
  • Commercial Chiplets: Challenges of establishing a commercial chiplet.
  • Accellera Preps New Standard For Clock-Domain Crossing: New standard aims to streamline the clock-domain crossing flow.
  • Thinking Big: From Chips To Systems: Aart de Geus discusses the shift from chips to systems, next-generation transistors, and what’s required to build multi-die devices.
  • Integration challenges for RISC-V: Modifying the source code allows for democratization of design, but it adds some hurdles for design teams (video).

Demand for high-end AI servers is driven by four American companies, which will account for 60% of global demand in 2024, according to TrendForce. NVIDIA is projected to continue leading the market, with AMD closing the gap due to its lower-cost model.

The EU consortium PREVAIL is accepting design proposals as it seeks to develop next-gen edge-AI technologies. Anchors include CEA-Leti, Fraunhofer-Gesellschaft, imec, and VTT, which will use their 300mm fabrication, design, and test facilities to validate prototypes.

Siemens joined an initiative to expand educational opportunities in the semiconductor space around the world. The Semiconductor Education Alliance was launched by Arm in 2023 and focuses on helping teach skills in IC design and EDA.

Q-CTRL announced partnerships with six firms that it says will expand access to its performance-management software and quantum technologies. Wolfram, Aqarios, and qBraid will integrate Q-CTRL’s Fire Opal technology into their products, while Qblox, Keysight, and Quantware will utilize Q-CTRL’s Boulder Opal hardware system.

NTT, Red Hat, NVIDIA, and Fujitsu teamed up to provide data pipeline acceleration and container orchestration technologies targeted at real-time AI analysis of massive data sets at the edge.

Manufacturing and Test

The U.S. Department of Energy (DOE)’s Office of Electricity launched the American-Made Silicon Carbide (SiC)  Packaging Prize. This $2.25 million contest invites competitors to propose, design, build, and test state-of-the-art SiC semiconductor packaging prototypes.

Applied Materials introduced products and solutions for patterning issues in the “angstrom era,” including line edge roughness, tip-to-tip spacing limitations, bridge defects, and edge placement errors.

imec reported progress made in EUV processes, masks and metrology in preparation for high-NA EUV. It also identified advanced node lithography and etch related processes that contribute the most to direct emissions of CO2, along with proposed solutions.

proteanTecs will participate in the Arm Total Design ecosystem, which now includes more than 20 companies united around a charter to accelerate and simplify the development of custom SoCs based on Arm Neoverse compute subsystems.

NikkeiAsia took an in-depth look at Japan’s semiconductor ecosystem and concluded it is ripe for revival with investments from TSMC, Samsung, and Micron, among others. TrendForce came to a similar conclusion, pointing to the fast pace of Japan’s resurgence, including the opening of TSMC’s fab.

FormFactor closed its sale of its Suzhou and Shanghai companies to Grand Junction Semiconductor for $25M in cash.

The eBeam Initiative celebrated its 15th anniversary and welcomed a new member, FUJIFILM. The group also uncorked its fourth survey of its members' use of deep learning in the photomask-to-wafer manufacturing flow.

Automotive

Apple shuttered its electric car project after 10 years of development. The chaotic effort cost the company billions of dollars, according to The New York Times.

Infineon released new automotive programmable SoCs with fifth-gen human machine interface (HMI) technology, offering improved sensitivity in three packages. The MCU offers up to 84 GPIOs and 384 KB of flash memory. The company also released automotive and industrial-grade 750V G1 discrete SiC MOSFETs aimed at applications such as EV charging, onboard chargers, DC-DC converters, energy, solid state circuit breakers, and data centers.

Cadence expanded its Tensilica IP portfolio to boost computation for automotive sensor fusion applications. Vision, radar, lidar, and AI processing are combined in a single DSP for multi-modal, sensor-based system designs.

Ansys will continue translating fast computing into fast cars, as the company’s partnership with Oracle Red Bull Racing was renewed. The Formula 1 team uses Ansys technology to improve car aerodynamics and ensure the safety of its vehicles.

Lazer Sport adopted Siemens’ Xcelerator portfolio to connect 3D design with 3D printing for prototyping and digital simulation of its sustainable KinetiCore cycling helmet.

The chair of the U.S. Federal Communications Commission (FCC) suggested automakers that sell internet-connected cars should be subject to a telecommunications law aiming to protect domestic violence survivors, reports CNBC. This is due to emerging cases of stalking through vehicle location tracking technology and remote control of functions like locking doors or honking the horn.

BYD‘s CEO said the company does not plan to enter the U.S. market because it is complicated and electrification has slowed down, reports Yahoo Finance. Meanwhile, the first shipment of BYD vehicles arrived in Europe, according to DW News.

Ascent Solar Technologies' solar module products will fly on NASA's upcoming Lightweight Integrated Solar Array and AnTenna (LISA-T) mission.

Security

Researchers at Texas A&M University and the University of Delaware proposed the first red-team attack on graph neural network (GNN)-based techniques in hardware security.

A panel of four experts discuss mounting concerns over quantum security, auto architectures, and supply chain resiliency.

Synopsys released its ninth annual Open Source Security and Risk Analysis report, finding that 74% of code bases contained high-risk open-source vulnerabilities, up 54% since last year.

President Biden issued an executive order to prevent the large-scale transfer of Americans’ personal data to countries of concern. Types of data include genomic, biometric, personal health, geolocation, financial, and other personally identifiable information, which bad actors can use to track and scam Americans.

The National Institute of Standards and Technology (NIST) released Cybersecurity Framework (CSF) 2.0 to provide a comprehensive view for managing cybersecurity risk.

The EU Agency for Cybersecurity (ENISA) published a study on best practices for cyber crisis management, saying the geopolitical situation continues to impact the cyber threat landscape and planning for threats and incidents is vital for crisis management.

The U.S. Department of Energy (DOE) announced $45 million to protect the energy sector from cyberattacks.

The National Security Agency (NSA), the Federal Bureau of Investigation (FBI), and others published an advisory on Russian cyber actors using compromised routers. Also, the Cybersecurity and Infrastructure Security Agency (CISA), the UK National Cyber Security Centre (NCSC), and partners advised of tactics used by Russian Foreign Intelligence Service cyber actors to gain initial access into a cloud environment.

CISA, the FBI, and the Department of Health and Human Services (HHS) updated an advisory concerning the ALPHV Blackcat ransomware as a service (RaaS), which primarily targets the healthcare sector.

CISA also published a guide to support university cybersecurity clinics and issued other alerts.

Pervasive Computing and AI

Renesas expanded its RZ family of MPUs with a single-chip AI accelerator that offers 10 TOPS per watt power efficiency and delivers AI inference performance of up to 80 TOPS without a cooling fan. The chip is aimed at next-gen robotics with vision AI and real-time control.

Infineon launched dual-phase power modules to help data centers meet the power demands of AI GPU platforms. The company also released a family of solid-state isolators to deliver faster switching with up to 70% lower power dissipation.

Fig. 1: Infineon’s dual-phase power modules. Source: Infineon

Amber Semiconductor announced a reference design for brushless motor applications using its AC to DC conversion semiconductor system to power ST‘s STM32 MCUs.

Micron released its universal flash storage (UFS) 4.0 package at just 9×13 mm, built on 232-layer 3D NAND and offering up to 1 terabyte capacity to enable next-gen phone designs and larger batteries.

LG and Meta teamed up to develop extended reality (XR) products, content, services, and platforms within the virtual space.

Microsoft and Mistral AI partnered to accelerate AI innovation and to develop and deploy Mistral’s next-gen large language models (LLMs).

Microsoft’s vice chair and president announced the company’s AI access principles, governing how it will operate AI datacenter infrastructure and other AI assets around the world.

Singtel and VMware partnered to enable enterprises to manage their connectivity and cloud infrastructure through the Singtel Paragon platform for 5G and edge cloud.

Keysight was selected as the Test Partner for the Deutsche Telekom Satellite NB-IoT Early Adopter Program, providing an end-to-end NB-IoT NTN testbed that allows designers and developers to validate reference designs for solutions using 3GPP Release 17 (Rel-17) NTN standards.

Global server shipments are predicted to increase by 2.05% in 2024, with AI servers accounting for about 12%, reports TrendForce. Also, the smartphone camera lens market is expected to rebound in 2024 with 3.8% growth driven by AI-smartphones, to reach about 4.22 billion units, reports TrendForce.

Yole released a smartphone camera comparison report with a focus on iPhone evolution and analysis of the structure, design, and teardown of each camera module, along with the CIS dimensions, technology node, and manufacturing processes.

Counterpoint released a number of 2023 reports on smartphone shipments by country and operator migrations to 5G.

Events

Find upcoming chip industry events here, including:

Event | Date | Location
International Symposium on FPGAs | Mar 3 – 5 | Monterey, CA
DVCON: Design & Verification | Mar 4 – 7 | San Jose, CA
ISES Japan 2024: International Semiconductor Executive Summit | Mar 5 – 6 | Tokyo, Japan
ISS Industry Strategy Symposium Europe | Mar 6 – 8 | Vienna, Austria
GSA International Semiconductor Conference | Mar 13 – 14 | London
Device Packaging Conference (DPC 2024) | Mar 18 – 21 | Fountain Hills, AZ
GOMACTech | Mar 18 – 21 | Charleston, South Carolina
SNUG Silicon Valley | Mar 20 – 21 | Santa Clara, CA
All Upcoming Events

Upcoming webinars are here, including topics such as digital twins, power challenges in data centers, and designing for 112G interface compliance.

Further Reading and Newsletters

Read the latest special reports and top stories, or check out the latest newsletters:

Systems and Design
Low Power-High Performance
Test, Measurement and Analytics
Manufacturing, Packaging and Materials
Automotive, Security and Pervasive Computing

The post Chip Industry Week In Review appeared first on Semiconductor Engineering.

  • ✇Latest
  • SCOTUS Ponders the Implications of Prosecuting Gun Owners for a Crime Invented by Bureaucrats (Jacob Sullum)

SCOTUS Ponders the Implications of Prosecuting Gun Owners for a Crime Invented by Bureaucrats

29 February 2024, 01:30
gun lying on the floor | WASR, CC BY-SA 3.0

On March 26, 2019, every American who owned a bump stock, a rifle accessory that facilitates rapid firing, was suddenly guilty of a federal felony punishable by up to 10 years in prison. That did not happen because a new law took effect; it happened because federal regulators reinterpreted an existing law to mean something they had long said it did not mean.

On Wednesday, the U.S. Supreme Court considered the question of whether those bureaucrats had the authority to do that. The case, Garland v. Cargill, turns on whether bump stocks are prohibited under the "best reading" of the federal statute covering machine guns. While several justices were clearly inclined to take that view, several others had reservations.

The products targeted by the government are designed to assist bump firing, which involves pushing a rifle forward to activate the trigger by bumping it against a stationary finger, then allowing recoil energy to push the rifle backward, which resets the trigger. As long as the shooter maintains forward pressure and keeps his finger in place, the rifle will fire repeatedly. The "interpretive rule" at issue in this case, which was published in December 2018 and took effect three months later, bans stock replacements that facilitate this technique by allowing the rifle's receiver to slide back and forth.

Officially, the purpose of that rule was merely to "clarify" that bump stocks are illegal. According to the Bureau of Alcohol, Tobacco, Firearms, and Explosives (ATF), they always have been, although no one (including the ATF) realized that until 2018.

Federal law defines a machine gun as a weapon that "automatically" fires "more than one shot" by "a single function of the trigger." The definition also covers parts that are "designed and intended…for use in converting a weapon" into a machine gun.

During Wednesday's oral arguments, Principal Deputy Solicitor General Brian H. Fletcher maintained that a rifle equipped with a bump stock plainly meets the criteria for a machine gun. It "fires more than one shot by a single function of the trigger," he said, because "a function of the trigger happens when some act by the shooter, usually a pull, starts a firing sequence." An ordinary semi-automatic rifle, according to Fletcher, "fires one shot for each function of the trigger because the shooter has to manually pull and release the trigger for every shot." But "a bump stock eliminates those manual movements and allows the shooter to fire many shots with one act, a forward push."

Fletcher argued that a rifle with a bump stock also "fires more than one shot automatically, that is, through a self-regulating mechanism." After "the shooter presses forward to fire the first shot," he said, "the bump stock uses the gun's recoil energy to create a continuous back-and-forth cycle that fires hundreds of shots per minute."

Jonathan F. Mitchell, the attorney representing Michael Cargill, the Texas gun shop owner who challenged the bump stock ban, argued that Fletcher was misapplying both of those criteria. First, he said, a rifle equipped with a bump stock "can fire only one shot per function of the trigger because the trigger must reset after every shot and must function again before another shot can be fired." The trigger "is the device that initiates the firing of the weapon, and the function of the trigger is what that triggering device must do to cause the weapon to fire," he added. "The phrase 'function of the trigger' can refer only to the trigger's function. It has nothing to do with the shooter or what the shooter does to the trigger because the shooter does not have a function."

Second, Mitchell said, a rifle with a bump stock "does not and cannot fire more than one shot automatically by a single function of the trigger because the shooter, in addition to causing the trigger to function, must also undertake additional manual actions to ensure a successful round of bump firing." That process "depends entirely on human effort and exertion," he explained, because "the shooter must continually and repeatedly thrust the forestock of the rifle forward with his non-shooting hand while simultaneously maintaining backward pressure on the weapon with his shooting hand. None of these acts are automated."

Justices Elena Kagan and Ketanji Brown Jackson seemed eager to accept Fletcher's reading of the law, arguing that it is consistent with what Congress was trying to do when it approved the National Firearms Act of 1934, which imposed tax and registration requirements on machine guns. Although bump stocks did not exist at the time, they suggested, the law was meant to cover any firearm that approximated a machine gun's rate of fire.

According to Fletcher, "a traditional machine gun" can "shoot in the range of 700 to 950 bullets a minute," while a semi-automatic rifle with a bump stock can "shoot between 400 and 800 rounds a minute." As he conceded, however, the statute does not refer to rate of fire. "This is not a rate-of-fire statute," he said. "It's a function statute." To ban bump stocks, in other words, the ATF has to show that they satisfy the disputed criteria.

"It seems like, yes, that this is functioning like a machine gun would," Justice Amy Coney Barrett said. "But, you know, looking at that definition, I think the question is, 'Why didn't Congress pass…legislation to make this cover it more clearly?'"

Justice Neil Gorsuch made the same point. "I can certainly understand why these items should be made illegal," he said, "but we're dealing with a statute that was enacted in the 1930s, and through many administrations, the government took the position that these bump stocks are not machine guns." That changed after a gunman murdered 60 people at a Las Vegas country music festival in October 2017, and it turned out that some of his rifles were fitted with bump stocks.

The massacre inspired several bills aimed at banning bump stocks. Noting that "the ATF lacks authority under the law to ban bump-fire stocks," Sen. Dianne Feinstein (D–Calif.) said "legislation is the only answer." President Donald Trump, by contrast, maintained that new legislation was unnecessary. After he instructed the ATF to ban bump stocks by administrative fiat, the agency bent the law to his will. Noting that "the law has not changed," Feinstein warned that the ATF's "about face," which relied partly on "a dubious analysis claiming that bumping the trigger is not the same as pulling it," would invite legal challenges.

Feinstein was right about that, and one of those challenges resulted in the decision that the government is now asking the Supreme Court to overturn. In January 2023, the U.S. Court of Appeals for the 5th Circuit rejected the ATF's redefinition of machine guns.

"A plain reading of the statutory language, paired with close consideration of the mechanics of a semi-automatic firearm, reveals that a bump stock is excluded from the technical definition of 'machinegun' set forth in the Gun Control Act and National Firearms Act," 5th Circuit Judge Jennifer Walker Elrod wrote in the majority opinion. And even if that were not true, Elrod said, "the rule of lenity," which requires construing an ambiguous criminal statute in a defendant's favor, would preclude the government from punishing people for owning bump stocks.

Gorsuch alluded to Feinstein's prescient concerns about the ATF rule's legal vulnerability: "There are a number of members of Congress, including Senator Feinstein, who said that this administrative action forestalled legislation that would have dealt with this topic directly, rather than trying to use a nearly 100-year-old statute in a way that many administrations hadn't anticipated." The ATF's attempt to do that, he said, would "render between a quarter of a million and a half million people federal felons," even though they relied on guidance from "past administrations, Republican and Democrat," that said bump stocks were legal.

Justices Brett Kavanaugh and Samuel Alito also were troubled by that reversal's implications for people who already owned bump stocks. Fletcher tried to assuage those concerns.

"ATF made [it] very clear in enacting this rule that anyone who turned in their bump stock or destroyed it before March of [2019] would not face prosecution," Fletcher said. "As a practical matter," he added, "the statute of limitations for this offense is five years," meaning prosecutions of people who owned bump stocks before the rule took effect will no longer be possible a month from now. "We have not prosecuted those people," he said. "We won't do it. And if we try to do it, I think they would have a good defense based on entrapment by estoppel," which applies when someone follows official advice in trying to comply with the law.

"What is the situation of people who have possessed bump stocks between the time of the ATF's new rule and the present day or between the time of the new rule and the 5th Circuit decision?" Alito asked. "Can they be prosecuted?" Fletcher's answer: "probably yes." That prospect, Alito said, is "disturbing."

Kavanaugh wondered about gun owners who did not destroy or surrender their bump stocks because they did not know about the ATF's rule. "For prosecuting someone now," he asked, "what mens rea showing would the government have to make to convict someone?" Fletcher said the defendant would "have to be aware of the facts" that, according to the ATF's reinterpretation of the law, make bump stocks illegal. "So even if you are not aware of the legal prohibition, you can be convicted?" Kavanaugh asked. "That's right," Fletcher replied.

"That's going to ensnare a lot of people who are not aware of the legal prohibition," Kavanaugh said. "Why not require the government to also prove that the person knew that what they were doing…was illegal?"

Gorsuch mocked Fletcher's apparent assumption that gun owners can be expected to keep abreast of the ATF's edicts. "People will sit down and read the Federal Register?" he said to laughter. "That's what they do in their evening for fun. Gun owners across the country crack it open next to the fire and the dog."

Maybe not, Fletcher admitted, but the publicity surrounding the ban and the legal controversy it provoked probably brought the matter to many people's attention. "I agree not everyone is going to find out about those things," he said, "but we've done everything the government could possibly do to make people aware."

Beyond the unfairness to gun owners who bought products they quite reasonably thought were legal, the ATF's about-face lends credibility to the complaint that its current interpretation of the law is misguided. If the ATF was wrong before, how can we be confident that it is right now?

According to the agency's new understanding of the statute, Mitchell noted, "function of the trigger" hinges on what the shooter is doing. But "function is an intransitive verb," he said. "It can't take an object grammatically. It's impossible. The trigger has to be the subject of function. It can't be the object."

Gorsuch picked up on that point, noting that the government had likened "function of the trigger" to "a stroke of a key or a throw of the dice or a swing of the bat." But "those are all things that people do," he said. Since function is an intransitive verb, "people don't function things. They may pull things, they may throw things, but they don't function things."

Gorsuch noted that the ATF is relying on "a very old statute" designed for "an obvious problem" posed by gangsters like Al Capone armed with machine guns that fired repeatedly "with a single function of the trigger—that is, the thing itself was moved once." Maybe legislators "should have written something better," he said. "One might hope they might write something better in the future. But that's the language we're stuck with."

What about the ATF's claim that a rifle equipped with a bump stock shoots "automatically"? Fletcher conceded that "an expert" can bump-fire a rifle "without any assistive device at all" and that "you can also do it if you have a lot of expertise by hooking your finger into a belt loop or using a rubber band or something else like that to hold your finger in place." But he added that "we don't think those things function automatically because the definition of 'automatically'" entails "a self-regulating mechanism."

As the government sees it, a shooter creates such a mechanism by using a bump stock, notwithstanding the "manual actions" that Mitchell highlighted. "There's nothing automatic about that," Mitchell argued. "The shooter is the one who is pushing. It's human effort, human exertion. Nothing automatic at all about this process."

Barrett asked Fletcher how the ATF would treat an elastic "bump band" marketed as an accessory to facilitate rapid firing. "Why wouldn't that then be a machine gun under the statute?" she wondered. "We think that's still not functioning automatically because that's not a self-regulating mechanism," Fletcher replied.

Mitchell, by contrast, argued that Barrett's hypothetical product and a bump stock are "indistinguishable when it comes to 'automatically.'" Bump firing with either involves "a manual action undertaken entirely by the shooter," he said. "There is no automating device….It is all being done by the shooter."

Justice Sonia Sotomayor, who was sympathetic to Fletcher's argument, nevertheless implied that the legal status of bump stocks might not be as clear as the government suggests. "The back-and-forth here leads me to believe that at best there might be some ambiguity," she said. But if the statute is in fact unclear, the 5th Circuit said, the ambiguity should be resolved in a way that protects gun owners from prosecution for a crime invented by bureaucrats.

The post SCOTUS Ponders the Implications of Prosecuting Gun Owners for a Crime Invented by Bureaucrats appeared first on Reason.com.

  • ✇Slashdot
  • Engineers Use AI To Wrangle Fusion Power For the Grid (BeauHD)

Engineers Use AI To Wrangle Fusion Power For the Grid

By: BeauHD
22 February 2024, 04:30
An anonymous reader quotes a report from Princeton Engineering: In the blink of an eye, the unruly, superheated plasma that drives a fusion reaction can lose its stability and escape the strong magnetic fields confining it within the donut-shaped fusion reactor. These getaways frequently spell the end of the reaction, posing a core challenge to developing fusion as a non-polluting, virtually limitless energy source. But a Princeton-led team composed of engineers, physicists, and data scientists from the University and the Princeton Plasma Physics Laboratory (PPPL) have harnessed the power of artificial intelligence to predict -- and then avoid -- the formation of a specific plasma problem in real time. In experiments at the DIII-D National Fusion Facility in San Diego, the researchers demonstrated their model, trained only on past experimental data, could forecast potential plasma instabilities known as tearing mode instabilities up to 300 milliseconds in advance. While that leaves no more than enough time for a slow blink in humans, it was plenty of time for the AI controller to change certain operating parameters to avoid what would have developed into a tear within the plasma's magnetic field lines, upsetting its equilibrium and opening the door for a reaction-ending escape. "By learning from past experiments, rather than incorporating information from physics-based models, the AI could develop a final control policy that supported a stable, high-powered plasma regime in real time, at a real reactor," said research leader Egemen Kolemen, associate professor of mechanical and aerospace engineering and the Andlinger Center for Energy and the Environment, as well as staff research physicist at PPPL. The research opens the door for more dynamic control of a fusion reaction than current approaches, and it provides a foundation for using artificial intelligence to solve a broad range of plasma instabilities, which have long been obstacles to achieving a sustained fusion reaction. The team published their findings in the journal Nature.
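In control-system terms, the loop the article describes is simple even though the model behind it is not: read the plasma diagnostics, ask a learned forecaster how likely a tearing mode is within the next few hundred milliseconds, and nudge an operating parameter before the tear forms. Below is a minimal sketch of that structure; every class, method, and threshold name is hypothetical, not the DIII-D or PPPL control code.

```python
# Minimal sketch of a predict-then-avoid control loop. Every name here
# (TearingModeForecaster, PlasmaControl, the thresholds) is hypothetical;
# this illustrates the structure described in the article, nothing more.
import time

RISK_THRESHOLD = 0.5       # tearing-mode risk score above which we intervene
FORECAST_HORIZON_MS = 300  # the look-ahead reported in the article

class TearingModeForecaster:
    """Stand-in for a model trained only on data from past experimental shots."""
    def predict_risk(self, diagnostics: dict) -> float:
        # A real model maps magnetics and profile measurements to a risk score.
        return 0.0

class PlasmaControl:
    """Stand-in for the actuator interface (e.g., heating beam power)."""
    def read_diagnostics(self) -> dict:
        return {}
    def adjust_beam_power(self, delta: float) -> None:
        pass

def control_loop(model: TearingModeForecaster, plant: PlasmaControl,
                 period_s: float = 0.025, steps: int = 400) -> None:
    for _ in range(steps):
        diagnostics = plant.read_diagnostics()
        risk = model.predict_risk(diagnostics)   # forecast ~FORECAST_HORIZON_MS ahead
        if risk > RISK_THRESHOLD:
            # Back off an operating parameter before the tear can form.
            plant.adjust_beam_power(delta=-0.05)
        time.sleep(period_s)

if __name__ == "__main__":
    control_loop(TearingModeForecaster(), PlasmaControl())
```

The hard part, per the researchers, is the forecaster itself, which was trained only on data from past shots rather than on physics-based models.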

Read more of this story at Slashdot.

  • ✇IEEE Spectrum
  • A Peek at Intel’s Future Foundry Tech (Samuel K. Moore)

A Peek at Intel’s Future Foundry Tech

21 February 2024, 17:30


In an exclusive interview ahead of an invite-only event today in San Jose, Intel outlined new chip technologies it will offer its foundry customers by sharing a glimpse into its future data-center processors. The advances include more dense logic and a 16-fold increase in the connectivity within 3D-stacked chips, and they will be among the first top-end technologies the company has ever shared with chip architects from other companies.

The new technologies will arrive at the culmination of a years-long transformation for Intel. The processor maker is moving from being a company that produces only its own chips to becoming a foundry, making chips for others and considering its own product teams as just another customer. The San Jose event, IFS Direct Connect, is meant as a sort of coming-out party for the new business model.

Internally, Intel plans to use the combination of technologies in a server CPU code-named Clearwater Forest. The company considers the product, a system-on-a-chip with hundreds of billions of transistors, an example of what other customers of its foundry business will be able to achieve.

“Our objective is to get the compute to the best performance per watt we can achieve” from Clearwater Forest, said Eric Fetzer, director of data center technology and pathfinding at Intel. That means using the company’s most advanced fabrication technology available, Intel 18A.

3D stacking “improves the latency between compute and memory by shortening the hops, while at the same time enabling a larger cache” —Pushkar Ranade

“However, if we apply that technology throughout the entire system, you run into other potential problems,” he added. “Certain parts of the system don’t necessarily scale as well as others. Logic typically scales generation to generation very well with Moore’s Law.” But other features do not. SRAM, a CPU’s cache memory, has been lagging logic, for example. And the I/O circuits that connect a processor to the rest of a computer are even further behind.

Faced with these realities, as all makers of leading-edge processors are now, Intel broke Clearwater Forest’s system down into its core functions, chose the best-fit technology to build each, and stitched them back together using a suite of new technical tricks. The result is a CPU architecture capable of scaling to as many as 300 billion transistors.

In Clearwater Forest, billions of transistors are divided among three different types of silicon ICs, called dies or chiplets, interconnected and packaged together. The heart of the system is as many as 12 processor-core chiplets built using the Intel 18A process. These chiplets are 3D-stacked atop three “base dies” built using Intel 3, the process that makes compute cores for the Sierra Forest CPU, due out this year. Housed on the base die will be the CPU’s main cache memory, voltage regulators, and internal network. “The stacking improves the latency between compute and memory by shortening the hops, while at the same time enabling a larger cache,” says senior principal engineer Pushkar Ranade.

Finally, the CPU’s I/O system will be on two dies built using Intel 7, which in 2025 will be trailing the company’s most advanced process by a full four generations. In fact, the chiplets are basically the same as those going into the Sierra Forest and Granite Rapids CPUs, lessening the development expense.

Here’s a look at the new technologies involved and what they offer:

3D Hybrid Bonding

3D hybrid bonding links compute dies to base dies. Source: Intel

Intel’s current chip-stacking interconnect technology, Foveros, links one die to another using a vastly scaled-down version of how dies have long been connected to their packages: tiny “microbumps” of solder that are briefly melted to join the chips. This lets today’s version of Foveros, which is used in the Meteor Lake CPU, make one connection roughly every 36 micrometers. Clearwater Forest will use new technology, Foveros Direct 3D, which departs from solder-based methods to bring a whopping 16-fold increase in the density of 3D connections.

Called “hybrid bonding,” it’s analogous to welding together the copper pads at the face of two chips. These pads are slightly recessed and surrounded by insulator. The insulator on one chip affixes to the other when they are pressed together. Then the stacked chips are heated, causing the copper to expand across the gap and bind together to form a permanent link. Competitor TSMC uses a version of hybrid bonding in certain AMD CPUs to connect extra cache memory to processor-core chiplets and, in AMD’s newest GPU, to link compute chiplets to the system’s base die.

“The hybrid bond interconnects enable a substantial increase in density” of connections, says Fetzer. “That density is very important for the server market, particularly because the density drives a very low picojoule-per-bit communication.” The energy involved in data crossing from one silicon die to another can easily consume a big chunk of a product’s power budget if the per-bit energy cost is too high. Foveros Direct 3D brings that cost down below 0.05 picojoules per bit, which puts it on the same scale as the energy needed to move bits around within a silicon die.

A lot of that energy savings comes from the data traversing less copper. Say you wanted to connect a 512-wire bus on one die to the same-size bus on another so the two dies can share a coherent set of information. On each chip, these buses might be as narrow as 10–20 wires per micrometer. To get that from one die to the other using today’s 36-micrometer-pitch microbump tech would mean scattering those signals across several hundred square micrometers of silicon on one side and then gathering them across the same area on the other. Charging up all that extra copper and solder “quickly becomes both a latency and a large power problem,” says Fetzer. Hybrid bonding, in contrast, could do the bus-to-bus connection in the same area that a few microbumps would occupy.
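To put rough numbers on those claims, here is a back-of-the-envelope sketch. The 36-micrometer pitch, the 16-fold density gain, and the sub-0.05-picojoule-per-bit energy come from the article; the 1-terabit-per-second traffic figure is an assumed example, not an Intel number.

```python
# Rough arithmetic behind the density and energy figures quoted above.
# Pitch, density gain, and pJ/bit come from the article; the 1 Tb/s
# traffic figure is an assumed example for illustration only.
microbump_pitch_um = 36.0                   # today's Foveros connection pitch
density_gain = 16                           # Foveros Direct 3D, per the article

hybrid_pitch_um = microbump_pitch_um / density_gain ** 0.5
print(hybrid_pitch_um)                      # 9.0 um between connections

# Footprint per connection shrinks with the square of the pitch.
print(microbump_pitch_um ** 2)              # 1296 um^2 per microbump link
print(hybrid_pitch_um ** 2)                 # 81 um^2 per hybrid-bond link

# Energy cost of crossing between dies, below 0.05 pJ/bit per the article.
energy_pj_per_bit = 0.05
assumed_traffic_gbps = 1_000                # hypothetical 1 Tb/s die-to-die bus
link_power_mw = energy_pj_per_bit * assumed_traffic_gbps  # pJ/bit x Gb/s = mW
print(link_power_mw)                        # 50 mW for the whole link
```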

As great as those benefits might be, making the switch to hybrid bonding isn’t easy. To forge hybrid bonds requires linking an already-diced silicon die to one that’s still attached to its wafer. Aligning all the connections properly means the chip must be diced to much greater tolerances than is needed for microbump technologies. Repair and recovery, too, require different technologies. Even the predominant way connections fail is different, says Fetzer. With microbumps, you are more likely to get a short from one bit of solder connecting to a neighbor. But with hybrid bonding, the danger is defects that lead to open connections.

Backside power

One of the main distinctions the company is bringing to chipmaking this year with its Intel 20A process, the one that will precede Intel 18A, is backside power delivery. In processors today, all interconnects, whether they’re carrying power or data, are constructed on the “front side” of the chip, above the silicon substrate. Foveros and other 3D-chip-stacking tech require through-silicon vias, interconnects that drill down through the silicon to make connections from the other side. But back-side power delivery goes much further. It puts all of the power interconnects beneath the silicon, essentially sandwiching the layer containing the transistors between two sets of interconnects.

PowerVia puts the silicon’s power supply network below, leaving more room for data-carrying interconnects above. Source: Intel

This arrangement makes a difference because power interconnects and data interconnects require different features. Power interconnects need to be wide to reduce resistance, while data interconnects should be narrow so they can be densely packed. Intel is set to be the first chipmaker to introduce back-side power delivery in a commercial chip, later this year with the release of the Arrow Lake CPU. Data released last summer by Intel showed that back-side power alone delivered a 6 percent performance boost.

The Intel 18A process technology’s back-side-power-delivery network technology will be fundamentally the same as what’s found in Intel 20A chips. However, it’s being used to greater advantage in Clearwater Forest. The upcoming CPU includes what’s called an “on-die voltage regulator” within the base die. Having the voltage regulation close to the logic it drives means the logic can run faster. The shorter distances let the regulator respond to changes in the demand for current more quickly, while consuming less power.

Because the logic dies use back-side power delivery, the resistance of the connection between the voltage regulator and the die’s logic is that much lower. “The power via technology along with the Foveros stacking gives us a really efficient way to hook it up,” says Fetzer.
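The payoff is plain Ohm’s-law arithmetic: voltage droop is load current times the resistance of the delivery path, so cutting that resistance shrinks the droop the regulator has to fight. A toy comparison follows, in which every current and resistance value is a made-up round number chosen only to show the shape of the effect; Intel has not published these figures.

```python
# Illustrative IR-drop arithmetic for power delivery. The currents and
# resistances below are hypothetical round numbers, not Intel data.
def ir_drop_mv(current_a: float, resistance_uohm: float) -> float:
    """Voltage droop in millivolts for a given load current and path resistance."""
    return current_a * resistance_uohm * 1e-3   # A * uOhm = uV; /1000 -> mV

load_step_a = 100.0             # assumed transient load step on the compute die

front_side_path_uohm = 200.0    # assumed: distant regulator, front-side routing
back_side_path_uohm = 50.0      # assumed: on-die regulator plus back-side rails

print(ir_drop_mv(load_step_a, front_side_path_uohm))   # 20.0 mV droop
print(ir_drop_mv(load_step_a, back_side_path_uohm))    # 5.0 mV droop
```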

RibbonFET, the next generation

In addition to back-side power, the chipmaker is switching to a different transistor architecture with the Intel 20A process: RibbonFET. A form of nanosheet, or gate-all-around, transistor, RibbonFET replaces the FinFET, CMOS’s workhorse transistor since 2011. With Intel 18A, Clearwater Forest’s logic dies will be made with a second generation of RibbonFET process. While the devices themselves aren’t very different from the ones that will emerge from Intel 20A, there’s more flexibility to the design of the devices, says Fetzer.

RibbonFET is Intel’s take on nanowire transistors. Source: Intel

“There’s a broader array of devices to support various foundry applications beyond just what was needed to enable a high-performance CPU,” which was what the Intel 20A process was designed for, he says.

RibbonFET’s nanowires can have different widths depending on the needs of a logic cell. Source: Intel

Some of that variation stems from a degree of flexibility that was lost in the FinFET era. Before FinFETs arrived, transistors in the same process could be made in a range of widths, allowing a more-or-less continuous trade-off between performance—which came with higher current—and efficiency—which required better control over leakage current. Because the main part of a FinFET is a vertical silicon fin of a defined height and width, that trade-off now had to take the form of how many fins a device had. So, with two fins you could double current, but there was no way to increase it by 25 or 50 percent.
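A toy sketch of that constraint, treating drive strength as simply proportional to fin count or ribbon width, which is a first-order simplification of real device behavior:

```python
# Toy illustration of the drive-strength trade-off described above; the unit
# current is arbitrary and drive is treated as linear in width.
UNIT_DRIVE = 1.0  # drive current of one fin, arbitrary units

def finfet_drive(num_fins: int) -> float:
    """FinFET drive strength is quantized: whole fins only."""
    return num_fins * UNIT_DRIVE

def ribbonfet_drive(width_ratio: float) -> float:
    """Nanosheet drive scales roughly with ribbon width, which is continuous."""
    return width_ratio * UNIT_DRIVE

print([finfet_drive(n) for n in (1, 2, 3)])          # only 1.0, 2.0, 3.0, ...
print(ribbonfet_drive(1.25), ribbonfet_drive(1.5))   # the in-between options
                                                     # a FinFET cannot offer
```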

With nanosheet devices, the ability to vary transistor widths is back. “RibbonFET technology enables different sizes of ribbon within the same technology base,” says Fetzer. “When we go from Intel 20A to Intel 18A, we offer more flexibility in transistor sizing.”

That flexibility means that standard cells, basic logic blocks designers can use to build their systems, can contain transistors with different properties. And that enabled Intel to develop an “enhanced library” that includes standard cells that are smaller, better performing, or more efficient than those of the Intel 20A process.

2nd generation EMIB

In Clearwater Forest, the dies that handle input and output connect horizontally to the base dies—the ones with the cache memory and network—using the second generation of Intel’s EMIB. EMIB is a small piece of silicon containing a dense set of interconnects and microbumps designed to connect one die to another in the same plane. The silicon is embedded in the package itself to form a bridge between dies.

Dense 2D connections are formed by a small sliver of silicon called EMIB, which is embedded in the package substrate. Source: Intel

The technology has been in commercial use in Intel CPUs since Sapphire Rapids was released in 2023. It’s meant as a less costly alternative to putting all the dies on a silicon interposer, a slice of silicon patterned with interconnects that is large enough for all of the system’s dies to sit on. Apart from the cost of the material, silicon interposers can be expensive to build, because they are usually several times larger than what standard silicon processes are designed to make.

The second generation of EMIB debuts this year with the Granite Rapids CPU, and it involves shrinking the pitch of microbump connections from 55 micrometers to 45 micrometers as well as boosting the density of the wires. The main challenge with such connections is that the package and the silicon expand at different rates when they heat up. This phenomenon could lead to warpage that breaks connections.

What’s more, in the case of Clearwater Forest “there were also some unique challenges, because we’re connecting EMIB on a regular die to EMIB on a Foveros Direct 3D base die and a stack,” says Fetzer. This situation, recently rechristened EMIB 3.5 technology (formerly called co-EMIB), requires special steps to ensure that the stresses and strains involved are compatible with the silicon in the Foveros stack, which is thinner than ordinary chips, he says.

For more, see Intel’s whitepaper on their foundry tech.

  • ✇IEEE Spectrum
  • Momentary Fusion Breakthroughs Face Hard Reality (Edd Gent)

Momentary Fusion Breakthroughs Face Hard Reality

By: Edd Gent
6 February 2024, 22:43


The dream of fusion power inched closer to reality in December 2022, when researchers at Lawrence Livermore National Laboratory (LLNL) revealed that a fusion reaction had produced more energy than what was required to kick-start it. According to new research, the momentary fusion feat required exquisite choreography and extensive preparations, whose high degree of difficulty reveals a long road ahead before anyone dares hope a practicable power source could be at hand.

The groundbreaking result was achieved at the California lab’s National Ignition Facility (NIF), which uses an array of 192 high-power lasers to blast tiny pellets of deuterium and tritium fuel in a process known as inertial confinement fusion. This causes the fuel to implode, smashing its atoms together and generating higher temperatures and pressures than are found at the center of the sun. The atoms then fuse together, releasing huge amounts of energy.

“It showed there’s nothing fundamentally limiting us from being able to harness fusion in the laboratory.” —Annie Kritcher, Lawrence Livermore National Laboratory

The facility has been running since 2011, and for a long time the amount of energy produced by these reactions was significantly less than the amount of laser energy pumped into the fuel. But on 5 December 2022, researchers at NIF announced that they had finally achieved breakeven by generating about 1.5 times as much energy as was required to start the fusion reaction.

A new paper published yesterday in Physical Review Letters confirms the team’s claims and details the complex engineering required to make it possible. While the results underscore the considerable work ahead, Annie Kritcher, a physicist at LLNL who led design of the experiment, says it still signals a major milestone in fusion science. “It showed there’s nothing fundamentally limiting us from being able to harness fusion in the laboratory,” she says.

While the experiment was characterized as a breakthrough, Kritcher says it was actually the result of painstaking incremental improvements to the facility’s equipment and processes. In particular, the team has spent years perfecting the design of the fuel pellet and the cylindrical gold container that houses it, known as a “hohlraum”.

Why is fusion so hard?

When lasers hit the outside of this capsule, their energy is converted into X-rays that then blast the fuel pellet, which consists of a diamond outer shell coated on the inside with deuterium and tritium fuel. It’s crucial that the hohlraum is as symmetrical as possible, says Kritcher, so it distributes X-rays evenly across the pellet. This ensures the fuel is compressed equally from all sides, allowing it to reach the temperatures and pressures required for fusion. “If you don’t do that, you can basically imagine your plasmas squirting out in one direction, and you can’t squeeze it and heat it enough,” she says.

The team has since carried out six more experiments—two that have generated roughly the same amount of energy as was put in and four that significantly exceeded it.

Carefully tailoring the laser beams is also important, Kritcher says, because laser light can scatter off the hohlraum, reducing efficiency and potentially damaging laser optics. In addition, as soon as the laser starts to hit the capsule, it starts giving off a plume of plasma that interferes with the beam. “It’s a race against time,” says Kritcher. “We’re trying to get the laser pulse in there before this happens, because then you can’t get the laser energy to go where you want it to go.”

The design process is slow going, because the facility is capable of carrying out only a few shots a year, limiting the team’s ability to iterate. And predicting how those changes will pan out ahead of time is challenging because of our poor understanding of the extreme physics at play. “We’re blasting a tiny target with the biggest laser in the world, and a whole lot of crap is flying all over the place,” says Kritcher. “And we’re trying to control that to very, very precise levels.”

Nonetheless, by analyzing the results of previous experiments and using computer modeling, the team was able to crack the problem. They worked out that using a slightly higher power laser coupled with a thicker diamond shell around the fuel pellet could overcome the destabilizing effects of imperfections on the pellet’s surface. Moreover, they found these modifications could also help confine the fusion reaction for long enough for it to become self-sustaining. The resulting experiment ended up producing 3.15 megajoules, considerably more than the 2.05 MJ produced by the lasers.

Since then, the team has carried out six more experiments—two that have generated roughly the same amount of energy as was put in and four that significantly exceeded it. Consistently achieving breakeven is a significant feat, says Kritcher. However, she adds that the significant variability in the amount of energy produced remains something the researchers need to address.

This kind of inconsistency is unsurprising, though, says Saskia Mordijck, an associate professor of physics at the College of William & Mary in Virginia. The amount of energy generated is strongly linked to how self-sustaining the reactions are, which can be impacted by very small changes in the setup, she says. She compares the challenge to landing on the moon—we know how to do it, but it’s such an enormous technical challenge that there’s no guarantee you’ll stick the landing.

Relatedly, researchers from the University of Rochester’s Laboratory for Laser Energetics today reported in the journal Nature Physics that they have developed an inertial confinement fusion system that’s one-hundredth the size of NIF’s. Their 28-kilojoule laser system, the team noted, can at least yield more fusion energy than what is contained in the central plasma—an accomplishment that’s on the road toward NIF’s success, but still a distance away. They’re calling what they’ve developed a “spark plug” toward more energetic reactions.

Both NIF’s and LLE’s newly reported results represent steps along a development path—where in both cases that path remains long and challenging if inertial confinement fusion is to ever become more than a research curiosity, though.

Plenty of other obstacles remain beyond those noted above, too. Current calculations compare the energy generated against the NIF laser’s output, but that brushes over the fact that the lasers draw more than 100 times as much power from the grid as any fusion reaction yields. That means either energy gains or laser efficiency would need to improve by two orders of magnitude to break even in any practical sense. The NIF’s fuel pellets are also extremely expensive, says Kritcher, each one coming in at an estimated $100,000. Then, producing a reasonable amount of power would mean dramatically increasing the frequency of NIF’s shots—a feat barely on the horizon for a reactor that requires months to load up the next nanosecond-long burst.
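The arithmetic behind those caveats is short. In the sketch below, the 2.05 MJ and 3.15 MJ figures come from the article, while the wall-plug energy is simply the “more than 100 times” ratio quoted above rather than a measured value.

```python
# Scientific breakeven vs. practical breakeven, using the article's figures.
# The wall-plug energy is just the "more than 100 times" ratio quoted above,
# not a measured value.
laser_energy_mj = 2.05                  # energy the lasers delivered to the target
fusion_yield_mj = 3.15                  # energy the reaction released

target_gain = fusion_yield_mj / laser_energy_mj
print(round(target_gain, 2))            # ~1.54: the reported breakeven shot

wall_plug_energy_mj = 100 * fusion_yield_mj     # lasers draw >100x the yield
engineering_gain = fusion_yield_mj / wall_plug_energy_mj
print(engineering_gain)                 # 0.01: two orders of magnitude short
```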

“Those are the biggest challenges,” Kritcher says. “But I think if we overcome those, it’s really not that hard at that point.”


UPDATE: 8 Feb. 2024: The story was corrected to attribute the final quote to Annie Kritcher, not Saskia Mordijck, as the story originally stated.
6 Feb. 2024 6 p.m. ET: The story was updated to include news of the University of Rochester’s Laboratory for Laser Energetics new research findings.
