It’s a pretty sure bet that you couldn’t get through a typical day without the direct support of dozens of electric motors. They’re in all of your appliances not powered by a hand crank, in the climate-control systems that keep you comfortable, and in the pumps, fans, and window controls of your car. And although there are many different kinds of electric motors, every single one of them, from the 200-kilowatt traction motor in your electric vehicle to the stepper motor in your quartz wristwatch, exploits the exact same physical phenomenon: electromagnetism.
For decades, however, engineers have been tantalized by the virtues of motors based on an entirely different principle: electrostatics. In some applications, these motors could offer an overall boost in efficiency ranging from 30 percent to close to 100 percent, according to experiment-based analysis. And, perhaps even better, they would use only cheap, plentiful materials, rather than the rare-earth elements, special steel alloys, and copious quantities of copper found in conventional motors.
“Electrification has its sustainability challenges,” notes Daniel Ludois, a professor of electrical engineering at the University of Wisconsin in Madison. But “an electrostatic motor doesn’t need windings, doesn’t need magnets, and it doesn’t need any of the critical materials that a conventional machine needs.”
Such advantages prompted Ludois to cofound a company, C-Motive Technologies, to build macro-scale electrostatic motors. “We make our machines out of aluminum and plastic or fiberglass,” he says. Their current prototype is capable of delivering torque as high as 18 newton meters and power at 360 watts (0.5 horsepower)—characteristics they claim are “the highest torque and power measurements for any rotating electrostatic machine.”
The results are reported in a paper, “Synchronous Electrostatic Machines for Direct Drive Industrial Applications,” to be presented at the 2024 IEEE Energy Conversion Congress and Exposition, which will be held from 20 to 24 October in Phoenix, Ariz. In the paper, Ludois and four colleagues describe an electrostatic machine they built, which they say is the first such machine capable of “driving a load performing industrial work, in this case, a constant-pressure pump system.”
Making Electrostatic Motors Bigger
The machine, which is hundreds of times more powerful than any previous electrostatic motor, is “competitive with or superior to air-cooled magnetic machinery at the fractional [horsepower] scale,” the authors add. The global market for fractional horsepower motors is more than US $8.7 billion, according to consultancy Business Research Insights.
C-Motive’s 360-watt motor has a half dozen each of rotors and stators, shown in yellow in this cutaway illustration. C-Motive Technologies
Achieving macro scale wasn’t easy. Electrostatic motors have been available for years, but until now they have been tiny units with power output measured in milliwatts. “Electrostatic motors are amazing once you get below about the millimeter scale, and they get better and better as they get smaller and smaller,” says Philip Krein, a professor of electrical engineering at the University of Illinois Urbana-Champaign. “There’s a crossover at which they are better than magnetic motors.” (Krein does not have any financial connection to C-Motive.)
For larger motors, however, the opposite is true. “At macro scale, electromagnetism wins, is the textbook answer,” notes Ludois. “Well, we’ve decided to challenge that wisdom.”
For this quest he and his team found inspiration in a lesser-known accomplishment of one of the United States’ founding fathers. “The fact is that Benjamin Franklin built and demonstrated a macroscopic electrostatic motor in 1747,” says Krein. “He actually used the motor as a rotisserie to grill a turkey on a riverbank in Philadelphia” (a fact unearthed by the late historian I. Bernard Cohen for his 1990 book Benjamin Franklin’s Science).
Krein explains that the fundamental challenge in attempting to scale electrostatic motors to the macro world is energy density. “The energy density you can get in air at a reasonable scale with an electric-field system is much, much lower—many orders of magnitude lower—than the density you can get with an electromagnetic system.” Here the phrase “in air” refers to the volume within the motor, called the “air gap,” where the machine’s fields (magnetic for the conventional motor, electric for the electrostatic one) are deployed. It straddles the machine’s key components: the rotor and the stator.
Let’s unpack that. A conventional electric motor works because a rotating magnetic field, set up in a fixed structure called a stator, engages with the magnetic field of another structure called a rotor, causing that rotor to spin. The force involved is called the Lorentz force. But what makes an electrostatic machine go ‘round is an entirely different force, called the Coulomb force. This is the attractive or repulsive physical force between opposite or like electrical charges.
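Krein’s point about energy density can be made concrete with the textbook formulas for field energy: a magnetic field stores B²/2μ₀ per unit volume, while an electric field stores ε₀E²/2. The field values below are typical figures for an iron-core machine and for air at its breakdown limit; they are illustrative, not numbers from the paper.

```python
# Compare achievable air-gap energy densities: magnetic vs. electrostatic.
# Field values are typical textbook limits, not figures from C-Motive's paper.
import math

MU_0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m
EPS_0 = 8.854e-12           # vacuum permittivity, F/m

B = 1.0      # tesla -- a routine air-gap flux density for an iron-core machine
E = 3.0e6    # V/m  -- approximate dielectric breakdown field of dry air

u_magnetic = B**2 / (2 * MU_0)     # magnetic energy density, J/m^3
u_electric = 0.5 * EPS_0 * E**2    # electrostatic energy density, J/m^3

print(f"magnetic energy density:      {u_magnetic:,.0f} J/m^3")
print(f"electrostatic energy density: {u_electric:,.1f} J/m^3")
print(f"ratio: {u_magnetic / u_electric:,.0f}x")
```

With these representative numbers the magnetic field wins by roughly four orders of magnitude, which is exactly the gap a better dielectric must help close.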
Overcoming the Air Gap Problem
C-Motive’s motor uses nonconductive rotor and stator disks on which have been deposited many thin, closely spaced conductors radiating outward from the disk’s center, like spokes in a bicycle wheel. Precisely timed electrostatic charges applied to these “spokes” create two waves of voltage, one in the stator and another in the rotor. The phase difference between the rotor and stator waves is timed and controlled to maximize the torque in the rotor caused by this sequence of attraction and repulsion among the spokes. To further wring as much torque as possible, the machine has half a dozen each of rotors and stators, alternating and stacked like compact discs on a spindle.
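The torque mechanism can be sketched with the standard variable-capacitor relation T = ½V²·dC/dθ: each rotor-stator pair is a capacitor whose capacitance varies with rotor angle, and the drive electronics hold the rotor near the angle of steepest capacitance change. The sinusoidal capacitance profile and every number below except the 2,000-volt drive voltage are illustrative assumptions, not C-Motive’s actual geometry.

```python
# Toy model of electrostatic torque: a rotor-stator pair acts as a variable
# capacitor, and the torque it produces is T = 0.5 * V^2 * dC/dtheta.
# The capacitance profile and pole count are hypothetical illustrations.
import math

N_POLES = 24     # hypothetical number of electrode "spokes" per disk
C_MEAN = 2e-9    # F, hypothetical mean rotor-stator capacitance
C_AMP = 1e-9     # F, hypothetical capacitance variation amplitude
V = 2000.0       # volts, the drive voltage cited in the article

def capacitance(theta):
    """Capacitance as a function of rotor angle (radians)."""
    return C_MEAN + C_AMP * math.cos(N_POLES * theta)

def torque(theta):
    """Analytic torque: T = 0.5 * V^2 * dC/dtheta."""
    dC_dtheta = -C_AMP * N_POLES * math.sin(N_POLES * theta)
    return 0.5 * V**2 * dC_dtheta

# Peak torque occurs where |dC/dtheta| is largest, i.e. sin(N*theta) = -1.
peak = 0.5 * V**2 * C_AMP * N_POLES
print(f"peak torque per plate pair: {peak:.3f} N*m")
```

Individual plate pairs contribute modest torque, which is why the design stacks many electrode pairs on a dozen interleaved disks and phases the voltage waves to keep every pair near its peak.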
The 360-watt motor is hundreds of times more powerful than previous electrostatic motors, which have power output generally measured in milliwatts.C-Motive Technologies
The machine would be feeble if the dielectric between the charges were air. As a dielectric, air has low permittivity, meaning that an electric field in air cannot store much energy. Air also has a relatively low breakdown field strength, meaning that it can support only a fairly weak electric field before it breaks down and conducts current in a blazing arc. A dielectric with high permittivity, by contrast, concentrates the electric field between oppositely charged electrodes, enabling greater energy to be stored in the space between them. So one of the team’s greatest challenges was producing a dielectric fluid that had a much higher permittivity and breakdown field strength than air and was also environmentally friendly and nontoxic. To minimize friction, the fluid also had to have very low viscosity, because the rotors would be spinning in it. After screening hundreds of candidates over several years, the C-Motive team succeeded in producing an organic liquid dielectric with low viscosity and a relative permittivity in the low 20s. For comparison, the relative permittivity of air is 1.
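The payoff of the custom fluid follows directly from the capacitor energy-density formula u = ½ε₀εᵣE². The relative permittivity of about 22 comes from the article; the fluid’s breakdown field is not given there, so the five-times-air figure below is a pure assumption chosen to show how the two properties compound.

```python
# Energy density stored in a dielectric: u = 0.5 * eps0 * eps_r * E^2.
# The relative permittivity (~22) is from the article; the fluid's breakdown
# field is NOT given there, so the 5x-air figure below is an assumption.
EPS_0 = 8.854e-12   # vacuum permittivity, F/m

def energy_density(eps_r, e_field):
    """Stored electric energy per unit volume, J/m^3."""
    return 0.5 * EPS_0 * eps_r * e_field**2

E_AIR = 3.0e6                              # V/m, approximate breakdown of dry air
u_air = energy_density(1.0, E_AIR)         # plain air gap
u_fluid = energy_density(22.0, 5 * E_AIR)  # hypothetical 5x breakdown field

print(f"air:   {u_air:.1f} J/m^3")
print(f"fluid: {u_fluid:.0f} J/m^3  ({u_fluid / u_air:.0f}x air)")
```

Because energy density grows linearly with permittivity but with the square of the field, even a modest gain in breakdown strength matters more than the permittivity itself.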
Another challenge was supplying the 2,000 volts their machine needs to operate. High voltages are necessary to create the intense electric fields between the rotors and stators. To precisely control these fields, C-Motive was able to take advantage of the availability of inexpensive and stupendously capable power electronics, according to Ludois. For their most recent motor, they developed a drive system based on readily available 4.5-kilovolt insulated-gate bipolar transistors, but the rate of advancement in power semiconductors means they have many attractive choices here, and will have even more in the near future.
Ludois reports that C-Motive is now testing a 750-watt (1 hp) motor in applications with potential customers. Their next machines will be in the range of 750 to 3,750 watts (1 to 5 hp), he adds. These will be powerful enough for an expanded range of applications in industrial automation, manufacturing, and heating, ventilating, and air conditioning.
It’s been a gratifying ride for Ludois. “For me, a point of creative pride is that my team and I are working on something radically different that, I hope, over the long term, will open up other avenues for other folks to contribute.”
Various scenarios for getting to net-zero carbon emissions from power generation by 2050 hinge on the success of some hugely ambitious initiatives in renewable energy, grid enhancements, and other areas. Perhaps none of these are more audacious than an envisioned renaissance of nuclear power, driven by advanced-technology reactors that are smaller than traditional nuclear power reactors.
What many of these reactors have in common is that they would use a kind of fuel called high-assay low-enriched uranium (HALEU). Its composition varies, but for power generation, a typical mix contains slightly less than 20 percent by mass of the highly fissionable isotope uranium-235 (U-235). That’s in contrast to traditional reactor fuels, which range from 3 percent to 5 percent U-235 by mass, and natural uranium, which is just 0.7 percent U-235.
Now, though, a paper in Science magazine has identified a significant wrinkle in this nuclear option: HALEU fuel can theoretically be used to make a fission bomb—a fact that the paper’s authors use to argue for the tightening of regulations governing access to, and transportation of, the material. Among the five authors of the paper, which is titled “The Weapons Potential of High-Assay Low-Enriched Uranium,” is IEEE Life Fellow Richard L. Garwin. Garwin was the key figure behind the design of the thermonuclear bomb, which was tested in 1952.
The Science paper is not the first to argue for a reevaluation of the nuclear proliferation risks of HALEU fuel. A report published last year by the National Academies, “Merits and Viability of Different Nuclear Fuel Cycles and Technology Options and the Waste Aspects of Advanced Nuclear Reactors,” devoted most of a chapter to the risks of HALEU fuel. It reached similar technical conclusions to those of the Science article, but did not go as far in its recommendations regarding the need to tighten regulations.
Why is HALEU fuel concerning?
Conventional wisdom had it that U-235 concentrations below 20 percent were not usable for a bomb. But “we found this testimony in 1984 from the chief of the theoretical division of Los Alamos, who basically confirmed that, yes, indeed, it is usable down to 10 percent,” says R. Scott Kemp of MIT, another of the paper’s authors. “So you don’t even need centrifuges, and that’s what really is important here.”
Centrifuges arranged very painstakingly into cascades are the standard means of enriching uranium to bomb-grade material, and they require scarce and costly resources, expertise, and materials to operate. In fact, the difficulty of building and operating such cascades on an industrial scale has for decades served as an effective barrier to would-be builders of nuclear weapons. So any route to a nuclear weapon that bypassed enrichment would offer an undoubtedly easier alternative. The question now is, how much easier?
“It’s not a very good bomb, but it could explode and wreak all kinds of havoc.”
The difficulty of building a bomb based on HALEU is a murky subject, because many of the specific techniques and practices of nuclear weapons design are classified. But basic information about the standard type of fission weapon, known as an implosion device, has long been known publicly. (The first two implosion devices were detonated in 1945, one in the Trinity test and the other over Nagasaki, Japan.) An implosion device is based on a hollow sphere of nuclear material. In a modern weapon this material is typically plutonium-239, but it can also be a mixture of uranium isotopes that includes a percentage of U-235 ranging from 100 percent all the way down to, apparently, around 10 percent.

The sphere is surrounded by shaped chemical explosives that are exploded simultaneously, creating a shockwave that physically compresses the sphere, reducing the distance between its atoms and increasing the likelihood that neutrons emitted from their nuclei will encounter other nuclei and split them, releasing more neutrons. As the sphere shrinks it goes from a subcritical state, in which that chain reaction of neutrons splitting nuclei and creating other neutrons cannot sustain itself, to a critical state, in which it can. As the sphere continues to compress it achieves supercriticality, after which an injected flood of neutrons triggers the superfast, runaway chain reaction that is a fission explosion. All this happens in less than a millisecond.
The authors of the Science paper had to walk a fine line, avoiding revealing too many details of weapons design while still clearly indicating the scope of the challenge of building a bomb based on HALEU. They acknowledge that the amount of HALEU material needed for a 15-kiloton bomb—roughly as powerful as the one that destroyed Hiroshima during the Second World War—would be relatively large: in the hundreds of kilograms, but not more than 1,000 kg. For comparison, about 8 kg of Pu-239 is sufficient to build a fission bomb of modest sophistication. Any HALEU bomb would be commensurately larger, but still small enough to be deliverable “using an airplane, a delivery van, or a boat sailed into a city harbor,” the authors wrote.
They also acknowledged a key technical challenge for any would-be weapons makers seeking to use HALEU to make a bomb: preinitiation. The large amount of U-238 in the material would produce many neutrons, which would likely result in a nuclear chain reaction occurring too soon. That would sap energy from the subsequent triggered runaway chain reaction, limiting the explosive yield and producing what’s known in the nuclear bomb business as a “fizzle.” However, “although preinitiation may have a bigger impact on some designs than others, even those that are sensitive to it could still produce devastating explosive power,” the authors conclude.
In other words, “it’s not a very good bomb, but it could explode and wreak all kinds of havoc,” says John Lee, professor emeritus of nuclear engineering at the University of Michigan. Lee was a contributor to the 2023 National Academies report that also considered risks of HALEU fuel and made policy recommendations similar to those of the Science paper.
Critics of that paper argue that the challenges of building a HALEU bomb, while not insurmountable, would stymie a nonstate group. And a national weapons program, which would likely have the resources to surmount them, would not be interested in such a bomb, because of its limitations and relative unreliability.
“That’s why the IAEA [International Atomic Energy Agency], in their wisdom, said, ‘This is not a direct-use material,’” says Steven Nesbit, a nuclear-engineering consultant and past president of the American Nuclear Society, a professional organization. “It’s just not a realistic pathway to a nuclear weapon.”
The Science authors conclude their paper by recommending that the U.S. Congress direct the DOE’s National Nuclear Security Administration (NNSA) to conduct a “fresh review” of the risks posed by HALEU fuel. In response to an email inquiry from IEEE Spectrum, an NNSA spokesman, Craig Branson, replied: “To meet net-zero emissions goals, the United States has prioritized the design, development, and deployment of advanced nuclear technologies, including advanced and small modular reactors. Many will rely on HALEU to achieve smaller designs, longer operating cycles, and increased efficiencies over current technologies. They will be essential to our efforts to decarbonize while meeting growing energy demand. As these technologies move forward, the Department of Energy and NNSA have programs to work with willing industrial partners to assess the risk and enhance the safety, security, and safeguards of their designs.”
The Science authors also called on the U.S. Nuclear Regulatory Commission (NRC) and the IAEA to change the way they categorize HALEU fuel. Under the NRC’s current categorization, even large quantities of HALEU are now considered category II, which means that security measures focus on the early detection of theft. The authors want weapons-relevant quantities of HALEU reclassified as category I, the same as for quantities of weapons-grade plutonium or highly enriched uranium sufficient to make a bomb. Category I would require much tighter security, focusing on the prevention of theft.
Nesbit scoffs at the proposal, citing the difficulties of heisting perhaps a metric tonne of nuclear material. “Blindly applying all of the baggage that goes with protecting nuclear weapons to something like this is just way overboard,” he says.
But Lee, who performed experiments with HALEU fuel in the 1980s, agrees with his colleagues. “Dick Garwin and Frank von Hippel [and the other authors of the Science paper] have raised some proper questions,” he declares. “They’re saying the NRC should take more precautions. I’m all for that.”
But such conveniences barely hint at the massive, sweeping changes to employment predicted by some analysts. And already, in ways large and small, striking and subtle, the tech world’s notables are grappling with changes, both real and envisioned, wrought by the onset of generative AI. To get a better idea of how some of them view the future of generative AI, IEEE Spectrum asked three luminaries—an academic leader, a regulator, and a semiconductor industry executive—about how generative AI has begun affecting their work. The three, Andrea Goldsmith, Juraj Čorba, and Samuel Naffziger, agreed to speak with Spectrum at the 2024 IEEE VIC Summit & Honors Ceremony Gala, held in May in Boston.
Juraj Čorba, senior expert on digital regulation and governance, Slovak Ministry of Investments, Regional Development
Samuel Naffziger, senior vice president and a corporate fellow at Advanced Micro Devices
Andrea Goldsmith
Andrea Goldsmith is dean of engineering at Princeton University.
There must be tremendous pressure now to throw a lot of resources into large language models. How do you deal with that pressure? How do you navigate this transition to this new phase of AI?
Andrea Goldsmith: Universities generally are going to be very challenged, especially universities that don’t have the resources of a place like Princeton or MIT or Stanford or the other Ivy League schools. In order to do research on large language models, you need brilliant people, which all universities have. But you also need compute power and you need data. And the compute power is expensive, and the data generally sits in these large companies, not within universities.
So I think universities need to be more creative. We at Princeton have invested a lot of money in the computational resources for our researchers to be able to do—well, not large language models, because you can’t afford it. To do a large language model… look at OpenAI or Google or Meta. They’re spending hundreds of millions of dollars on compute power, if not more. Universities can’t do that.
But we can be more nimble and creative. What can we do with language models, maybe not large language models but with smaller language models, to advance the state of the art in different domains? Maybe it’s vertical domains of using, for example, large language models for better prognosis of disease, or for prediction of cellular channel changes, or in materials science to decide what’s the best path to pursue a particular new material that you want to innovate on. So universities need to figure out how to take the resources that we have to innovate using AI technology.
We also need to think about new models. And the government can also play a role here. The [U.S.] government has this new initiative, NAIRR, or National Artificial Intelligence Research Resource, where they’re going to put up compute power and data and experts for educators to use—researchers and educators.
That could be a game-changer because it’s not just each university investing their own resources or faculty having to write grants, which are never going to pay for the compute power they need. It’s the government pulling together resources and making them available to academic researchers. So it’s an exciting time, where we need to think differently about research—meaning universities need to think differently. Companies need to think differently about how to bring in academic researchers, how to open up their compute resources and their data for us to innovate on.
As a dean, you are in a unique position to see which technical areas are really hot, attracting a lot of funding and attention. But how much ability do you have to steer a department and its researchers into specific areas? Of course, I’m thinking about large language models and generative AI. Is deciding on a new area of emphasis or a new initiative a collaborative process?
Goldsmith: Absolutely. I think any academic leader who thinks that their role is to steer their faculty in a particular direction does not have the right perspective on leadership. I describe academic leadership as really about the success of the faculty and students that you’re leading. And when I did my strategic planning for Princeton Engineering in the fall of 2020, everything was shut down. It was the middle of COVID, but I’m an optimist. So I said, “Okay, this isn’t how I expected to start as dean of engineering at Princeton.” But the opportunity to lead engineering in a great liberal arts university that has aspirations to increase the impact of engineering hasn’t changed. So I met with every single faculty member in the School of Engineering, all 150 of them, one-on-one over Zoom.
And the question I asked was, “What do you aspire to? What should we collectively aspire to?” And I took those 150 responses, and I asked all the leaders and the departments and the centers and the institutes, because there already were some initiatives in robotics and bioengineering and in smart cities. And I said, “I want all of you to come up with your own strategic plans. What do you aspire to in these areas? And then let’s get together and create a strategic plan for the School of Engineering.” So that’s what we did. And everything that we’ve accomplished in the last four years that I’ve been dean came out of those discussions, and what it was the faculty and the faculty leaders in the school aspired to.
So we launched a bioengineering institute last summer. We just launched Princeton Robotics. We’ve launched some things that weren’t in the strategic plan that bubbled up. We launched a center on blockchain technology and its societal implications. We have a quantum initiative. We have an AI initiative using this powerful tool of AI for engineering innovation, not just around large language models, but it’s a tool—how do we use it to advance innovation and engineering? All of these things came from the faculty because, to be a successful academic leader, you have to realize that everything comes from the faculty and the students. You have to harness their enthusiasm, their aspirations, their vision to create a collective vision.
What are the most important organizations and governing bodies when it comes to policy and governance on artificial intelligence in Europe?
Juraj Čorba
Juraj Čorba: Well, there are many. And it also creates a bit of a confusion around the globe—who are the actors in Europe? So it’s always good to clarify. First of all we have the European Union, which is a supranational organization composed of many member states, including my own Slovakia. And it was the European Union that proposed adoption of a horizontal legislation for AI in 2021. It was the initiative of the European Commission, the E.U. institution, which has a legislative initiative in the E.U. And the E.U. AI Act is now finally being adopted. It was already adopted by the European Parliament.
So this started, you said 2021. That’s before ChatGPT and the whole large language model phenomenon really took hold.
Čorba: That was the case. Well, the expert community already knew that something was being cooked in the labs. But, yes, the whole agenda of large models, including large language models, came up only later on, after 2021. So the European Union tried to reflect that. Basically, the initial proposal to regulate AI was based on a blueprint of so-called product safety, which somehow presupposes a certain intended purpose. In other words, the checks and assessments of products are based more or less on the logic of the mass production of the 20th century, on an industrial scale, right? Like when you have products that you can somehow define easily and all of them have a clearly intended purpose. Whereas with these large models, a new paradigm was arguably opened, where they have a general purpose.
So the whole proposal was then rewritten in negotiations between the Council of Ministers, which is one of the legislative bodies, and the European Parliament. And so what we have today is a combination of this old product-safety approach and some novel aspects of regulation specifically designed for what we call general-purpose artificial intelligence systems or models. So that’s the E.U.
By product safety, you mean, if AI-based software is controlling a machine, you need to have physical safety.
Čorba: Exactly. That’s one of the aspects. So that touches upon the tangible products such as vehicles, toys, medical devices, robotic arms, et cetera. So yes. But from the very beginning, the proposal contained a regulation of what the European Commission called stand-alone systems—in other words, software systems that do not necessarily command physical objects. So it was already there from the very beginning, but all of it was based on the assumption that all software has its easily identifiable intended purpose—which is not the case for general-purpose AI.
Also, large language models and generative AI in general brings in this whole other dimension, of propaganda, false information, deepfakes, and so on, which is different from traditional notions of safety in real-time software.
Čorba: Well, this is exactly the aspect that is handled by another European organization, different from the E.U., and that is the Council of Europe. It’s an international organization established after the Second World War for the protection of human rights, for protection of the rule of law, and protection of democracy. So that’s where the Europeans, but also many other states and countries, started to negotiate a first international treaty on AI. For example, the United States has participated in the negotiations, and also Canada, Japan, Australia, and many other countries. And then these particular aspects, which are related to the protection of integrity of elections, rule-of-law principles, protection of fundamental rights or human rights under international law—all these aspects have been dealt with in the context of these negotiations on the first international treaty, which is to be adopted by the Committee of Ministers of the Council of Europe on the 16th and 17th of May. So, pretty soon. And then the first international treaty on AI will be submitted for ratifications.
So prompted largely by the activity in large language models, AI regulation and governance now is a hot topic in the United States, in Europe, and in Asia. But of the three regions, I get the sense that Europe is proceeding most aggressively on this topic of regulating and governing artificial intelligence. Do you agree that Europe is taking a more proactive stance in general than the United States and Asia?
Čorba: I’m not so sure. If you look at the Chinese approach and the way they regulate what we call generative AI, it would appear to me that they also take it very seriously. They take a different approach from the regulatory point of view. But it seems to me that, for instance, China is taking a very focused and careful approach. For the United States, I wouldn’t say that the United States is not taking a careful approach because last year you saw many of the executive orders, or even this year, some of the executive orders issued by President Biden. Of course, this was not a legislative measure, this was a presidential order. But it seems to me that the United States is also trying to address the issue very actively. The United States has also initiated the first resolution of the General Assembly at the U.N. on AI, which was passed just recently. So I wouldn’t say that the E.U. is more aggressive in comparison with Asia or North America, but maybe I would say that the E.U. is the most comprehensive. It looks horizontally across different agendas and it uses binding legislation as a tool, which is not always the case around the world. Many countries simply feel that it’s too early to legislate in a binding way, so they opt for soft measures or guidance, collaboration with private companies, et cetera. Those are the differences that I see.
Do you think you perceive a difference in focus among the three regions? Are there certain aspects that are being more aggressively pursued in the United States than in Europe or vice versa?
Čorba: Certainly the E.U. is very focused on the protection of human rights, the full catalog of human rights, but also, of course, on safety and human health. These are the core goals or values to be protected under the E.U. legislation. As for the United States and for China, I would say that the primary focus in those countries—but this is only my personal impression—is on national and economic security.
Samuel Naffziger
Samuel Naffziger is senior vice president and a corporate fellow at Advanced Micro Devices, where he is responsible for technology strategy and product architectures. Naffziger was instrumental in AMD’s embrace and development of chiplets, which are semiconductor dies that are packaged together into high-performance modules.
To what extent is large language model training starting to influence what you and your colleagues do at AMD?
Samuel Naffziger: Well, there are a couple levels of that. LLMs are impacting the way a lot of us live and work. And we certainly are deploying that very broadly internally for productivity enhancements, for using LLMs to provide starting points for code—simple verbal requests, such as “Give me a Python script to parse this dataset.” And you get a really nice starting point for that code. Saves a ton of time. Writing verification test benches, helping with the physical design layout optimizations. So there’s a lot of productivity aspects.
The other aspect to LLMs is, of course, we are actively involved in designing GPUs [graphics processing units] for LLM training and for LLM inference. And so that’s driving a tremendous amount of workload analysis on the requirements, hardware requirements, and hardware-software codesign, to explore.
So that brings us to your current flagship, the Instinct MI300X, which is actually billed as an AI accelerator. How did the particular demands influence that design? I don’t know when that design started, but the ChatGPT era started about two years ago or so. To what extent did you read the writing on the wall?
Naffziger: So we were just into the MI300—in 2019, we were starting the development. A long time ago. And at that time, our revenue stream from the Zen [an AMD architecture used in a family of processors] renaissance had really just started coming in. So the company was starting to get healthier, but we didn’t have a lot of extra revenue to spend on R&D at the time. So we had to be very prudent with our resources. And we had strategic engagements with the [U.S.] Department of Energy for supercomputer deployments. That was the genesis for our MI line—we were developing it for the supercomputing market. Now, there was a recognition that munching through FP64 COBOL code, or Fortran, isn’t the future, right? [laughs] This machine-learning [ML] thing is really getting some legs.
So we put some of the lower-precision math formats in, like Brain Floating Point 16 at the time, that were going to be important for inference. And the DOE knew that machine learning was going to be an important dimension of supercomputers, not just legacy code. So that’s the way, but we were focused on HPC [high-performance computing]. We had the foresight to understand that ML had real potential. Although certainly no one predicted, I think, the explosion we’ve seen today.
So that’s how it came about. And, just another piece of it: We leveraged our modular chiplet expertise to architect the 300 to support a number of variants from the same silicon components. So the variant targeted to the supercomputer market had CPUs integrated in as chiplets, directly on the silicon module. And then it had six of the GPU chiplets we call XCDs around them. So we had three CPU chiplets and six GPU chiplets. And that provided an amazingly efficient, highly integrated, CPU-plus-GPU design we call MI300A. It’s very compelling for the El Capitan supercomputer that’s being brought up as we speak.
But we also recognize that for the maximum computation for these AI workloads, the CPUs weren’t that beneficial. We wanted more GPUs. For these workloads, it’s all about the math and matrix multiplies. So we were able to just swap out those three CPU chiplets for a couple more XCD GPUs. And so we got eight XCDs in the module, and that’s what we call the MI300X. So we kind of got lucky having the right product at the right time, but there was also a lot of skill involved in that we saw the writing on the wall for where these workloads were going and we provisioned the design to support it.
Earlier you mentioned 3D chiplets. What do you feel is the next natural step in that evolution?
Naffziger: AI has created this bottomless thirst for more compute [power]. And so we are always going to be wanting to cram as many transistors as possible into a module. And the reason that’s beneficial is, these systems deliver AI performance at scale with thousands, tens of thousands, or more, compute devices. They all have to be tightly connected together, with very high bandwidths, and all of that bandwidth requires power, requires very expensive infrastructure. So if a certain level of performance is required—a certain number of petaflops, or exaflops—the strongest lever on the cost and the power consumption is the number of GPUs required to achieve a zettaflop, for instance. And if the GPU is a lot more capable, then all of that system infrastructure collapses down—if you only need half as many GPUs, everything else goes down by half. So there’s a strong economic motivation to achieve very high levels of integration and performance at the device level. And the only way to do that is with chiplets and with 3D stacking. So we’ve already embarked down that path. A lot of tough engineering problems to solve to get there, but that’s going to continue.
And so what’s going to happen? Well, obviously we can add layers, right? We can pack more in. The thermal challenges that come along with that are going to be fun engineering problems that our industry is good at solving.
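Naffziger’s scaling argument can be made concrete with a back-of-envelope calculation. The numbers below are purely illustrative, not AMD figures:

```python
# Toy model of the system-level scaling argument: at a fixed performance
# target, the count of GPUs (and the interconnect, power, and cooling
# infrastructure that scale with it) is set by per-device performance.
# All numbers are illustrative, not AMD figures.
target_petaflops = 1000.0        # i.e., one exaflop
per_gpu_petaflops = 1.0          # baseline device

gpus_baseline = target_petaflops / per_gpu_petaflops
gpus_doubled = target_petaflops / (2 * per_gpu_petaflops)

# Doubling per-device performance halves the GPU count, and roughly
# halves everything in the system that scales with it.
assert gpus_doubled == gpus_baseline / 2
```

This is the sense in which, as he puts it, if you only need half as many GPUs, everything else goes down by half.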
Among the countless challenges of decarbonizing transportation, one of the most compelling involves electric motors. In laboratories all over the world, researchers are now chasing a breakthrough that could kick into high gear the transition to electric transportation: a rugged, compact, powerful electric motor that has high power density and the ability to withstand high temperatures—and that doesn’t have rare-earth permanent magnets.
It’s a huge challenge currently preoccupying some of the best machine designers on the planet. More than a few of them are at ZF Friedrichshafen AG, one of the world’s largest suppliers of parts to the automotive industry. In fact, ZF astounded analysts late last year when it announced that it had built a 220-kilowatt traction motor that used no rare-earth elements. Moreover, the company announced, the new motor had characteristics comparable to those of the rare-earth permanent-magnet synchronous motors that now dominate in electric vehicles. Most EVs have rare-earth-magnet-based motors ranging from 150 to 300 kilowatts, with power densities between 1.1 and 3.0 kilowatts per kilogram. ZF’s rare-earth-free motor, at 220 kW, sits right in the middle of that power range. (The company has not yet revealed its motor’s specific power—its kW/kg rating.)
The ZF machine is a type called a separately-excited (or doubly-excited) synchronous motor. It has electromagnets in both the stator and the rotor, so it does away with the rare-earth permanent magnets used in the rotors of nearly all EV motors on the road today. In a separately-excited synchronous motor, alternating current applied to the stator electromagnets sets up a rotating magnetic field. A separate current applied to the rotor electromagnets energizes them, producing a field that locks on to the rotating stator field, producing torque.
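The torque-producing interaction just described can be sketched with a simplified model. This is a textbook idealization, not ZF’s machine model; the constant and currents are made-up values:

```python
import math

def sync_torque(k: float, i_stator: float, i_rotor: float, delta: float) -> float:
    """Idealized torque of a separately-excited synchronous machine.

    Torque scales with the product of the stator and rotor currents and
    the sine of the load angle delta (the angle by which the rotor field
    lags the rotating stator field). k lumps turns counts and geometry.
    """
    return k * i_stator * i_rotor * math.sin(delta)

# Illustrative values only: doubling the rotor excitation doubles the
# torque at the same load angle, which hints at why a controllable
# rotor field is useful.
t1 = sync_torque(k=0.8, i_stator=100.0, i_rotor=5.0, delta=math.radians(30))
t2 = sync_torque(k=0.8, i_stator=100.0, i_rotor=10.0, delta=math.radians(30))
```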
So far, these machines have not been used much in EVs, because they require a separate system to transfer power to the spinning rotor electromagnets, and there’s no ideal way to do that. Many such motors use slip rings and brushes to make electrical contact with the spinning rotor, but the brushes produce dust and eventually wear out. Alternatively, the power can be transferred inductively, but in that case the apparatus is typically cumbersome, making the unit complicated, large, and heavy.
Now, though, ZF says it has solved these problems with its experimental motor, which it calls I2SM (for In-Rotor Inductive-Excited Synchronous Motor). Besides not using any rare-earth elements, the motor offers a few other advantages over permanent-magnet synchronous motors. These stem from the fact that this technology allows precise control of the magnetic field in the rotor—something that’s not possible with permanent magnets. That control, in turn, permits varying the field to get much higher efficiency at high speed, for example.
With headquarters in Baden-Württemberg, Germany, ZF Friedrichshafen AG is known for a rich R&D heritage and many commercially successful innovations dating back to 1915, when it began supplying gears and other parts for Zeppelins. Today, the company has some 168,000 employees in 31 countries. Among the customers for its motors and electric drivetrains are Mercedes-Benz, BMW, and Jaguar Land Rover. (Late last year, shortly after announcing the I2SM, the company announced the sale of its 3,000,000th motor.)
Has ZF just shown the way forward for rare-earth-free EV motors? To learn more about the I2SM and ZF’s vision of the future of EV traction motors, Spectrum reached out to Otmar Scharrer, ZF’s senior vice president for R&D of electrified powertrain technology. Our interview with him has been edited for concision and clarity.
IEEE Spectrum: Why is it important to eliminate or to reduce the use of rare-earth elements in traction motors?
ZF Friedrichshafen AG’s Otmar Scharrer is leading a team discovering ways to build motors that don’t depend on permanent magnets—and China’s rare-earth monopolies.
Otmar Scharrer: Well, there are two reasons for that. One is sustainability. We call them “rare earth” because they really are rare in the earth. You need to move a lot of soil to get to these materials. Therefore, they have a relatively high footprint because, usually, they are dug out of the earth in a mine with excavators and huge trucks. That generates some environmental pollution and, of course, a change of the landscape. That is one thing. The other is that they are relatively expensive. And of course, this is something we always address cautiously as a tier one [automotive industry supplier].
And as a matter of fact, 95 percent of the rare earths are produced in China. And this means that if China decides no one else will have rare earths, we can do nothing against it. The recycling cycle [for rare-earth elements] will not work because there are just not enough electric motors out there; they are still in their active lifetimes. When you have a steep ramp-up in terms of volume, you can never satisfy your demand with recycling. Recycling will only work once you have a steady business and you’re just replacing the units that are failing. I’m sure this will come, but we see this much later, after the steep ramp-up has ended.
“The power density is the same as for a permanent-magnet machine, because we produce both. And I can tell you that there is no difference.”
—Otmar Scharrer, ZF Friedrichshafen AG
You had asked a very good question: How much rare-earth metal does a typical traction motor contain? I had to ask my engineers. This is an interesting question. Most of our electric motors are in the range of 150 to 300 kilowatts. This is the main range of power for passenger cars. And those motors typically have 1.5 kilograms of magnet material. And 0.5 percent to 1 percent of this material is pure [heavy rare-earth elements]. So this is not too much. It’s only about 7.5 to 15 grams. But, yes, it’s a very difficult-to-get material.
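Scharrer’s back-of-envelope figures check out in a couple of lines:

```python
# Heavy-rare-earth content per motor, from the figures in the interview:
# about 1.5 kg of magnet material, of which 0.5 to 1 percent is pure
# heavy rare-earth elements.
magnet_mass_g = 1.5 * 1000           # 1.5 kg, in grams
hre_low, hre_high = 0.005, 0.01      # 0.5 % to 1 %

low_g = magnet_mass_g * hre_low
high_g = magnet_mass_g * hre_high
print(f"{low_g:.1f} g to {high_g:.1f} g of heavy rare earths per motor")
# 7.5 g to 15.0 g of heavy rare earths per motor
```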
This is the reason for this [permanent-] magnet-free motor. The concept itself is not new. It has been used for years and years, for decades, because usually, power generation is done with this kind of electric machine. So if you have a huge power plant, for example, a gas power plant, then you would typically find such an externally-excited machine as a generator.
We did not use them for passenger cars or for mobile applications because of their weight and size. And some of that weight-and-size problem comes directly from the need to generate a magnetic field in the rotor, to replace the [permanent] magnets. You need to energize copper coils, which means you need to carry electric current into the rotor. This is usually done with slip rings: typically, you have carbon brushes touching a metal ring so that you can conduct the electricity. And those brushes generate losses.
Those brushes are what make the unit longer, axially, in the direction of the axle?
Scharrer: Exactly. That’s the point. And you need an inverter which is able to excite the electric machine. Normal inverters have three phases, and then you need a fourth phase to energize the rotor. And this is a second obstacle: many OEMs or e-mobility companies do not have this technology ready. Surprisingly enough, the first company to bring this into series production was Renault, with a very small car. [Editor's note: the model was the Zoe, which was manufactured from 2013 until March of this year.]
It had a relatively weak electric motor, just 75 or 80 kilowatts. They decided to do this because in an electric vehicle, there’s a huge advantage with this kind of externally excited machine. You can switch off and switch on the magnetic field. This is a great safety advantage. Why safety? Think about it. If your bicycle has a generator [for a headlight], it works like an electric motor. If you are moving and the generator is spinning, connected to the wheel, then it is generating electricity.
The same is happening in an electric machine in the car. If you are driving on the highway at 75 miles an hour, and then suddenly your whole system breaks down, what would happen? In a permanent magnet motor, you would generate enormous voltage because the rotor magnets are still rotating in the stator field. But in a permanent-magnet-free motor, nothing happens. You are just switched off. So it is self-secure. This is a nice feature.
And the second feature is even better if you drive at high speed. High speed is something like 75, 80, 90 miles an hour. It’s not too common in most countries. But it’s a German phenomenon, very important here.
People like to drive fast. Then you need to address the area of field weakening because [at high speed], the magnetic field would be too strong. You need to weaken the field. And if you don’t have [permanent] magnets, it’s easy: you just adapt the electrically-induced magnetic field to the appropriate value, and you don’t have this field-weakening requirement. And this results in much higher efficiency at high speeds.
You called this field weakening at high speed?
Scharrer: You need to weaken the magnetic field in order to keep the operation stable. With permanent magnets, this weakening requires additional electricity coming from the battery. And therefore, you have a lower efficiency of the electric motor.
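A minimal numeric sketch of the trade-off Scharrer describes. All constants here are assumed round numbers, not ZF’s:

```python
def back_emf(k_e: float, speed: float) -> float:
    # Back-EMF of a synchronous machine grows linearly with speed;
    # k_e is proportional to the rotor flux.
    return k_e * speed

V_DC = 400.0       # assumed inverter bus voltage, volts
K_E_FULL = 0.5     # assumed flux constant at full excitation, V*s/rad

# Base speed: where the back-EMF reaches the bus voltage.
base_speed = V_DC / K_E_FULL              # 800 rad/s

# Above base speed, a permanent-magnet machine must spend extra stator
# current to oppose its fixed rotor flux. A wound-rotor machine can
# simply lower the excitation, and with it k_e, to stay within V_DC.
speed = 1.5 * base_speed
k_e_weakened = V_DC / speed
assert back_emf(k_e_weakened, speed) <= V_DC
```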
What are the most promising concepts for future EV motors?
Scharrer: We believe that our concept is most promising because, as you pointed out a couple of minutes ago, an externally excited motor grows in axial length. We thought a lot about what we could do to overcome this obstacle. And we came to the conclusion: let’s do it inductively, by electromagnetic induction. This has been done by competitors as well, but they simply replaced the slip rings with inductive transmitters.
And this did not change the situation. What we did was shrink the inductive unit to the size of the rotor shaft, and then we put it inside the shaft. And therefore, we eliminated this 50-to-90-millimeter growth in axial length. As a final result, the motor shrinks, the housing gets smaller, you have less weight, and you have the same power density in comparison with a PSM [permanent-magnet synchronous motor] machine.
What is an inductive exciter exactly?
Scharrer: Inductive exciter means nothing else than that you transmit electricity without touching anything. You do it with a magnetic field. And we are doing it inside of the rotor shaft. This is where the energy is transmitted from outside to the shaft [and then to the rotor electromagnets].
So the rotor shaft, is that different from the motor shaft, the actual torque shaft?
Scharrer: It’s the same.
The thing I know with inductance is in a transformer, you have coils next to each other and you can induce a voltage from the energized coil in the other coil.
Scharrer: This is exactly what is happening in our rotor shafts.
So you use coils, specially designed, and you induce voltage from one to the other?
Scharrer: Yes. And we have a very neat, small package, which has a diameter of less than 30 millimeters. If you can shrink it to that value, then you can put it inside the rotor shaft.
So of course, if you have two coils spaced next to each other, there’s a gap between them. Since they’re not touching, they can spin independently. So you had to design something where the field could be transferred, where the coils could couple even though one of them was spinning.
Scharrer: We have a coil in the rotor shaft, which is rotating with the shaft. And then we have another one that is stationary inside the rotor shaft while the shaft rotates around it. And there is an air gap in between. Everything happens inside the rotor shaft.
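The principle at work is that of a rotary transformer: a sinusoidal current in the stationary coil induces a voltage in the co-rotating coil across the air gap, v2 = M * di1/dt. The mutual inductance, current, and frequency below are made-up values chosen to show orders of magnitude, not ZF’s design numbers:

```python
import math

def induced_voltage(M: float, i_peak: float, freq: float, t: float) -> float:
    """EMF induced in the secondary of two magnetically coupled coils,
    v2(t) = M * di1/dt, with a sinusoidal primary current
    i1(t) = i_peak * sin(2*pi*freq*t)."""
    omega = 2 * math.pi * freq
    return M * i_peak * omega * math.cos(omega * t)

# Illustrative values only: even a tiny mutual inductance yields a
# useful voltage if driven at a high enough frequency, which is why
# the exciter can shrink to fit inside a shaft under 30 mm across.
v_peak = induced_voltage(M=5e-6, i_peak=10.0, freq=100e3, t=0.0)  # ~31 V
```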
What is the efficiency? How much power do you lose?
Scharrer: We have an efficiency of approximately 96 percent. So, very little loss. And for the magnetic field, you don’t need a lot of energy: you need something between 10 and 15 kilowatts to excite the rotor. If we assume a transmitted power of 10 kilowatts, we’ll have losses of about 400 watts. This [relatively low level of loss] is important because we don’t cool the unit actively, and therefore it needs this kind of high efficiency.
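Those loss figures are consistent: at 96 percent efficiency, transmitting 10 kilowatts dissipates about 4 percent, or 400 watts:

```python
# Loss budget of the inductive exciter, using the interview's figures.
efficiency = 0.96
transmitted_w = 10_000.0   # 10 kW through the exciter

loss_w = transmitted_w * (1 - efficiency)
print(f"about {loss_w:.0f} W dissipated in the exciter")
```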
The motor isn’t cooled with liquids?
Scharrer: The motor itself is actively cooled, with oil, but the inductive unit is passively cooled, with heat transfer to nearby cooling structures.
What are the largest motors you’ve built, or think you can build, in kilowatts?
Scharrer: We don’t think that there is a limitation with this technology. We are convinced that we can build the same size, the same power level of electric motors as with the permanent magnets.
What have you done so far? What prototypes have you built?
Scharrer: We have a prototype with 220 kilowatts. And we can easily upgrade it to 300, for example. Or we can shrink it to 150. That is always easy.
And what is your specific power of this motor?
Scharrer: You mean kilowatts per kilogram? I can’t tell you, to be quite honest. It’s hard to compare, because it always depends on where the borderline is. You never have a motor by itself. You always need a housing as well. What part of the housing are you including in the calculation? But I can tell you one thing: The power density is the same as for a permanent-magnet machine because we produce both. And I can tell you that there is no difference.
What automakers do you currently have agreements with? Are you providing electric motors for certain automakers? Who are some of your customers now?
Scharrer: We are providing our dedicated hybrid transmissions to BMW, to Jaguar Land Rover, and our electric-axle drives to Mercedes-Benz and Geely Lotus, for example. And we are, of course, in development with a lot of other applications. And I think you understand that I cannot talk about that.
So for BMW, Land Rover, Mercedes-Benz, you’re providing electric motors and drivetrain components?
Scharrer: BMW and Land Rover. We provide dedicated hybrid transmissions. We provide an eight-speed automatic transmission with a hybrid electric motor up to 160 kilowatts. It’s one of the best hybrid transmissions because you can drive fully electrically with 160 kilowatts, which is quite something.
What were the major challenges you had to overcome, to transmit the power inside the rotor shaft?
Scharrer: The major challenge is, always, it needs to be very small. At the same time, it needs to be super reliable, and it needs to be easy.
A good invention is always easy. When you see it, if you look at good IP [intellectual property] as an engineer, then you say, “Okay, that looks nice”; it’s quite obvious that it’s a good idea. If the idea is complex and it needs to be explained and you don’t understand it, then usually it is not a good idea to implement. And this one is very easy. Straightforward. It’s a good idea: shrink it, put it into the rotor shaft.
So you mean very easy to explain?
Scharrer: Yes. Easy to explain because it’s obviously an interesting idea. You just say, “Let’s use part of the rotor shaft for the transmission of the electricity into the rotor shaft, and then we can cut the additional length out of the magnet-free motor.” Okay. That’s a good answer.
We have a lot of IP here. This is important because if you have the idea, I mean, the idea is the main thing.
What were the specific savings in weight, rotor-shaft length, and so on?
Scharrer: Well, again, I would just answer in a very general way. We achieved the same values, for power density and other characteristics, as for a [permanent] magnet motor. And this is really a breakthrough because according to our best knowledge, this never happened before.
Do you think the motor will be available before the end of this year or perhaps next year?
Scharrer: You mean available for a series application?
Yes. If Volkswagen came to you and said, “Look, we want to use this in our next car,” could you do that before the end of this year, or would it have to be 2025?
Scharrer: It would have to be 2025. I mean, technically, the electric motor is very far along. It is already in an A-sample status, which means we are...
What kind of status?
Scharrer: A-sample. In the automotive industry, you have A, B, or C. For A-sample, you have all the functions and all the features of the product, and those are secured. B-sample means you are no longer producing in the prototype shop but on something close to a possible series-production line. C-sample means you are producing on series fixtures and tools, but not yet on a [mass-production] line. So this is an A-sample, meaning it is about one and a half years away from a conventional SOP [“start of production”] with our customer. So we could be very fast.
This article was updated on 15 April 2024. An earlier version of this article gave an incorrect figure for the efficiency of the inductive exciter used in the motor. This efficiency is 96 percent, not 98 or 99 percent.