
Using AI to Clear Land Mines in Ukraine



Stephen Cass: Hello. I’m Stephen Cass, Special Projects Director at IEEE Spectrum. Before starting today’s episode hosted by Eliza Strickland, I wanted to give you all listening out there some news about this show.

This is our last episode of Fixing the Future. We’ve really enjoyed bringing you some concrete solutions to some of the world’s toughest problems, but we’ve decided we’d like to be able to go deeper into topics than we can in the course of a single episode. So we’ll be returning later in the year with a program of limited series that will enable us to do those deep dives into fascinating and challenging stories in the world of technology. I want to thank you all for listening and I hope you’ll join us again. And now, on to today’s episode.

Eliza Strickland: Hi, I’m Eliza Strickland for IEEE Spectrum’s Fixing the Future podcast. Before we start, I want to tell you that you can get the latest coverage from some of Spectrum’s most important beats, including AI, climate change, and robotics, by signing up for one of our free newsletters. Just go to spectrum.IEEE.org/newsletters to subscribe.

Around the world, about 60 countries are contaminated with land mines and unexploded ordnance, and Ukraine is the worst off. Today, about a third of its land, an area the size of Florida, is estimated to be contaminated with dangerous explosives. My guest today is Gabriel Steinberg, who co-founded both the nonprofit Demining Research Community and the startup Safe Pro AI with his friend, Jasper Baur. Their technology uses drones and artificial intelligence to radically speed up the process of finding land mines and other explosives. Okay, Gabriel, thank you so much for joining me on Fixing the Future today.

Gabriel Steinberg: Yeah, thank you for having me.

Strickland: So I want to start by hearing about the typical process for demining, the standard operating procedure. What tools do people use? How long does it take? What are the risks involved? All that kind of stuff.

Steinberg: Sure. So humanitarian demining hasn’t changed significantly. There have been evolutions, of course, since its inception around the end of World War I. But mostly, the processes have been the same. People stand at a safe location and walk around the area, in places that they know are safe, and try to get as much intelligence about the contamination as they can. They ask villagers or farmers, people who work and live around the area, about accidents and potential sightings of minefields and former battle positions. The result of this is a very general idea, a polygon, of where the contamination is. That first part is the non-technical survey. After that polygon is drawn, and after some prioritization based on danger to civilians and economic utility, the field goes into clearance. Clearance happens one of three ways, usually, but it always ends up with a person on the ground basically doing extreme gardening. They dig out a certain standard amount of the soil, usually 13 centimeters, and with a metal detector and a mine probe, they walk around the field. They find the land mines and unexploded ordnance. So that is always how it ends.

To get to that point, you can also use mechanical assets, which are large tillers, and sometimes dogs and other animals are used to walk in lanes across the contaminated polygon to sniff out the land mines and tell the clearance operators where the land mines are.

Strickland: How do you hope that your technology will change this process?

Steinberg: Well, my technology is a drone-based mapping solution, basically. So we provide software to the humanitarian deminers. They are already flying drones over these areas. Really, it started ramping up in Ukraine. The humanitarian demining organizations have started really adopting drones just because it’s such a massive problem. The extent is so extreme that they need to innovate. So we provide AI and mapping software for the deminers to analyze their drone imagery much more effectively. We hope that our software will decrease the amount of time that deminers spend analyzing the imagery of the land, thereby more quickly and more effectively constraining the areas with the most contamination. So if you can constrain an area, a polygon with a certainty of contamination and a high density of contamination, then you can deploy the most expensive parts of the clearance process, which are the humans and the machines and the dogs. You can deploy them to a very specific area, and you can much more cost-effectively and efficiently demine large areas.
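To make that workflow concrete, here is a minimal sketch, in Python, of the kind of tile-and-georeference loop such a system might run. The detector stub, tile size, and flat-earth coordinate math are illustrative assumptions, not Safe Pro’s actual implementation.

```python
# Illustrative sketch: scan a georeferenced drone orthomosaic tile by tile,
# run a detector on each tile, and return detection coordinates so deminers
# can draw a tight contamination polygon around the hits.
import numpy as np

TILE = 640  # tile edge in pixels (assumed)

def detect_ordnance(tile):
    """Stand-in for a trained object detector; returns (row, col) hits
    within the tile. A real system would run a neural network here."""
    return []

def pixel_to_latlon(row, col, origin, gsd):
    """Map pixel coordinates to lat/lon, assuming a north-up image with a
    known top-left origin (lat, lon) and ground sample distance in meters."""
    lat0, lon0 = origin
    meters_per_degree = 111_320.0  # rough value, ignores latitude correction
    return (lat0 - row * gsd / meters_per_degree,
            lon0 + col * gsd / meters_per_degree)

def scan(orthomosaic, origin, gsd=0.01):
    hits = []
    for r in range(0, orthomosaic.shape[0] - TILE + 1, TILE):
        for c in range(0, orthomosaic.shape[1] - TILE + 1, TILE):
            for dr, dc in detect_ordnance(orthomosaic[r:r + TILE, c:c + TILE]):
                hits.append(pixel_to_latlon(r + dr, c + dc, origin, gsd))
    return hits  # cluster these to constrain the search polygon
```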

Strickland: Got it. So it doesn’t replace the humans walking around with metal detectors and dogs, but it gets them to the right spots faster.

Steinberg: Exactly. Exactly. At the moment, there is no conception of replacing a human in demining operations, and people that try to push that eventuality are usually disregarded pretty quickly.

Strickland: How did you and your co-founder, Jasper, first start experimenting with the use of drones and AI for detecting explosives?

Steinberg: So it started in 2016 with my partner, Jasper Baur, doing a research project at Binghamton University in the remote sensing and geophysics lab. And the project was to detect a specific anti-personnel land mine, the PFM-1. It’s a Russian-made land mine. It was previously found in Afghanistan, and it still is, but it’s found in much higher quantities right now in Ukraine. And so his project was to detect the PFM-1 anti-personnel land mine using thermal imagery from drones. It sort of snowballed into quite an intensive research project. It produced multiple papers, multiple researchers, some awards, and most notably, it beat NASA at a particular Tech Briefs competition. So that was quite a morale boost.

And at some point, Jasper had the idea to integrate AI into the project. Rightfully, he saw the real bottleneck as not the detecting of land mines with drones, but the analysis of the drone imagery. And he knew, somehow, that that would really become the issue that everybody is facing. And everybody we talked to in Ukraine is facing that issue. So machine learning really was the key to solving that problem. And I joined the project in 2018 to integrate machine learning into the research project. We had some more papers, some more presentations, and we were nearing the end of our undergraduate degrees in 2020. At that time, we realized how much the field needed this. We started getting more and more into the mine action field, and realizing how neglected the field was in terms of technology and innovation. And we felt an obligation to bring our technology to the real world instead of it just being a research project. There were plenty of research projects about this, but we knew that it could be more, and that it really should be more. And for some reason, we felt like we had the capability to make that happen.

So we formed a nonprofit, the Demining Research Community, in 2020 to try to raise some funding for this project. The for-profit end of our endeavors was acquired by a company called Safe Pro Group in 2023, about one year ago exactly. And the drone and AI technology became Safe Pro AI and our flagship product, Spotlight. And that’s where we’re bringing the technology to the real world. The Demining Research Community is providing resources for other organizations who want to do a similar thing, and is doing more research into more nascent technologies. But yeah, the real drone and AI stuff that’s happening in the real world right now is through Safe Pro.

Strickland: So in that early undergraduate work, you were using thermal sensors. I know the Spotlight AI system now uses visual imagery. Can you talk about the different modalities of sensing explosives and the sort of trade-offs you get with them?

Steinberg: Sure. So I feel like I should preface this by saying that the more high tech and nascent a technology is, the more people want to see it applied to land mine detection. But really, from the problems people are facing, we have found that by far the most effective modality right now is just visual imagery. People have really good visual sensors built into their faces, and you don’t need a trained geophysicist to observe the data and very, very quickly get actionable intelligence. There are also plenty of other benefits. It’s cheaper and much more readily accessible, in Ukraine and around the world, to get drones with built-in visual sensors. And yeah, processing the data, and getting the intelligence from the data, is way easier than anything else.

I’ll talk about three different modalities. Well, I guess I could talk about four. There’s thermal, ground penetrating radar, magnetometry, and lidar. So thermal is what we started with. Thermal is really good at detecting living things, as I’m sure most people can surmise. But it’s also pretty good at detecting land mines, mostly large anti-tank land mines buried under a couple millimeters, or up to a couple centimeters, of soil. It’s not super good at this. The research is still not conclusive, and you have to do it at a very specific time of day, in the morning and at night, when the soil around the land mine basically heats up or cools down faster than the land mine and the sun causes a thermal anomaly. So it can detect land mines at some amount of depth, in certain soils, in certain weather conditions, and it can only detect certain types of land mines that are big and hefty enough. So yeah, that’s thermal.

Ground penetrating radar is really good for some things, but it’s not really great for land mine detection. You have to have really expensive equipment, and it takes a really long time to do the surveys. However, it can get plastic land mines under the surface, and it’s kind of the only modality that can do that with reliability. But you need trained geophysicists to analyze the data, and a lot of the time, the signatures are really non-unique, so there are going to be a lot of false positives. By the way, everything I’m referring to here is airborne. Ground-based GPR and magnetometry are used in demining of various types, but airborne is really what I’m talking about.

As for magnetometry, it’s more developed and more capable than ground penetrating radar. It’s actually used in the field in Ukraine in some scenarios, but it’s still very expensive. It needs a trained geophysicist to analyze the data, and the signatures are non-unique. So whether it’s a bottle cap or a small anti-personnel land mine, you really don’t know until you dig it up. However, if I were to bet on one of the other modalities becoming increasingly useful in the next couple of years, it would be airborne magnetometry.

Lidar is another modality that people use. It’s pretty quick, also very expensive, but it can reliably map and find surface anomalies. So if you want to find former fighting positions, sometimes an indicator of that is a trench line or foxholes. Lidar is really good at finding those, even from conflicts long ago. There’s a paper that the HALO Trust published about flying a lidar mission over former fighting positions, I believe in Angola, and they reliably found a former trench line. From that information, they confirmed it as a hazardous area. Because if there was a former front line at this position, you can pretty reliably say that there are going to be some explosives there.

Strickland: And so you’ve done some experiments with some of these modalities, but in the end, you found that the visual sensor was really the best bet for you guys?

Steinberg: Yeah. It’s different. The requirements are different for different scenarios and different locations, really. Ukraine has a lot of surface ordnance. Yeah. And that’s really the main factor that allows visual imagery to be so powerful.

Strickland: So tell me about what role machine learning plays in your Spotlight AI software system. Did you create a model trained on a lot of data showing land mines on the surface?

Steinberg: Yeah. Exactly. We used real-world data from inert, non-explosive items, and flew drone missions over them, and did some physical augmentation and some programmatic augmentation. But all of the items that we are training on are real-life Russian or American ordnance, mostly. We’re also using the real-world data in real minefields that we’re getting from Ukraine right now. That is, obviously, the most valuable data and the most effective in building a machine learning model. But yeah, a lot of our data is from inert explosives, as well.
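As a rough illustration of what programmatic augmentation can mean, here is a minimal NumPy sketch that multiplies one labeled image crop into many training variants. The specific transforms and parameters are assumptions for illustration, not a description of Safe Pro’s training pipeline.

```python
# Illustrative sketch: turn a small set of labeled drone-image crops of
# inert ordnance into many augmented training samples.
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    """img is an HxWx3 uint8 crop; returns one randomly transformed variant."""
    out = img.copy()
    if rng.random() < 0.5:                       # random horizontal flip
        out = out[:, ::-1]
    out = np.rot90(out, k=int(rng.integers(4)))  # random 90-degree rotation
    gain = rng.uniform(0.7, 1.3)                 # brightness jitter, e.g. to
    out = np.clip(out * gain, 0, 255)            # mimic different times of day
    return out.astype(np.uint8)

# One real image can seed dozens of training variants:
sample = rng.integers(0, 255, size=(640, 640, 3), dtype=np.uint8)
variants = [augment(sample) for _ in range(32)]
```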

Strickland: So you’ve talked a little bit about the current situation in Ukraine, but can you tell me more about what people are dealing with there? Are there a lot of areas where the battle has moved on and civilians are trying to reclaim roads or fields?

Steinberg: Yeah. So the fighting is constantly ongoing, obviously, in eastern Ukraine, but I think sometimes there’s a perspective of a stalemate. I think that’s a little misleading. There’s lots of action and violence happening on the front line, which constantly contaminates, cumulatively, the areas that are the front line and the gray zone, as well as areas up to 50 kilometers back from both sides. So there’s constantly artillery shells going into villages and cities along the front line. There’s constantly land mines, new mines, being laid to reinforce the positions. And there’s constantly mortars. And everything is constant. In some fights—I just watched the video yesterday—one of the soldiers said you could not count to five without an explosion going off. And this is just one location in one city along the front. So you can imagine the amount of explosive ordnance that are being fired, and inevitably 10, 20, 30 percent of them are sometimes not exploding upon impact, on top of all the land mines that are being purposely laid and not detonating from a vehicle or a person. These all just remain after the war. They don’t go anywhere. So yeah, Ukraine is really being littered with explosive ordnance and land mines every day.

This past year, there hasn’t been terribly much movement on the front line. But in the last major Ukrainian counteroffensive, when areas of Mykolaiv, in the south, were reclaimed, the civilians started repopulating almost immediately. There are definitely some villages that are heavily contaminated, that people just deserted and never came back to, and still haven’t come back to after being liberated. But a lot of the areas that have been liberated are people’s homes. And even if they’re destroyed, people would rather be in their homes than be refugees. And I mean, I totally understand that. It just puts the responsibility on the deminers and the Ukrainian government to try to clear the land as fast as possible. Because after large liberations are made, people want to come back almost all the time. So it is a very urgent problem as the lines change and as land is liberated.

Strickland: And I think it was about a year ago that you and Jasper went to Ukraine for a technology demonstration set up by the United Nations. Can you tell me about that, and what the task was, and how your technology fared?

Steinberg: Sure. So yeah, the United Nations Development Programme invited us to do a demonstration in northern Ukraine to see how our technology, and other technologies similar to it, performed at a military training facility there. So everybody who’s doing this kind of thing, which is not many people, but there are some other organizations, they have their own metrics and their own test fields. Not always, but it would be good if they did. The UNDP said, “No, we want to standardize this and try to give recommendations to the organizations on the ground who are trying to adopt these technologies.” So we had five hours to survey the field and collect as much data as we could. And then we had 72 hours to return the results. We—

Strickland: Sorry. How big was the field?

Steinberg: The field was 25 hectares. So yeah, the audience at home can convert 25 hectares into football fields. I think it’s about 60. But it’s a large area. So we’d never done anything like that. That was really, really a shock, that it was that large of an area. I think we’d only done half a hectare at a time up to that point. So yeah, it was pretty daunting. But we basically slept very, very little in those 72 hours, and as a result, produced what I think is one of the best results that the UNDP got from that test. We didn’t detect everything, but we detected most of the ordnance and land mines that they had laid. We also detected some that they didn’t know were there, because it was a military training facility. So there were some mortars being fired that they didn’t know about.
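As a quick sanity check on that conversion, assuming an American football playing field of roughly 4,460 square meters (100 by 53.3 yards, end zones excluded):

\[
25\ \text{ha} = 250{,}000\ \text{m}^2, \qquad \frac{250{,}000\ \text{m}^2}{4{,}460\ \text{m}^2\ \text{per field}} \approx 56\ \text{fields}
\]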

Strickland: And I think Jasper told me that you had to sort of rewrite your software on the fly. You realized that the existing approach wasn’t going to work, and you had to pull some all-nighters to recode?

Steinberg: Yeah. Yeah, I remember us sitting in a Georgian restaurant (Georgia the country, not the state), racking our brains, trying to figure out how we were going to map that amount of land. We had just found out how big the area was going to be, and we were a little bit stunned. So we devised a plan to do it in two stages. The first stage was to figure out, in the drone images, where the contaminated regions were. And then the second stage was to map those areas, just those areas. Now, our software can actually map the whole thing, and pretty casually too. So, not to brag. But at the time, we had a lot less development under our belt, and therefore we just had to brute force it through Georgian food and brainpower.

Strickland: You and Jasper just got back from another trip to Ukraine a couple of weeks ago, I think. Can you talk about what you were doing on this trip, and who you met with?

Steinberg: Sure. This trip was much less stressful, although stressful in different ways than the UNDP demo. Our main objectives were to see operations in action. We had never actually been to real minefields before. We’d been in some perhaps contaminated areas, but never in a real minefield where you can say, “Here was the Russian position. There are the land mines. Do not go there.” So that was one of the main objectives. It was very powerful for us to see the villages that were destroyed and are denied to the citizens because of land mines and unexploded ordnance. It’s impossible to describe how it feels being there. It’s really impactful, and it makes the work I’m doing feel like I don’t have a choice anymore. I feel very much obligated to do my absolute best to help these people.

Strickland: Well, I hope your work continues. I hope there’s less and less need for it over time. But yeah, thank you for doing this. It’s important work. And thanks for joining me on Fixing the Future.

Steinberg: My pleasure. Thank you for having me.

Strickland: That was Gabriel Steinberg speaking to me about the technology that he and Jasper Baur developed to help rid the world of land mines. I’m Eliza Strickland, and I hope you’ll join us next time on Fixing the Future.

Never Recharge Your Consumer Electronics Again?



Stephen Cass: Hello and welcome to Fixing the Future, an IEEE Spectrum podcast where we look at concrete solutions to tough problems. I’m your host Stephen Cass, a senior editor at IEEE Spectrum. And before I start, I just wanted to tell you that you can get the latest coverage of Spectrum’s most important beats, including AI, climate change, and robotics, by signing up for one of our free newsletters. Just go to spectrum.ieee.org/newsletters to subscribe.

We all love our mobile devices, where the progress of Moore’s Law has meant we’re able to pack an enormous amount of computing power into something small enough that we can wear it as jewelry. But their Achilles heel is power. They eat up battery life, requiring frequent battery changes or charging. One company that’s hoping to reduce our battery anxiety is Exeger, which wants to enable self-charging devices that convert ambient light into energy on the go. Here to talk about its so-called Powerfoyle solar cell technology is Exeger’s founder and CEO, Giovanni Fili. Giovanni, welcome to the show.

Giovanni Fili: Thank you.

Cass: So before we get into the details of the Powerfoyle technology, was I right in saying that the Achilles heel of our mobile devices is battery life? And if we could reduce or eliminate that problem, how would that actually influence the development of mobile and wearable tech beyond just not having to recharge as often?

Fili: Yeah. I mean, for sure, I think the global common pain point is battery anxiety in different ways, ranging from your mobile phone to your other portable devices, and of course even EVs, like cars and all that. So what we’re doing is trying to reduce or eliminate this battery anxiety by seamlessly integrating a solar cell. Our solar cell can convert any light energy to electrical energy, indoor, outdoor, from any angle. We’re not angle dependent. And the solar cell can take many forms. It can look like leather, textile, brushed steel, wood, carbon fiber, almost anything, and it can take light from all angles as well, and can be in different colors. It’s also very durable. So our idea is to integrate this flexible, thin film into any device and allow it to be self-powered, allowing for increased functionality in the device. Just look at smartwatches. I mean, the first ones that came out, you could wear them for a few hours, and you had to charge them. Then they packed them with more functionality, and you still had to charge them every day. And now they’re packed with even more stuff, and you still have to charge them every day, regardless. As soon as you get more energy efficiency, you pack in more functionality. So we’re enabling this sort of jump in functionality without compromising design, battery, sustainability, all of that. So yeah, it’s been a long journey since I started working on this 17 years ago.

Cass: I actually wanted to ask about that. So how is Exeger positioned to attack this problem? Because it’s not like you’re the first company to try and do nice mobile charging solutions for mobile devices.

Fili: I think the main thing that differentiates us from all other previous solutions is that we have invented a new electrode material. The structure is almost like a battery’s: we have an anode and a cathode, with an electrolyte in between. So this is a—

Cass: So just for readers who might not be familiar, a battery is basically this: you have an anode, which is the negative terminal—I hope I didn’t forget that—a cathode, which is the positive terminal, and then you have an electrolyte between them in the battery, and then chemical reactions between these three components, which can get kind of complicated, produce an electric potential between one side and the other. And in a solar cell, there’s also an anode and a cathode and so on. Have I got that right, in my little, brief sketch?

Fili: Yeah. Yeah. Yeah. And what we add to that architecture is one layer of titanium dioxide nanoparticles. Titanium dioxide is the white in white wall paint, toothpaste, sunscreen, all that. It’s a very safe and abundant material. We use that porous layer of titanium dioxide nanoparticles, and then we deposit a dye, a color, a pigment, on this layer. And this dye can be red, black, blue, green, any kind of color. The dye will then absorb the photons and excite electrons that are injected into the titanium dioxide layer, then collected by the anode and conducted out to the cable. And now we use the electrons to light a lamp or run a motor or whatever we do with them. And then they return to the cathode on the other side and back into the cell. So the electrons go the outer way, and the positive charge, you can say, goes the inner way, as ions in the electrolyte. So it’s a regenerative system.
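For reference, this is the canonical dye-sensitized solar cell cycle that Fili is describing, written here with the textbook iodide/triiodide electrolyte; Exeger’s exact dye and electrolyte chemistry are not specified in this conversation:

\[
\begin{aligned}
\text{dye} + h\nu &\longrightarrow \text{dye}^{*} && \text{(photon absorbed by the dye)}\\
\text{dye}^{*} &\longrightarrow \text{dye}^{+} + e^{-}(\mathrm{TiO_2}) && \text{(electron injected into the TiO}_2\text{ layer)}\\
2\,\text{dye}^{+} + 3\,\mathrm{I^{-}} &\longrightarrow 2\,\text{dye} + \mathrm{I_3^{-}} && \text{(dye regenerated by the electrolyte)}\\
\mathrm{I_3^{-}} + 2\,e^{-}(\text{cathode}) &\longrightarrow 3\,\mathrm{I^{-}} && \text{(cycle closed at the cathode)}
\end{aligned}
\]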

So our innovation is a new— I mean, all solar cells have electrodes to collect the electrons, whether you have silicon wafers or whatever, right? And you know that all these solar cells you’ve seen have silver lines crossing the surface. The silver lines are there because the conductivity in these materials is quite poor, funny enough. So, high resistance. So you need to deposit those silver lines, which are called current collectors, to collect the current. Our innovation is a new electrode material that has 1,000 times better conductivity than other flexible electrode materials. That allows us, as the only company in the world, to eliminate the silver lines. And we print all our layers as well. Just as you can print anything at your house, a photo, an apple with a bite in it, a name, we can print anything we want, and it will also convert light energy to electric energy. So, a solar cell.

Cass: So the key part is that the color dye is doing that initial work of converting the light. Do different colors affect the efficiency? I did see on your site that it comes in all these kinds of different colors. And I was thinking to myself, well, is the black one the best? Is the red one the best? Or is it relatively insensitive to the visible color that I see when I look at these dyes?

Fili: So you’re completely right there. Black would give you the most. And if you go to different colors, typically you lose like 20, 30 percent. But fortunately for us, over 50 percent of the consumer electronics market is black products. So that’s good. And you asked me how we’re positioned. I mean, with our totally unique integration possibilities, imagine this super thin, flexible film that works all day, every day, from morning to sunset, indoor, outdoor, and can look like leather. So we’ve made a leather bag, right? The leather bag is the solar cell. The entire bag is the solar cell. You wouldn’t see it. It just looks like a normal leather bag.

Cass: So when you talk about flexible, what do you actually mean? Sometimes when people talk about flexible electronics, they mean it can be put into a shape, but then you’re not supposed to bend it afterwards. When you’re talking about flexible electronics, you’re saying the entire thing remains flexible and you can use it flexibly, instead of just conforming it once to a shape and then leaving it alone.

Fili: Correct. So we just recently released a hearing protector with 3M, this great American company with more than 60,000 products across the world. We have a global exclusivity contract with them, and they have integrated our bendable, flexible solar film in the headband. So the headband is the solar cell, right? Where you previously had to change disposable batteries every second week, two batteries every second week, now you never need to change the battery again. The film just recharges a small rechargeable battery, indoors and outdoors; it just continues to charge all the time. And they have added a lot of extra, really cool new functionality as well. So we’re eliminating the need for disposable batteries. We’re saving millions and millions of batteries. We’re saving the end user, the contractor, the guy who uses them, a lot of the hassle of buying and storing these batteries. And we increase reliability and functionality, because they will always be charged. You can trust that they always work. So that’s where we are totally unique. The solar cell is super durable. If we can be in a professional hearing protector used at airports, construction sites, mines, factories, oil rig platforms, whatever, you can do almost anything. I don’t think any other solar cell would be able to pass the durability tests that we did. It’s crazy.

Cass: So I have a question that comes more from my experience with utility solar cells and things you put on roofs. How many watts per square meter can you deliver, say, in direct sunlight?

Fili: So our focus is on indirect sunlight, like shade, suboptimal light conditions, because that’s where you would typically be with these products. But if you compare to amorphous silicon, which is what you typically use in calculators and all that stuff, we are probably around twice what they deliver in those dark conditions, two to three times, depending. Whether you use glass or a flexible version, we’re probably three times or even more. So we don’t do full-sunshine, utility-scale solar. But if you look at these products, like the hearing protector, or the headphones we have done with Adidas and other huge brands, we typically recharge like four times what they use. So if you go outside, not in full sunshine but half sunshine, let’s say 50,000 lux, you’re probably talking about 13, 14 minutes to charge one hour of listening. So yeah, we have sold a few hundred thousand products over the last three years, since we started selling commercially. And, I don’t know, I haven’t heard of anyone who has charged since. I mean, surely someone has, but typically the user never needs to charge them again; they just charge themselves.

Cass: Well, that’s right, because for many years, I went to CES, and I often would buy, or acquire, these little solar cell chargers. And it was such a disappointing experience, because they really would only work in direct sunlight. And even then, it would take a very long time. So to get to that, what were some of the biggest challenges you had to overcome on the way to developing this tech?

Fili: I mean, this is the fourth commercial solar cell technology in the world, after 110 or something years of research. I mean, the Americans, Bell Laboratories, sent the first silicon cell to space, I think in like 1955 or something. And then there’s been this constant development, trying to find new ones. But to develop a new energy source is as close to impossible as you get, more or less. Everybody tried and everybody failed. We didn’t know that, luckily enough. So, the whole— when I try to explain this, I get this question quite a lot. Imagine you found out something really cool, but there’s no one to ask. There’s no book to read. You just realize, “Okay, I have to make hundreds of thousands, maybe millions, of experiments to learn. And all of them, except finally one, will fail. But that’s okay.” You will fail, fail, fail. And then, “Oh, here’s the solution. Something that works. Okay. Good.” So we had to build on constant failing, but that’s okay, because you’re in a research phase. So we had to. I mean, we started off with these new nanomaterials, and then we had to make components out of these materials. And then we had to make solar cells out of the components, but there were no machines either. We have had to invent all the machines from scratch as well, to make these components and the solar cells and some of the nanomaterials. That was also tough. How do you design a machine for something that doesn’t exist? It’s a pretty difficult specification to give to a machine builder. So in the end, we had to build our own machine-building capacity here. We’re like 50 guys building machines.

But now, I mean, today we have over 300 granted patents, and another 90 that will be approved soon. We have a complete machine park that’s proprietary. We are now building one of the largest solar cell factories in Europe. It’s already operational, phase one, and now we’re expanding into phase two. And we’re completely vertically integrated. We don’t source anything from Russia or China; we never did. Only the US, Japan, and Europe. We run the factories on 100 percent renewable energy. We have zero emissions to air and water. And we don’t have any rare earth metals, no strange stuff in it. It’s like it all worked out. And now we have signed, like I said, a global exclusivity deal with 3M. We have a global exclusivity deal with the largest company in the world in computer peripherals, like mice, keyboards, that stuff. They can only work with us for years. We have signed one of the big five, a huge American consumer electronics company; I can’t tell you the name yet. We have a globally exclusive deal for electronic shelf labels, the small price tags in the stores, with VusionGroup, the largest there. They have 50 percent of the world market as well, and they have Walmart, IKEA, Target, all these huge companies. So now it’s happening. We’re rolling out, starting to deploy massive volumes later this year.

Cass: So let’s talk a little bit about that commercial experience, because you talked about how you had to become vertically integrated. I mean, in Spectrum, we do cover other startups which have had these— they’re kind of starting from scratch. And they develop a technology, and it’s a great demo technology. But then there comes that point where you’re trying to integrate in as a supplier or as a technology partner with a large commercial entity, which has very specific ideas about how things are to be manufactured and delivered and so on. So can you talk a little bit about what it was like adapting to these partners like 3M, and what changes you had to make, and what things you learned in that process, where you go from, “Okay, we have a great product and we could make our own small products, but we want to now connect in as part of this larger supply chain”?

Fili: It’s a very good question, and it’s extremely tough. It’s a tough journey, right? Like to your point, these are the largest companies in the world. They have their way. And one of the first really tough lessons that we learned was that one factory wasn’t enough. We had to build two factories to have redundancy in manufacturing. Because single source is bad. Single source, single factory, that’s really bad. So we had to build two factories, and we had to show them we were ready, willing, and able to be a supplier to them. Because one thing is the product, right? But the second thing is, are you a worthy supplier? And that means: how much money do you have in the bank? Are you going to be here in two, three, four years? What are your ISO certifications like? REACH, RoHS, Prop 65. What’s your LCA? What’s your view on this? Blah, blah, blah. Do you have a professional supply chain? Did you do audits on your suppliers? But now, I mean, we’ve had audits here by five of the largest companies in the world, and we’ve passed them all. And so then you qualify as a worthy supplier. Then comes the product integration work, like you mentioned. And I think it’s a lot about— I mean, that’s our main feature. The main unique selling point with Exeger is that we can integrate into other people’s products. Because when you develop this kind of crazy technology— “Okay, so this is a solar cell. Wow. Okay.” And it can look like anything. And it works all the time. And all the other stuff: it’s sustainable and all that. Which product do you go for? So I asked myself— I’ve been an entrepreneur since the age of 15. I’ve started a number of companies. I lost so much money, I can’t believe it. And managed to earn a little bit more. But I realized, “Okay, how do you select? Where do you start? Which product?”

Okay, so I sat down. I was like, “When does something sell well? When do you see market success?” When something is important. When something is important, it’s going to work. It’s not about the best tech. It has to be important enough. And then you need distribution and scale and all that. Okay, how do you know if something is important? You can’t. Something new, you can’t know if it’s going to work. But what if you take something that already is important? If we can integrate into something that’s already selling in the billions of units per year, like headphones— I think this year, one billion headphones are going to be sold or something. Apparently, obviously, that’s important for people. Okay, let’s develop technology that can be integrated into something that’s already important, and let it stay as it is: keep all the good stuff, the design, the weight, the thickness, all of that, even improve the LCA, better for the environment. And it’s self-powered. And it will allow the user to participate and help a little bit toward a better world, right? With no charge cable, no charging in the wall, fewer batteries, and all that. So our strategy was to develop such a strong technology that we could integrate into these companies’ and partners’ products.

Cass: So I guess the question there is— you come to a company, and the company has its own internal development engineers. It’s got its own people coming up with product ideas and so on. How do you evangelize within that company? You get in the door, you show your demo, and you say to, say, a product manager who’s thinking of new product lines, “You guys should think about making products with our technology.” How do you get them to think, “Okay, yeah, I’m going to spend the next six months of my life betting on these headphones, on this technology that I didn’t invent, that I’m kind of trusting”? How do you get that internal buy-in with the internal engineers and the internal product developers and product managers?

Fili: That’s the Holy Grail, right? It’s very, very, very difficult. It takes a lot of time, and it’s very expensive. And I think you’re touching on the point, because they don’t have a guy, or a division or department, waiting to buy this flexible indoor solar cell that can look like leather. They don’t have anyone. Who’s going to buy? Who’s the decision maker? There is not one. There’s a bunch, right? Because this will affect the battery people. This will affect the antenna people. This will affect the branding people. It will affect the mechanics people, etc., etc., etc. So there are so many people that can say no. No one can say yes alone. All of them can say no alone. Any one of them can block the project, but to proceed, all of them have to say yes. So it’s a very, very tough equation. So that’s why, when we realized this, it was another big learning: we couldn’t go with one sales guy. We couldn’t go with two sales guys. We had to go with an entire team. We needed to bring our design guy, our branding person, our mechanics person, our software engineer. We had to go in with huge teams to be able to answer all the questions and mitigate and explain.

So we had to go both top down and bottom up, and explain to the head of product or head of sustainability, “Okay, if you have 100 million products out in five years and they’re going to be using 50 batteries per year, that’s 5 billion batteries per year. That’s not good, right? What if we can eliminate all these batteries? That’s good for sustainability.” “Okay. Good.” “That’s also good for total cost. We can lower the total cost of ownership.” “Okay, that’s also good.” “And you can sell it this and this and this way. And by the way, here’s a narrative we offer you. We have also made some assets: movies, pictures, texts. This is how other people talk about this.” But it’s a very, very tough start. How do you get the first big name in? And big companies have a lot to risk, a lot to lose as well. So my advice would be to start smaller. I mean, we started mainly due to COVID, to be honest. Because Sweden stayed open during COVID, which was great. We lived our lives almost like normal. But we couldn’t work with any international companies, because they were all closed or no one went to the office. So we had to turn to Swedish companies, and we developed a few products during COVID. We launched four or five products on the market with smaller Swedish companies, and we learned so much. And then we could just send these headphones to the large companies and tell them, “You know what? Here’s a headphone. Use it for a few months. We’ll call you later.” And then they call us: “You know what? We have used them for three months. No one has charged them. This is sick. It actually works.” We’re like, “Yeah, we know.” And that just made it so much easier. And now anyone who wants to make a deal with us can just buy these products anywhere, online or in-store, across the whole world, and try them for themselves.

And we send them samples as well; they can order development kits from our website. We have software, and we have partnered up with Qualcomm and other semiconductor companies. All the big electronics companies, we’re now qualified partners with them. So all the electronics is already in place, and now it’s very easy to build prototypes if you want to test something. We have offices across the world. So now it’s much easier. But my advice to anyone who would want to start with this is: try and get a few customers in. The important thing is that they also care about the project. If we go to one of these large companies, like 3M, they have 60,000 products. If they have 60,001, fine. But for us, it’s the project. And we have managed to land it in a way that makes it important for them now too, because it touches so many of the important areas that they work with.

Cass: So in terms of future directions for the technology, do you have a development pathway? What kind of future milestones are you hoping to hit?

Fili: For sure. So at the moment, we’re focusing on the consumer electronics market, IoT, smart home. I think the next big thing will be the smart workplace, where you see huge construction sites and other areas where we connect the workers, starting with the smart helmet. You get hit in your head, how hard was it? I mean, why can’t we tell you that? That’s just ridiculous. There are all these sensors already available; someone just needs to power the helmet. Location services. Is the right person in the right place, with the proper training or not? On the construction site, do you have the training to work with dynamite, for example, or heavy lifts or different stuff? So you can add geofencing at different sites. You can add health data, digital health tracking: pulse, breathing, temperature, different stuff. Compliance, of course. Are you following all the rules? Are you wearing your helmet? Is the helmet buttoned? Are you wearing the proper other gear, whatever it is? Otherwise, you can’t start your engine, or you can’t go into this site, or whatever. I think that’s going to greatly improve proactive safety and health, and increase profits for employers a lot at the same time. In a few years, I think we’re going to see the American unions become our best sales force. Because when they see the greatness of this whole system, they’re going to demand it in all tenders, all the biggest projects. They’re going to say, “Hey, we want to have the connected worker safety stuff here.” Because if you’re working, you can stream music, talk to your colleagues, and enjoy connected safety without invading privacy, knowing that you’re good. If you fall over, if you faint, if you get a heart attack, whatever, in a few seconds the right people will know, and they will take the appropriate actions. It’s just really, really cool, this stuff.

Cass: Well, it’ll be interesting to see how that turns out. But I’m afraid that’s all we have time for today, although this is fascinating. So, Giovanni, I want to thank you very much for coming on the show.

Fili: Thank you so much for having me.

Cass: So today we were talking with Giovanni Fili, who is Exeger’s founder and CEO, about their new flexible Powerfoyle solar cell technology. For IEEE Spectrum’s Fixing the Future, I’m Stephen Cass, and I hope you’ll join me next time.

The UK's ARIA Is Searching For Better AI Tech



Dina Genkina: Hi, I’m Dina Genkina for IEEE Spectrum’s Fixing the Future. Before we start, I want to tell you that you can get the latest coverage from some of Spectrum’s most important beats, including AI, climate change, and robotics, by signing up for one of our free newsletters. Just go to spectrum.ieee.org/newsletters to subscribe. And today our guest on the show is Suraj Bramhavar. Recently, Bramhavar left his job as a co-founder and CTO of Sync Computing to start a new chapter. The UK government has just founded the Advanced Research and Invention Agency, or ARIA, modeled after the US’s own DARPA funding agency. Bramhavar is heading up ARIA’s first program, which officially launched on March 12th of this year. Bramhavar’s program aims to develop new technology to make AI computation 1,000 times more cost efficient than it is today. Suraj, welcome to the show.

Suraj Bramhavar: Thanks for having me.

Genkina: So your program wants to reduce AI training costs by a factor of 1,000, which is pretty ambitious. Why did you choose to focus on this problem?

Bramhavar: So there are a couple of reasons why. The first one is economic. I mean, AI is basically set to become the primary economic driver of the entire computing industry. And to train a modern large-scale AI model costs somewhere between 10 million and 100 million pounds now. And AI is really unique in the sense that the capabilities grow with more computing power thrown at the problem. So there’s kind of no sign of those costs coming down anytime in the future. And this has a number of knock-on effects. If I’m a world-class AI researcher, I basically have to choose whether I go work for a very large tech company that has the compute resources available for me to do my work, or go raise 100 million pounds from some investor to be able to do cutting-edge research. And this has a variety of effects. It dictates, first off, who gets to do the work, and also what types of problems get addressed. So that’s the economic problem. And then separately, there’s a technological one, which is that all of this stuff that we call AI is built upon a very, very narrow set of algorithms and an even narrower set of hardware. And this has scaled phenomenally well. And we can probably continue to scale along kind of the known trajectories that we have. But it’s starting to show signs of strain. Like I just mentioned, there’s an economic strain, there’s an energy cost to all this. There are logistical supply chain constraints. And we’re seeing this now with kind of the GPU crunch that you read about in the news.

And in some ways, the strength of the existing paradigm has kind of forced us to overlook a lot of possible alternative mechanisms that we could use to kind of perform similar computations. And this program is designed to kind of shine a light on those alternatives.

Genkina: Yeah, cool. So you seem to think that there’s potential for pretty impactful alternatives that are orders of magnitude better than what we have. So maybe we can dive into some specific ideas of what those are. In the thesis that you wrote up for the start of this program, you talk about natural computing systems, computing systems that take some inspiration from nature. So can you explain a little bit what you mean by that, and what some examples are?

Bramhavar: Yeah. So when I say natural-based or nature-based computing, what I really mean is any computing system that either takes inspiration from nature to perform the computation or utilizes physics in a new and exciting way to perform computation. So, for example, people have heard about neuromorphic computing. Neuromorphic computing fits into this category, right? It takes inspiration from nature and usually performs a computation, in most cases using digital logic. But that represents a really small slice of the overall breadth of technologies that incorporate nature. And part of what we want to do is highlight some of those other possible technologies. So what do I mean when I say nature-based computing? I think we have a solicitation call out right now, which calls out a few things that we’re interested in. Things like new types of in-memory computing architectures, rethinking AI models from an energy context. And we also call out a couple of technologies that are pivotal for the overall system to function, but are not necessarily so eye-catching, like how you interconnect chips together, and how you simulate a large-scale system of any novel technology outside of the digital landscape. I think these are critical pieces to realizing the overall program goals. And we want to put some funding towards kind of boosting that work up as well.

Genkina: Okay, so you mentioned neuromorphic computing is a small part of the landscape that you’re aiming to explore here. But maybe let’s start with that. People may have heard of neuromorphic computing, but might not know exactly what it is. So can you give us the elevator pitch of neuromorphic computing?

Bramhavar: Yeah, my translation of neuromorphic computing— and this may differ from person to person, but my translation of it is when you encode the information in a neural network via spikes rather than discrete values. And that modality has been shown to work pretty well in certain situations. So if I have some camera, and I need a neural network next to that camera that can recognize an image with very, very low power or very, very high speed, neuromorphic systems have been shown to work remarkably well. And they’ve worked in a variety of other applications as well. One of the things that I haven’t seen, or maybe one of the drawbacks of that technology that I would love to see someone solve, is being able to use that modality to train large-scale neural networks. So if people have ideas on how to use neuromorphic systems to train models at commercially relevant scales, we would love to hear about them, and they should submit to this program call, which is out.
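A minimal sketch of that spiking idea, using a textbook leaky integrate-and-fire neuron; the parameters are illustrative and not tied to any particular neuromorphic chip:

```python
# Illustrative sketch: a leaky integrate-and-fire neuron turns a continuous
# input current into a train of discrete spikes; information is carried by
# spike rate and timing rather than by a continuous activation value.
import numpy as np

def lif(current, dt=1e-3, tau=0.02, v_th=1.0, v_reset=0.0):
    """Simulate one neuron; current is an array with one input per time step."""
    v, spikes = 0.0, []
    for i_t in current:
        v += dt / tau * (i_t - v)   # leaky integration of the input
        if v >= v_th:               # threshold crossed: emit a spike
            spikes.append(1)
            v = v_reset             # reset the membrane potential
        else:
            spikes.append(0)
    return np.array(spikes)

# A stronger input produces a higher spike rate:
weak = lif(np.full(1000, 1.05))
strong = lif(np.full(1000, 3.0))
print(weak.sum(), strong.sum())  # e.g. ~16 spikes vs. ~125 spikes
```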

Genkina: Is there a reason to expect that these kinds of— that neuromorphic computing might be a platform that promises these orders of magnitude cost improvements?

Bramhavar: I don’t know. I mean, I don’t know actually if neuromorphic computing is the right technological direction to realize these types of orders-of-magnitude cost improvements. It might be, but I think we’ve intentionally designed the program to encompass more than just that particular technological slice of the pie, in part because it’s entirely possible that that is not the right direction to go, and there are other, more fruitful directions to put funding towards. Part of what we’re thinking about when we’re designing these programs is we don’t really want to be prescriptive about a specific technology, be it neuromorphic computing or probabilistic computing or any particular thing that has a name that you can attach to it. Part of what we tried to do is set a very specific goal, or a problem that we want to solve, put out a funding call, and let the community tell us which technologies they think can best meet that goal. And that’s the way we’ve been trying to operate with this program specifically. So there are particular technologies we’re intrigued by, but I don’t think we have any one of them selected as, like, this is the path forward.

Genkina: Cool. Yeah, so you’re kind of trying to see what architecture needs to happen to make computers as efficient as brains or closer to the brain’s efficiency.

Bramhavar: And you kind of see this happening in the AI algorithms world. As these models get bigger and bigger and grow their capabilities, they’re starting to introduce things that we see in nature all the time. I think probably the most relevant example is Stable Diffusion, this neural network model where you can type in text and generate an image. It’s got diffusion in the name. Diffusion is a natural process. Noise is a core element of this algorithm. And so there are lots of examples like this where that community is taking bits and pieces, or inspiration, from nature and implementing it into these artificial neural networks. But in doing that, they’re doing it incredibly inefficiently.

Genkina: Yeah. Okay, so great. So the idea is to take some of the efficiencies out in nature and kind of bring them into our technology. And I know you said you’re not prescribing any particular solution and you just want that general idea. But nevertheless, let’s talk about some particular solutions that have been worked on in the past because you’re not starting from zero and there are some ideas about how to do this. So I guess neuromorphic computing is one such idea. Another is this noise-based computing, something like probabilistic computing. Can you explain what that is?

Bramhavar: Noise is a very intriguing property. And there are kind of two ways I’m thinking about noise. One is just: how do we deal with it? When you’re designing a digital computer, you’re effectively designing noise out of your system, right? You’re trying to eliminate noise, and you go through great pains to do that. And as soon as you move away from digital logic into something a little bit more analog, you spend a lot of resources fighting noise. And in most cases, you eliminate any benefit that you get from your kind of newfangled technology, because you have to fight this noise. But in the context of neural networks, what’s very interesting is that over time, we’ve seen algorithms researchers discover that they actually didn’t need to be as precise as they thought they needed to be. You’re seeing the precision requirements of these networks come down over time, and we really haven’t hit the limit there, as far as I know. And so with that in mind, you start to ask the question, “Okay, how precise do we actually have to be with these types of computations to perform the computation effectively?” And if we don’t need to be as precise as we thought, can we rethink the types of hardware platforms that we use to perform the computations?
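The precision point is easy to demonstrate numerically. This small sketch, with arbitrary sizes and a simple symmetric quantization scheme, stores a layer’s weights as 8-bit integers and measures how little the output moves:

```python
# Illustrative sketch: quantize a random layer's weights to int8 and compare
# its output against full precision.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(512)                # input activations
W = rng.standard_normal((256, 512)) * 0.05  # full-precision weights

scale = np.abs(W).max() / 127.0             # symmetric int8 quantization
W_q = np.round(W / scale).astype(np.int8)   # weights stored in 8 bits

y_full = W @ x
y_quant = (W_q.astype(np.float64) * scale) @ x

rel_err = np.linalg.norm(y_full - y_quant) / np.linalg.norm(y_full)
print(f"relative output error with int8 weights: {rel_err:.4%}")  # ~1%
```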

So that’s one angle is just how do we better handle noise? The other angle is how do we exploit noise? And so there’s kind of entire textbooks full of algorithms where randomness is a key feature. I’m not talking necessarily about neural networks only. I’m talking about all algorithms where randomness plays a key role. Neural networks are kind of one area where this is also important. I mean, the primary way we train neural networks is stochastic gradient descent. So noise is kind of baked in there. I talked about stable diffusion models like that where noise becomes a key central element. In almost all of these cases, all of these algorithms, noise is kind of implemented using some digital random number generator. And so there the thought process would be, “Is it possible to redesign our hardware to make better use of the noise, given that we’re using noisy hardware to start with?” Notionally, there should be some savings that come from that. That presumes that the interface between whatever novel hardware you have that is creating this noise, and the hardware you have that’s performing the computing doesn’t eat away all your gains, right? I think that’s kind of the big technological roadblock that I’d be keen to see solutions for, outside of the algorithmic piece, which is just how do you make efficient use of noise.
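For a textbook example of an algorithm where randomness is the essential ingredient rather than a nuisance, consider Monte Carlo estimation. The sketch below feeds it from NumPy’s pseudo-random generator, which is exactly the spot where, in principle, a physical noise source could sit:

```python
# Illustrative sketch: Monte Carlo estimation of pi, driven entirely by a
# random number source. Here that source is a digital PRNG; the question
# raised above is whether noisy hardware could supply it more cheaply.
import numpy as np

rng = np.random.default_rng(42)  # stand-in for a hardware noise source

def estimate_pi(n_samples):
    # Points uniform in the unit square; the fraction landing inside the
    # quarter circle of radius 1 approaches pi/4 as samples grow.
    xy = rng.random((n_samples, 2))
    inside = (xy ** 2).sum(axis=1) <= 1.0
    return 4.0 * inside.mean()

for n in (1_000, 100_000, 10_000_000):
    print(f"{n:>10,} samples -> pi is roughly {estimate_pi(n):.4f}")
```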

When you’re thinking about implementing it in hardware, it becomes very, very tricky to implement it in a way where whatever gains you think you had are actually realized at the full system level. And in some ways, we want the solutions to be very, very tricky. The agency is designed to fund very high risk, high reward types of activities. And so, in some ways, there shouldn’t be consensus around a specific technological approach. Otherwise, somebody else would have likely funded it.

Genkina: You’re already becoming British. You said you were keen on the solution.

Bramhavar: I’ve been here long enough.

Genkina: It’s showing. Great. Okay, so we talked a little bit about neuromorphic computing. We talked a little bit about noise. And you also mentioned some alternatives to backpropagation in your thesis. So maybe first, can you explain for those that might not be familiar what backpropagation is and why it might need to be changed?

Bramhavar: Yeah, so this algorithm is essentially the bedrock of all the AI training you use today. Essentially, what you’re doing is you have this large neural network. The neural network is composed of— you can think about it as this long chain of knobs. And you really have to tune all the knobs just right in order to get this network to perform a specific task, like when you give it an image of a cat, it says that it is a cat. And so what backpropagation allows you to do is to tune those knobs in a very, very efficient way. Starting from the end of your network, you kind of tune the knob a little bit, see if your answer gets a little bit closer to what you’d expect it to be. Use that information to then tune the knobs in the previous layer of your network and keep on doing that iteratively. And if you do this over and over again, you can eventually find all the right positions of your knobs such that your network does whatever you’re trying to do. And so this is great. Now, the issue is every time you tune one of these knobs, you’re performing this massive mathematical computation. And you’re typically doing that across many, many GPUs. And you do that just to tweak the knob a little bit. And so you have to do it over and over and over and over again to get the knobs where they need to go.
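
For readers who want the knobs made literal, here is a tiny sketch: a two-layer network trained with backpropagation on the XOR toy problem. The sizes, seed, and learning rate are arbitrary illustrations, not anything from the ARIA program.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # first layer of "knobs"
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # second layer of "knobs"
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: what does the network currently answer?
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: push the error back through the layers to find
    # how much each knob contributed to it.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Nudge every knob a little in the direction that shrinks the error.
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]
```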

There’s a whole bevy of algorithms. What you’re really doing is kind of minimizing error between what you want the network to do and what it’s actually doing. And if you think about it along those terms, there’s a whole bevy of algorithms in the literature that kind of minimize energy or error in that way. None of them work as well as backpropagation. In some ways, the algorithm is beautiful and extraordinarily simple. And most importantly, it’s very, very well suited to be parallelized on GPUs. And I think that is part of its success. But one of the things I think both algorithmic researchers and hardware researchers fall victim to is this chicken and egg problem, right? Algorithms researchers build algorithms that work well on the hardware platforms that they have available to them. And at the same time, hardware researchers develop hardware for the existing algorithms of the day. And so one of the things we want to try to do with this program is blend those worlds and allow algorithms researchers to think about what is the field of algorithms that I could explore if I could rethink some of the bottlenecks in the hardware that I have available to me. Similarly in the opposite direction.

Genkina: Imagine that you succeeded at your goal, and the program and the wider community came up with an architecture, both hardware and software together, with 1/1,000th the compute cost. What does your gut say that would look like? Just an example. I know you don’t know what’s going to come out of this, but give us a vision.

Bramhavar: Similarly, like I said, I don’t think I can prescribe a specific technology. What I can say is that— I can say with pretty high confidence, it’s not going to just be one particular technological kind of pinch point that gets unlocked. It’s going to be a systems-level thing. So there may be individual technology at the chip level or the hardware level. Those technologies then also have to meld with things at the systems level as well and the algorithms level as well. And I think all of those are going to be necessary in order to reach these goals. I’m talking kind of generally, but what I really mean is, like I said before, we’ve got to think about new types of hardware. We also have to think about, “Okay, if we’re going to scale these things and manufacture them in large volumes cost effectively, we’re going to have to build larger systems out of building blocks of these things. So we’re going to have to think about how to stitch them together in a way that makes sense and doesn’t eat away any of the benefits. We’re also going to have to think about how to simulate the behavior of these things before we build them.” I think part of the power of the digital electronics ecosystem comes from the fact that you have Cadence and Synopsys and these EDA platforms that allow you, with very high accuracy, to predict how your circuits are going to perform before you build them. And once you get out of that ecosystem, you don’t really have that.

So I think it’s going to take all of these things in order to actually reach these goals. And I think part of what this program is designed to do is kind of change the conversation around what is possible. So by the end of this, it’s a four-year program. We want to show that there is a viable path towards this end goal. And that viable path could incorporate kind of all of these aspects of what I just mentioned.

Genkina: Okay. So the program is four years, but you don’t necessarily expect a finished product of a 1/1,000th-cost computer by the end of the four years, right? You kind of just expect to develop a path towards it.

Bramhavar: Yeah. I mean, ARIA was kind of set up with this kind of decadal time horizon. We want to push out-- we want to fund, as I mentioned, high-risk, high-reward technologies. We have this kind of long time horizon to think about these things. I think the program is designed around four years in order to kind of shift the window of what the world thinks is possible in that timeframe. And in the hopes that we change the conversation, other folks will pick up this work at the end of that four years, and it will have this kind of large-scale impact on a decadal timescale.

Genkina: Great. Well, thank you so much for coming today. Today we spoke with Dr. Suraj Bramhavar, lead of the first program headed up by the UK’s newest funding agency, ARIA. He filled us in on his plans to reduce AI costs by a factor of 1,000, and we’ll have to check back with him in a few years to see what progress has been made towards this grand vision. For IEEE Spectrum, I’m Dina Genkina, and I hope you’ll join us next time on Fixing the Future.

U.S. Commercial Drone Delivery Comes Closer



Stephen Cass: Hello and welcome to Fixing the Future, an IEEE Spectrum podcast where we look at concrete solutions to tough problems. I’m your host, Stephen Cass, a senior editor at IEEE Spectrum. And before I start, I just want to tell you that you can get the latest coverage of some of Spectrum’s most important beats, including AI, climate change, and robotics, by signing up for one of our free newsletters. Just go to spectrum.ieee.org/newsletters to subscribe. We’ve been covering the drone delivery company Zipline in Spectrum for several years, and I do encourage listeners to check out our great onsite reporting from Rwanda in 2019 when we visited one of Zipline’s dispatch centers for delivering vital medical supplies into rural areas. But now it’s 2024, and Zipline is expanding into commercial drone delivery in the United States, including into urban areas, and hitting some recent milestones. Here to talk about some of those milestones today, we have Keenan Wyrobek, Zipline’s co-founder and CTO. Keenan, welcome to the show.

Keenan Wyrobek: Great to be here. Thanks for having me.

Cass: So before we get into what’s going on with the United States, can you first catch us up on how things have been going on with Rwanda and the other African countries you’ve been operating in?

Wyrobek: Yeah, absolutely. So we’re now operating in eight countries, including here in the US. That includes a handful of countries in Africa, as well as Japan and Europe. So in Africa, it’s really exciting. The scale is really impressive, basically. We started eight years ago with blood, then moved into vaccine delivery and delivering many other things in the healthcare space, as well as outside the healthcare space. We can talk a little bit about things like animal husbandry and other areas. The scale is really what’s exciting. We have a single distribution center there that now regularly flies more than the equivalent of once around the Earth’s equator every day. And that’s just one of a whole bunch of distribution centers. That’s where we are really with that operation today.

Cass: So could you talk a little bit about those non-medical systems? Because this was very much how we’d seen blood being parachuted down from these drones and reaching those distant centers. What other things are you delivering there?

Wyrobek: Yeah, absolutely. So start with blood, like you said, then vaccines. We’ve now delivered well over 15 million vaccine doses, lots of other pharmaceutical use cases to hospitals and clinics, and more recently, patient home delivery for chronic care of things like hypertension, HIV-positive patients, and things like that. And then, yeah, we moved into some really exciting use cases in things like animal husbandry. One that I’m personally really excited about is supporting these genetic diversity campaigns. It’s one of those things that’s very unglamorous, but really impactful. One of the main sources of protein around the world is cow’s milk. And it turns out the difference between a non-genetically diverse cow and a genetically diverse cow can be a 10x difference in milk production. And so one of the things we deliver is bull semen. We’re very good at the cold chain involved in that, which we mastered with vaccines and blood. And that’s just one of many things we’re doing in other spaces outside of healthcare directly.

Cass: Oh, fascinating. So turning now to the US, it seems like there’s been two big developments recently. One is you’re getting close to deploying Platform 2, which has some really fascinating tech that allows packages to be delivered very precisely by tether. And I do want to talk about that later. But first, I want to talk about a big milestone you had late last year. And this was something that goes by the very unlovely acronym of a BVLOS flight. Can you tell us what BVLOS stands for and why that flight was such a big deal?

Wyrobek: Yeah, “beyond visual line of sight.” And so that is basically, before this milestone last year, all drone deliveries, all drone operations in the US were done by people standing on the ground, looking at the sky, maintaining that line of sight. And that’s how basically we made sure that the drones were staying clear of aircraft. This is true of everybody. Now, this is important because in places like the United States, many aircraft don’t and aren’t required to carry a transponder, right? Transponders are where they transmit a radio signal with their location that our drones can listen to and use to maintain separation. And so the holy grail of basically scalable drone operations (because it’s physically impossible, of course, to have people standing around all over the world staring at the sky) is a sensing solution where you can sense those aircraft and avoid those aircraft. And this is something we’ve been working on for a long time and got the approval for late last year with the FAA, the first-ever use of sensors to detect and avoid for maintaining safety in the US airspace, which is just really, really exciting. That’s now been in operation at two distribution centers here, one in Utah and one in Arkansas, ever since.

Cass: So could you just tell us a little bit about how that tech works? It just seems to be quite advanced to trust a drone to recognize, “Oh, that is an actual airplane that’s a Cessna that’s going to be here in about two minutes and is a real problem,” or, “No, it’s a hawk, which is just going about his business and I’m not going to ever come close to it at all because it’s so far away.”

Wyrobek: Yeah, this is really fun to talk about. So just to start with what we’re not doing, because most people expect us to use either radar for this or cameras for this. And basically, those don’t work. With radar, you would need such a heavy radar system to see 360 degrees all the way around your drone. And this is really important, because there are two things to keep in mind. One is we’re not talking about autonomous driving, where cars are close together. Aircraft never want to be as close together as cars are on a road, right? We’re talking about maintaining hundreds of meters of separation, and so you have to sense at a long distance. And drones don’t have right of way. So what that means is even if a plane’s coming up behind the drone, you’ve got to sense that plane and get out of the way. And so to have enough radar on your drone that you can actually see far enough to maintain that separation in every direction, you’re talking about something that weighs many times the weight of the drone, and it just doesn’t physically close. And so we started there, because that’s where we assumed, and many people assume, is the place to start. Then we looked at cameras. Cameras have lots of drawbacks. And fundamentally— we’ve all had this: you’ve taken your phone and tried to take a picture of an airplane, and you look at the picture, and you can’t see the airplane. It takes so many pixels and perfectly clean lenses to see an aircraft a kilometer or two away that it really just is not practical or robust enough. And that’s when we went back to the drawing board, and it ended up where we ended up, which is using an array of microphones to listen for aircraft, which works very well at very long distances, to then maintain separation from those other aircraft.
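
To illustrate the core signal-processing idea behind listening for aircraft (this is generic textbook acoustics, not Zipline’s actual implementation), here is a sketch that estimates the time difference of arrival of a sound at two microphones by cross-correlation and turns the delay into a bearing. All the numbers (sample rate, microphone spacing, test tone) are invented for the demo.

```python
import numpy as np

fs = 48_000        # sample rate in Hz (assumed)
spacing = 0.5      # meters between the two microphones (assumed)
c = 343.0          # speed of sound in m/s

def tdoa(sig_a, sig_b, fs):
    """How much later (in seconds) the signal arrives at mic B than mic A."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(corr) - (len(sig_a) - 1)
    return lag / fs

# Fake test signal: a 180 Hz engine-like tone, arriving 0.5 ms later at mic B.
t = np.arange(0, 0.1, 1 / fs)
tone = np.sin(2 * np.pi * 180 * t)
mic_a = tone
mic_b = np.roll(tone, int(0.0005 * fs))

dt = tdoa(mic_a, mic_b, fs)
# Simple far-field geometry: sin(angle) = (delay * speed of sound) / spacing.
angle = np.degrees(np.arcsin(np.clip(dt * c / spacing, -1.0, 1.0)))
print(f"delay {dt * 1e3:.2f} ms -> bearing about {angle:.0f} degrees")
```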

Cass: So yeah, let’s talk about Platform 2 a little bit more, because I should first explain, for listeners who maybe aren’t familiar with Zipline, that these are not the kind of little, purely helicopter-like drones. These are fixed-wing aircraft with sort of loiter and hovering capabilities. So they’re not like your Mavic drones and so on. They have a capacity for long-distance flight, which is what that fixed wing gives them.

Wyrobek: Yeah. And maybe to jump into Platform 2— maybe starting with Platform 1, what does it look like? So Platform 1 is what we’ve been operating around the world for years now. And this basically looks like a small airplane, right? In the industry it’s referred to as a fixed-wing aircraft. And it’s fixed wing because, to solve the problem of going from a metro area to surrounding countryside, really two things matter: long range and low cost. And a fixed-wing aircraft has something like an 800 percent advantage in range and cost over something that can hover. And that’s why we did fixed wing, because it actually works for our customers, for their needs, for that use case. Platform 2 is all about, how do you deliver to homes and in metro areas where you need an incredible amount of precision to deliver to nearly every home? And so Platform 2—we call our drones zips—flies out to the delivery site. Instead of floating a package down to a customer like Platform 1 does, it hovers. Platform 2 hovers and lowers down what we call a droid. So the droid is on a tether. The drone stays way up high, about 100 meters up, and lowers the droid down. And the droid itself can fly. Right? So you can think of it as the tether does the heavy lifting, but the droid has fans. So if it gets hit by a gust of wind or whatnot, it can still stay very precisely on track and come in and deliver to a very small area, put the package down, and then be out of there seconds later.

Cass: So let me get this right. Platform 2 is kind of a combo, fixed wing and rotor wing. It’s a VTOL, like that. I’m cheating here a little bit because my colleague Evan Ackerman has a great Q&A on the Spectrum website with you and some of your team members about the nitty-gritty of how that design evolved. But first off, there’s a little droid thing at the end of the tether. How much extra precision do all those fans and stuff give you?

Wyrobek: Oh, massive, right? We can come down and hit a target within a few centimeters of where we want to deliver, which means we can deliver almost anywhere. If you have a small back porch, which is really common, right, in a lot of urban areas— a small back porch or a small place on your roof or something like that— we can still just deliver as long as we have a few feet of open space. And that’s really powerful for being able to serve our customers. And a lot of people think of Platform 2 as like, “Hey, it’s a slightly better way of doing maybe a DoorDash-style operation, people in cars driving around.” And to be clear, it’s not slightly better. It’s massively better: much faster, more environmentally friendly. But we have many contracts for Platform 2 in the health space, with US health system partners and health systems around the world. And what’s powerful about these customers in terms of their needs is they really need to serve all of their customers. And this is where our engineering effort goes: how do you make a system that doesn’t just kind of work for some folks who can use it if they want to? A health system is like, “No, I want this to work for everybody in my health network.” And so how do we get to that near-100 percent serviceability? And that’s what this droid really enables us to do. And of course, it has all these other magic benefits too. It makes some of the hardest design problems in this space much, much easier. The safety problem gets much easier by keeping the drone way up high.

Cass: Yeah, how high is Platform 2 hovering when it’s doing its deliveries?

Wyrobek: About 100 meters, so 300-plus feet, right? We’re talking about as high up as a football field is long. So it’s way up there. And it also helps with things like noise, right? We don’t want to live in a future where drones are all around us sounding like swarms of insects. We want drones to make no noise. We want them to just melt into the background. And so it makes that kind of problem much easier as well. And then, of course, the droid has other benefits: for many products, we don’t need any packaging at all. We can just deliver the product right onto a table on your porch. And not just from a cost perspective, but again, from— we’re all familiar with the nightmare of packaging from deliveries we get. Eliminating packaging just has to be our future. And we’re really excited to advance that future.

Cass: From Evan’s Q&A, I know that a lot of effort went into making the droid element look rather adorable. Why was that so important?

Wyrobek: Yeah, I like to describe it as sort of a cross between three things, if you kind of picture this: a miniature little fan boat, right, because it has a big fan on the back, so it looks like a little fan boat, combined with sort of a baby seal, combined with a toaster. It sort of has that look to it. And making it adorable— there’s a bunch of sort of human things that matter, right? I want this to be something that when my grandmother, who’s not tech-savvy, gets these deliveries, it’s approachable. It doesn’t come off as sort of scary. And when you make something cute, not only does it feel approachable, but it also forces you to get the details right so it is approachable, right? The rounded corners, right? This sounds really benign, but with a lot of robots, it turns out if you bump into them, they scratch you. And we want you to be able to bump into this droid and have it be no big deal. And so getting the surfaces right— the surface is made sort of like helmet foam, if you can picture that, right? The kind of thing you wouldn’t be afraid to touch if it touched you. And so getting it both to be something that feels safe and something that actually is safe to be around— those two things just matter a lot. Because again, we’re not designing this for some pilot-scale, low-volume thing. Our customers want this in phenomenal volume. And so we really want this to be something that we’re all comfortable around.

Cass: Yeah, and one thing I want to pull out from that Q&A as well is it was an interesting note, because you mentioned it has three fans, but they’re rather unobtrusive. And the original design, you had two big fans on the sides, which was very great for maneuverability. But you had to get rid of those and come up with a three-fan design. And maybe you can explain why that was so.

Wyrobek: Yeah, that’s a great detail. So the original design, picture it like this: imagine the package in the middle, and then, kind of on either side of the package, two fans. So when you looked at it, it kind of looked like— I don’t know. It kind of looked like the package had big mouse ears or something. And when you looked at it, everybody had the same reaction. You kind of took this big step back. It was like, “Whoa, there’s this big thing coming down into my yard.” And when you’re doing this kind of user testing, we always joke, you don’t need to bring users in if it already makes you take a step back. And this is one of those things where it’s like, “That’s just not good enough, right, to even start with that kind of refined design.” But then we got the sort of profile of it smaller. The way we think about it from a design perspective is we want to deliver a large package, so basically the droid needs to add as small an additional volume around that package as possible. So we spent a lot of time figuring out, “Okay, how do you do that, sort of physically and aesthetically, in a way that also gets that amazing performance?” Because when I say performance, what I’m talking about is we still need it to work when the winds are blowing really hard outside and still deliver precisely. And so it has to have a lot of aero performance to do that and still deliver precisely in essentially all weather conditions.

Cass: So I guess I just want to ask you then is, what kind of weight and volume are you able to deliver with this level of precision?

Wyrobek: Yeah, yeah. So we’ll be working our way up to eight pounds. I say working our way up because, once you launch a product like this, there’s refinement you can do over time on many layers. But eight pounds, which was driven off, again, these health use cases— it does basically 100 percent of what our health partners need to do. And it turns out it’s nearly 100 percent of what we want to do in meal delivery. And even in the goods sector, I’m impressed by the percentage of goods we can deliver. One of our partners we work with, we can deliver over 80 percent of what they have in their big box store. And yeah, it’s wildly exceeding expectations on nearly every axis there. And volume— it’s big. It’s bigger than a shoebox. I don’t have a great— I’m trying to think of a good reference to kind of bring it to life. But it looks like a small cooler, basically, inside. And it can comfortably fit a meal for four, to give you a sense of the amount of food you can fit in there. Yeah.

Cass: So we’ve seen this history of Zipline in rural areas, and now we’re talking about expanding operations in more urban areas, but just how urban? I don’t imagine that we’ll see the zips zooming around, say, the very hemmed-in streets here in Midtown Manhattan. So what level of urban are we talking about?

Wyrobek: Yeah, so the way we talk about it internally in our design process is basically what we call three-story sprawl. Manhattan is the exception: when we think of New York, we’re not talking about Manhattan, but we are talking about most of the rest of New York, right? Like the Bronx, things like that, where we just have this sort of three stories forever. And that’s a lot of the world. Out here in California, that’s most of San Francisco. I think it’s something like 98 percent of San Francisco is that. If you’ve ever been to places like India and stuff like that, the cities are just sort of this three stories going on for a really long way. And that’s what we’re really focused on. And that’s also where we provide that incredible value, because that also matches where the hardest traffic situations and things like that can make any other sort of terrestrial on-demand delivery be phenomenally late.

Cass: Well, no, I live out in Queens, so I agree there aren’t many skyscrapers out there. Although there are quite a few trees and so on, but at the same time, there’s usually some sort of sidewalk availability. So is that kind of what you’re hoping to get into?

Wyrobek: Exactly. So as long as you’ve got a porch with a view of the sky or an alley with a view of the sky, it can be literally just a few feet, we can get in there, make a delivery, and be on our way.

Cass: And so you’ve done this preliminary test with the FAA, the BVLOS test, and so on. How close do you think you are, working with a lot of partners as you are, to really seeing this become routine commercial operations?

Wyrobek: Yeah, yeah. So at relatively limited scale, our operations here in Utah and in Arkansas that are leveraging that FAA approval for beyond-visual-line-of-sight flight operations— that’s been all day, every day now since our approval last year. With Platform 2, we’re really excited. That’s coming later this year. We’re currently in the phase of basically massive-scale testing. So we now have our production hardware and we’re taking it through a massive ground testing campaign. So picture dozens of thermal chambers and vibration chambers and things like that just running, really to both validate that we have the reliability we need and flush out any issues that we might have missed, so we can address that difference between what we call the theoretical reliability and the actual reliability. And that’s running in parallel to a massive flight test campaign. Same idea, right? We’re slowly ramping up the flight volume as we fly into heavier conditions, really to make sure we know the limits of the system and we know its actual reliability in true scaled operations, so we can get the confidence that it’s ready to operate for people.

Cass: So you’ve got Platform 2. What’s kind of next on your technology roadmap for any possible Platform 3?

Wyrobek: Oh, great question. Yeah, I can’t comment on Platform 3 at this time, but. And I will also say, Zipline is pouring our heart into Platform 2 right now. Getting Platform 2 ready— the way I like to talk about this internally is, today we fly about four times the equator of the Earth in our operations on average. And that’s a few thousand flights per day. But the demand we have is for more like millions of flights per day, if not beyond. And so on the log scale, right, we’re halfway there. Three orders of magnitude down, three more zeros to come. And the level of testing, the level of systems engineering, the level of refinement required to do that is a lot. And there are so many systems, from weather forecasting to our onboard autonomy and our fleet management systems. And so to highlight one team, our system test team, run by this really impressive individual named Juan Albanell— this team has taken us from where we were two years ago, where we had shown the concept at a very prototype stage of this delivery experience and done the first-order math on the architecture and things like that, through the iterations in test to actually make sure we had a drone that could fly in all these weather conditions with all the robustness and tolerance required to actually go to this global scale that Platform 2 is targeting.

Cass: Well, that’s fantastic. Well, I think there’s a lot more to talk about to come up in the future, and we look forward to talking with Zipline again. But for today, I’m afraid we’re going to have to leave it there. But it was really great to have you on the show, Keenan. Thank you so much.

Wyrobek: Cool. Absolutely, Stephen. It was a pleasure to speak with you.

Cass: So today on Fixing the Future, we were talking with Zipline’s Keenan Wyrobek about the progress of commercial drone deliveries. For IEEE Spectrum, I’m Stephen Cass, and I hope you’ll join us next time.

Lean Software, Power Electronics, and the Return of Optical Storage



Stephen Cass: Hi. I’m Stephen Cass, a senior editor at IEEE Spectrum. And welcome to Fixing The Future, our bi-weekly podcast that focuses on concrete solutions to hard problems. Before we start, I want to tell you that you can get the latest coverage from some of Spectrum‘s most important beats, including AI, climate change, and robotics, by signing up for one of our free newsletters. Just go to spectrum.ieee.org/newsletters to subscribe.

Today on Fixing The Future, we’re doing something a little different. Normally, we deep dive into exploring one topic, but that does mean that some really interesting things get left out for the podcast simply because they wouldn’t take up a whole episode. So here today to talk about some of those interesting things, I have Spectrum‘s Editor in Chief Harry Goldstein. Hi, boss. Welcome to the show.

Harry Goldstein: Hi there, Stephen. Happy to be here.

Cass: You look thrilled.

Goldstein: I mean, I am thrilled. I’m always excited to talk about Spectrum stories.

Cass: No, we’ve tied you down and made you agree to this, but I think it’ll be fun. So first up, I’d like to talk about this guest post we had from Bert Hubert which seemed to really strike a chord with readers. It was called Why Bloat Is Still Software’s Biggest Vulnerability: A 2024 plea for lean software. Why do you think this one resonated with readers, and why is it so important?

Goldstein: I think it resonated with readers because software is everywhere. It’s ubiquitous. The entire world is essentially run on software. A few days ago, even, there was a good example of the AT&T network going down, likely because of some kind of software misconfiguration. This happens constantly. In fact, it’s kind of like bad weather, the software systems going down. You just come to expect it, and we all live with it. But why we live with it and why we’re forced to live with it is something that people are interested in finding out more about, I guess.

Cass: So I think, in the past, when we thought of giant bloated software, we associated it with large projects: these big government projects, these big airlines, big, big, big projects. And we’ve written about that a lot at Spectrum before, haven’t we?

Goldstein: We certainly have. And Bob Charette, our longtime contributing editor, who is actually the father of lean software, back in the early ‘90s took the Toyota Total Quality Management program and applied it to software development. And so it was pretty interesting to see Hubert’s piece on this more than 30 years later where the problems have just proliferated. And think about your average car these days. It’s approaching a couple hundred million lines of code. A glitch in any of those could cause some kind of safety problem. Recalls are pretty common. I think Toyota had one a few months ago. So the problem is everywhere, and it’s just going to get worse.

Cass: Yeah. One of the things that struck me was that Bert’s making the argument that you don’t actually need an army of programmers now to create bloated software— to get all those millions of lines of code. You could just be writing code to open a garage door. This is a trivial program. But because of the way you’re writing it on frameworks, and those are pulling in dependencies and so on, you’re pulling in just millions of lines of other people’s code. You might not even know you’re doing it. And you kind of don’t notice unless, at the end of the day, you look at your final program file and you’re like, “Oh, why is that megabytes upon megabytes?”— which represents endless lines of source code. Why is that so big? Because this is how you do software. You just pull these things together. You glue stuff. You focus on the business logic because that’s your value add, but you’re not paying attention to this enormous sort of—I don’t know; what would you call it?—invisible dark matter that surrounds your software.
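
For the curious, here is a small illustration of that dark matter: a snippet that prints the tree of transitive dependencies for one installed package, using only the Python standard library (3.8 or later). The name "requests" is just an example; point it at anything in your environment.

```python
import re
from importlib.metadata import PackageNotFoundError, requires

def dependency_tree(name, seen=None, depth=0):
    seen = set() if seen is None else seen
    if name.lower() in seen:
        return
    seen.add(name.lower())
    print("  " * depth + name)
    for req in requires(name) or []:
        if "extra ==" in req:   # skip optional extras
            continue
        # A requirement string starts with the bare distribution name.
        dep = re.split(r"[\s;<>=!~\[\(]", req, maxsplit=1)[0]
        try:
            dependency_tree(dep, seen, depth + 1)
        except PackageNotFoundError:
            pass                # declared but not installed here

dependency_tree("requests")     # raises PackageNotFoundError if absent
```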

Goldstein: Right. It’s kind of like dark matter. Yeah, that’s kind of true. I mean, it actually started making me think. All of these large language models that are being applied to software development. Co-piloting, I guess they call it, right, where the coder is sitting with an AI, trying to write better code. Do you think that might solve the problem or get us closer?

Cass: No, because I think those systems, if you look at them, they reflect modern programming usage. And modern programming usage is often to use the frameworks that are available. It’s not about really getting in and writing something that’s a little bit leaner. Actually, I think the AIs—it’s not their fault—they just do what we do. And we write bloaty software. So I think that’s not going to get any better necessarily with this AI stuff, because the point of lean software is it does take extra time to make, and there are no incentives to make lean software. And Bert talks about, “Maybe we’re going to have to impose some of this legis— l-e-g-i-s-latively.”—I speak good. I editor. You hire wise.—But some of these things are going to have to be mandated through standards and regulations, and specifically through the lens of these cybersecurity requirements and knowing what’s going into your software. And that may help with it all getting a little bit leaner. But I did actually want to— another news story that came up this week was Apple closing down its EV division. And you mentioned Bob Charette there. And he wrote this great thing for us recently about why EVs are one thing and EV infrastructure is an even bigger problem and why EVs are proving to be really quite tough. And maybe the problem— again, it’s a dark matter problem, not so much the car at the center, but this sort of infrastructure— just talk a little bit about Bob’s book, which is, by the way, free to download, and we’ll have the link in the show notes.

Goldstein: Everything you need to know about the EV transition can be yours for the low, low price of free. But, yeah. And I think we’re starting to see-- I mean, even if you mandate things, you’re going to-- you were talking about legislation to regulate software bloat.

Cass: Well, it’s kind of indirect. If you want to have good security, then you’re going to have to do certain things. The White House just came out with this paper, I think yesterday or the day before, saying, “Okay, you need to start using memory-safe languages.” And it’s not quite saying, “You are forbidden from using C, and you must use Rust,” but it’s kind of close to that for certain applications. They exempted certain areas. But you can see, that is the government really coming in on what has often been a very personal decision of programmers, like, “What language do I use?” and, “I know how to use C. I know how to manage my own memory.” The government is kind of saying, “Yeah, we don’t care how great a programmer you think you are. These languages lead to this class of bugs, and we’d really prefer if you used one of these memory-safe languages.” And that’s, I guess, a push into sort of the private lives of programmers that I think we’re going to see more of as time goes by.

Goldstein: Oh, that’s interesting because the—I mean, where I was going with that connection to legislation is that—I think what Bob found in the EV transition is that the knowledge base of the people who are charged with making decisions about regulations is pretty small. They don’t really understand the technology. They certainly don’t understand the interdependencies, which are very similar to the software development processes you were just referring to. It’s very similar to the infrastructure for electric cars, because the idea, ultimately, for electric cars is that you also are revamping your grid to facilitate, whatchamacallit, intermittent renewable energy sources, like wind and solar, because having an electric car that runs off a coal-fired power plant is defeating the purpose, essentially. In fact, Ozzie Zehner wrote an article for us way back in the mid-2010s about the dirty secret behind your electric car being the coal that fuels it. And—

Cass: Oh, that was quite controversial. Yeah. I think maybe because the cover was a car perched at the top of a giant mountain of coal. I think that—

Goldstein: But it’s true. I mean, in China, they have one of the biggest electric car industries in the world, if not the biggest, and one of the biggest markets that has not been totally saturated by personal vehicles, and all their cars are going to be running on coal. And they’re the world’s largest emitter, ahead of the US. But just circling back to the legislative angle and the state of the electric vehicle industry-- well, actually, are we just getting way off topic with the electric vehicles?

Cass: No, it is this idea of interdependence and these very systems that are all coupled in all kinds of ways we don’t expect. And with that EV story— so last time I was home in Ireland, one of the stories was, they had bought this fleet of buses for Dublin, electric double-deckers to replace the existing double-decker buses, to help Ireland hit its carbon targets. So this was an official government goal. We bought the buses, at great expense, and then they can’t charge the buses, because they haven’t done the planning permission to get the charging stations added to the bus depot. Which was just this staggering level of disconnect where, on one hand, the national government is very— “Yes, meeting our target goals. We’re getting these green buses in. Fantastic advance. Very proud of it,” la la la la, and you can’t plug the things in, because the basic work on the ground and dealing with the local government has not been there to put in the charging stations. All of these little disconnects add up. And the bigger, the more complex system you have, the more these things add up, which I think does come back to lean software. Because it’s not so much, “Okay. Yeah, your software is bloaty.” Okay, you don’t win the Turing Award. Boo-hoo. Okay. But the problem is that you are pulling in all of these dependencies that you just do not know, and all these places where things break— or the problem of libraries getting hijacked.

So we have to retain the capacity on some level— and this actually is a personal thing with me, is that I believe in the concept of personal computing. And this was the thing back in the 1970s when personal computers first came out, which the idea was it would— it was very explicitly part of the culture that you would free yourself from the utilities and the centralized systems and you could have a computer on your desk that will let you do stuff, that you didn’t have to go through, at that stage, university administrators and paperwork and you could— it was a personal computer revolution. It was very much front and center. And nowadays it’s kind of come back full circle because now we’re increasingly finding things don’t work if they’re not network connected. So I believe it should be possible to have machines that operate independently, truly personal machines. I believe it should be possible to write software to do even complicated things without relying on network servers or vast downloads or, again, the situation where you want it to run independently, okay, but you’ve got to download these Docker images that are 350 megabytes or something because an entire operating system has to be bundled into them because it is impossible to otherwise replicate the correct environment in which software is running, which also undercuts the whole point of open source software. The point of open source is, if I don’t like something, I can change it. But if it’s so hard for me to change something because I have to replicate the exact environment and toolchains that people on a particular project are using, it really limits the ability of me to come in and maybe— maybe I just want to make some small changes, or I just want to modify something, or I want to pull it into my project. That I have to bring this whole trail of dependencies with me is really tough. Sorry, that’s my rant.

Goldstein: Right. Yeah. Yeah. Actually, one of the things I learned the most about from the Hubert piece was Docker and the idea that you have to put your program in a container that carries with it an entire operating system or whatever. Can you tell me more about containers?

Cass: Yeah. Yeah. Yeah. I mean, you can put whatever you want into a container, and some containers are very small. It’s a self-contained way of distributing things. You can get very lean containers that are just basically the program and its installation. But containers basically replace the old idea of installing software— and that was a problem, because every time you installed a bit of software, it scarred your system in some way. There was always scar tissue, because it made changes. It nestled in. If nothing else, it put files onto your disk. And so over time, one of the problems was that this meant that your computer would accumulate random files. It was very hard to really uninstall something completely, because it would always put little hooks in and register itself in different places in the operating system, again, because now it’s interoperating with a whole bunch of stuff. Programs are not completely standalone. At the very least, they’re talking to an operating system. You want them to talk nicely to other programs in the operating system. And this led to all these kinds of direct-install problems.

And so the idea was, “Oh, we will sandbox this out. We’ll have these little Docker images, basically, to do it.” But that does give you the freedom to build these huge images, which are essentially entire virtual machines. So, again, it relieves you of having to figure out your install and your configuration, which is one thing he was talking about. When you had to write these installers, it did really make you clarify your thinking very sharply on configuration and so on. So again, containers are great. All these cloud technologies, being able to use libraries, being able to automatically pull in dependencies— they’re all terrific in moderation. They all solve very real problems. I don’t want to be a Luddite and go, “We should go back to writing assembler code as God intended.” That’s not what I’m saying. But it does sometimes enable bad habits. It can incentivize bad habits. And you have to really then think very deliberately about how to combat those problems as they pop up.
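
As a small sketch of the lean end of that isolation spectrum, Python’s built-in venv module gives a program its own disposable package set without bundling an operating system the way a heavy container image does. The directory name and the pinned package here are arbitrary examples, not a recommendation for any particular project.

```python
import subprocess
import sys
import venv

# Create an isolated environment with its own interpreter and pip.
venv.create("sandbox", with_pip=True)

# Install one pinned dependency into the sandbox, leaving the host untouched.
pip = "sandbox/Scripts/pip.exe" if sys.platform == "win32" else "sandbox/bin/pip"
subprocess.run([pip, "install", "requests==2.31.0"], check=True)

# Deleting the sandbox directory later removes every trace: no scar tissue.
```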

Goldstein: But from the beginning, right? I mean, it seems to me like you have to commit to a lean methodology at the start of any project. It’s not something that the AI is going to come in and magically solve and slim down at the end.

Cass: No, I agree. Yeah, you have to commit to it, or you have to commit to frameworks where— I’m not going to necessarily use these frameworks. I’m going to go and try and do some of this myself, or I’m going to be very careful in how I look at my frameworks, like what libraries I’m going to use. I’m going to use maybe a library that doesn’t pull in other dependencies. This guy maybe wrote this library which has got 80 percent of what I need it to do, but it doesn’t pull in libraries, unlike the bells and whistles thing which actually does 400 percent of what I need it to do. And maybe I might write that extra 20 percent. And again, it requires skill and it requires time. And it’s like anything else. There are just incentives in the world that really tend to sort of militate against having the time to do that, which, again, is where we start coming back into some of these regulatory regimes where it becomes a compliance requirement. And I think a lot of people listening will know that time when things get done is when things become compliance requirements, and then it’s mandatory. And that has its own set of issues with it in terms of losing a certain amount of flexibility and so on, but that sometimes seems to be the only way to get things done in commercial environments certainly. Not in terms of personal projects, but certainly for commercial environments.

Goldstein: So what are the consequences, in a commercial environment, of bloat, besides— are there things beyond security? Here’s why I’m asking, because the idea that you’re going to legislate lean software into the world as opposed to having it come from the bottom up where people are recognizing the need because it’s costing them something—so what are the commercial costs to bloated software?

Cass: Well, apparently, absolutely none. That really is the issue. Really, none, because software often isn’t maintained. People just really want to get their products out. They want to move very quickly. We see this when it comes to— they like to abandon old software very quickly. Some companies like to abandon old products as soon as the new one comes out. There really is no commercial downside to using this big software, because you can always say, “Well, it’s industry standard. Everybody is doing it.” And because everybody’s doing it, you’re not necessarily losing out to your competitor. We see these massive security breaches. And again, the legislating for lean software is through demanding better security. Because currently, we see these huge security breaches, and there are very minimal consequences. Occasionally, yes, a company screws up so badly that it goes down. But even so, sometimes they’ll reemerge in a different form, or they’ll get gobbled up by someone else.

There really does not, at the moment, seem to be any commercial downside for this big software. There are a lot of weird incentives in the system, and this certainly is one of them, where, actually, the incentive is, “Just use all the frameworks. Bolt everything together. Use JS Electron. Use all the libraries. It doesn’t matter, because the end user is not really going to notice very much if their program is 10 megabytes versus 350 megabytes,” especially now, when people are completely immune to the size of their software. Back in the days when software came on floppy disks, if you had a piece of software that came on 100 floppy disks, that would be considered impractical. But nowadays, people are downloading gigabytes of data just to watch a movie or something like that. If a program is 1 gigabyte versus 100 megabytes, they don’t really notice. I mean, about the only time you see people complaining about the actual storage size of software anymore is with a really big video game— people going, “Well, it took me three hours to download the 70 gigabytes for this AAA game that I wanted to play.” But everybody else, they just don’t care. Yeah, it’s just invisible to them now.

Goldstein: And that’s a good thing. I think Charles Choi had a piece for us on-- we’ll have endless storage, right, on disks, apparently.

Cass: Oh, I love this story because it’s another story of a technology that looks like it’s headed off into the sunset, “We’ll see you in the museum.” And this is optical disk technology. I love this story and the idea that you can— we had laser disks. We had CDs. We had CD-ROMs. We had DVD. We had Blu-ray. And Blu-ray really seemed to be in many ways the end of the line for optical disks, that after that, we’re just going to use solid-state storage devices, and we’ll store all our data in those tiny little memory cells. And now we have these researchers coming back. And now my brain has frozen for a second on where they’re from. I think they’re from Shanghai. Is it Shanghai Institute?

Goldstein: Yes, I think so.

Cass: Yes, Shanghai. There we go. There we go. Very nice subtle check of the website there. And it might let us squeeze a data center into something the size of a room. And this is this optical disk technology where you can make a disk that’s about the size of a regular DVD, and you can squeeze just an enormous amount of data onto it. I think he’s talking about petabits in a—

Goldstein: Yeah, like 1.6 petabits on--

Cass: Petabits on this optical surface. And the magic key is, as always, a new material. I mean, we do love new materials because they’re always the wellspring from which so much springs. And we have at Spectrum many times chased down materials that have not fulfilled necessarily their promise. We have a long history— and sometimes materials go away and they come back, like—

Goldstein: They come back, like graphene. It’s gone away. It’s come back.

Cass: —graphene and stuff like this. We’re always looking for the new magic material. But this new magic material, which has this—

Goldstein: Oh, yeah. Oh, I looked this one up, Stephen.

Cass: What is it? What is it? What is it? It is called--

Goldstein: Actually, our story did not even bother to include the translation because it’s so botched. But it is A-I-E, dash, D-D-P-R, AIE-DDPR or aggregation-induced emission dye-doped photoresist.

Cass: Okay. Well, let’s just call it magic new dye-doped photoresist. And the point about this is that this material works at basically four wavelengths. And why do you want a material that responds at four different wavelengths? Because of the limit on optical technologies— and I’m also stretching here into the boundaries on either side of optical. The standard rule is you can’t really do anything that’s smaller than the wavelength of the light you’re using to read or write. So the wavelength of your laser sets the density of data on your disk. And what these clever clogs have done is they’ve worked out that by using basically two lasers at once, you can, in a very clever way, write a blob that is smaller than the wavelength of light, and you can do it in multiple layers. Usually, your standard Blu-ray disks are very limited in the number of layers they have on them— like CDs originally: one layer.

So you have multiple layers on this disk that you can write to, and you can write at resolutions that you wouldn’t think you could do just going from your high school physics or whatever. So you write it using these two lasers of two wavelengths, and then you read it back using another two lasers at two different wavelengths. And this all localizes the effect and makes it work. And suddenly, as I say, you can squeeze racks and racks and racks of solid-state storage down to hopefully something that is very small. And what’s also interesting is that they’re actually closer to commercialization than you normally see with these early material stories. And they also think you could write one of these disks in six minutes, which is pretty impressive. As someone who has sat watching the progress bars on a lot of DVD-ROM burns over the years, six minutes to burn these—and that’s probably for commercial mass production—is still pretty impressive. And so you could solve this problem of some of these large data transfers, where currently you do have to ship servers from one side of the world to the other because it actually is too slow to copy things over the internet. And so this would increase the bandwidth of the global sneakernet, or station wagon net, quite dramatically as well.
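
The rule of thumb Cass describes is the classic diffraction limit. As a rough back-of-the-envelope using familiar Blu-ray-class numbers (a 405-nanometer laser and a 0.85 numerical aperture; these are illustrative values, not figures from the Shanghai work), the smallest feature you can conventionally write is about

\[ d \approx \frac{\lambda}{2\,\mathrm{NA}} = \frac{405\ \text{nm}}{2 \times 0.85} \approx 238\ \text{nm}, \]

which is why packing in more data normally means shrinking the wavelength, and why a two-laser scheme that writes below this limit matters.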

Goldstein: Yeah. They are super interested in seeing them deployed in big data centers. And in order to do that, they still have to get the writing speed up and the energy consumption down. So the real engineering is just beginning for this. Well, speaking of new materials, there’s a new use for aluminum nitride, according to our colleague Glenn Zorpette, who wrote about the use of the material in power transistors. The material has a much wider bandgap, and apparently, if you can properly dope it, it will be able to handle much higher voltages. So what does this mean for the grid, Stephen?

Cass: Yeah. So I actually find power electronics really fascinating, because most of the history of transistors, right, is about making them use ever smaller amounts of electricity—5-volt logic used to be pretty common; now 3.3 volts is pretty common, and even 1.1 volts is pretty common—and really sipping microamps of power through these circuits. And power electronics kind of gets you back to the origins of being an electrical engineer, when you’re really talking about power and energy, and you are humping around thousands of volts and huge currents. And power electronics is an attempt to bring some of that smartness that transistors give you into these much higher voltages. And we’ve seen some of this with, say, gallium nitride, which is a material we had talked about in Spectrum for years—speaking of materials that have been floating around for years—and then really, in the last five years or so, you’ve seen it be a real commercial success. So all those wall warts we have have gotten dramatically smaller and better, which is why you can have a USB-C charger system where you can drive your laptop and a bunch of ancillary peripherals all off one little wall wart without worrying about it bringing down the house, because it’s just so efficient and so small. And most of those now are these new gallium-nitride-based devices, which is one example where a material really is making some progress.

And so aluminum nitride is kind of another step along that path, to be able to handle even higher voltages and bigger currents. So we’re not yet up to the level where you could put these directly on massive high-voltage transmission lines, but the tide keeps rising on where you can put this kind of electronics into your systems. First off, it means more efficiency. As I say, these power adapters that convert AC to DC get more efficient. Your power supplies in your computer get more efficient, and your power supplies in your data center—we’ve talked about how much power data centers use today—get more efficient. And it bundles up. And the whole point of this is that you do want a grid that is as smart as possible. You need something that will be able to handle very intermittent power sources, fluctuating power sources. The current grid is really built around very, very stable power supplies, very constant power supplies, very stable frequency timings. So the frequency of the grid is the key to stability. Everything’s got to be on that 60 hertz in the US, 50 hertz in other places. Every power station has got to be synchronized very precisely with the others. So stability is a problem, and being able to handle fluctuations quickly is the key both to grid stability and to being able to handle some of these intermittent sources where the power varies as the wind blows stronger or weaker, as the day turns, as clouds move in front of your solar farm. So it’s very exciting from that point of view to see these very esoteric technologies—we’re talking about things like band gaps and how you stick the right doping molecule in the matrix—bubble up into these very-large-scale impacts when we’re talking about the future of electrical engineering and that old-school keeping-the-lights-on-and-the-motors-churning kind of power and energy.
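
The “bundles up” point is straightforward arithmetic: conversion efficiencies multiply across every stage between generation and load. As an illustrative example (the stage count and per-stage efficiency here are invented, not measured grid figures):

\[ \eta_{\text{total}} = \prod_{i=1}^{n} \eta_i, \qquad 0.95^{5} \approx 0.77, \]

so five stages that each lose 5 percent throw away nearly a quarter of the energy, which is why even a few points of improvement per converter matters at grid scale.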

Goldstein: Right. And the electrification of everything is just going to put bigger demands on the grid, like you were saying, for alternative energy sources. “Alternative.” They’re all price competitive now, the solar and wind. But--

Cass: Yeah, not just at the generate— this idea that you have distributed power and power can be generated locally, and also being able to switch power. So you have these smart transformers so that if you are generating surplus power on your solar panels, you can send that to maybe your neighbor next door who’s charging their electric vehicle without at all having to be mediated by going up to the power company. Maybe your local transformer is making some of these local grid scale balancing decisions that are much closer to where the power is being used.

Goldstein: Oh, yeah. Stephen, that reminds me of this other piece we had this week, actually, on utilities and profit motive on their part hampering US grid expansion. It’s by a Harvard scholar named Ari Peskoe, and his first line is, “The United States is not building enough transmission lines to connect regional power networks. The deficit is driving up electricity prices, reducing grid reliability, and hobbling renewable-energy deployment.” And basically, they’re just saying that it’s not—what he does a good job explaining is not only how these new projects might impact their bottom lines but also all of the industry alliances that they’ve established over the years that become these embedded interests that need to be disrupted.

Cass: Yeah, the truth is there is a list of things we could do. Not magic things; pretty obvious things that would make the US grid better. Even if you don't care much about renewables, you probably do care about grid resilience and reliability and being able to move power around. The US grid is not great. It is creaky. We know there are things that could be done, and as a byproduct of doing those things, you would also make the grid much more renewable-friendly. But there are political problems. Depending on which administration is in power, there is more or less of an appetite to deal with some of these interests. And then, yeah, these utilities often have incentives to keep things the way they are. They don't necessarily want a grid where it's easier to move cheaper or greener electricity from one place to a different market. Everybody loves a captive monopoly market they can sell into; that's wonderful if you can get it. And then there are many places with anti-competition rules. So it's really difficult to break down those barriers.

Goldstein: It is. And if you're in Texas in a bad winter and the grid goes down and you need power from outside, but you're an island unto yourself and you can't import that power, it becomes something that disrupts people's lives, right? People pay attention to it during a disaster, but we have a slow-rolling disaster called climate change, and if we don't start overturning some of the barriers to electrification and alternative energy sources, we're kind of digging our own grave.

Cass: It is very tricky, because we do then get into these issues where you build these transmission lines, and there are questions about who ends up paying for them, whether they get built over people's lands, and what the local impacts are. And it's hard sometimes to tell: is this a group that genuinely feels there is a justice gap, that they're being asked to pay for the sins of higher-carbon producers, or is this astroturfing? Sometimes it's very difficult to tell whether these organizations are being underwritten by people who are invested in the status quo, and it does become a knotty problem. And as things get more and more difficult, I think we are going to be forced into making some difficult choices. I'm not quite sure how that's going to play out, but I do know that we will keep tracking it as best we can. And I think, yeah, you just have to come back and see how we keep covering the grid in the pages of Spectrum.

Goldstein: Excellent. Well—

Cass: And so that’s probably a good point where— I think we’re going to have to wrap this round up here. But thank you so much for coming on the show.

Goldstein: Excellent. Thank you, Stephen. Much fun.

Cass: So today on Fixing The Future, I was talking with Spectrum‘s Editor in Chief Harry Goldstein, and we talked about electric vehicles, we talked about software bloat, and we talked about new materials. I’m Stephen Cass, and I hope you join us next time.

Let Robots Do Your Lab Work



Dina Genkina: Hi. I’m Dina Genkina for IEEE Spectrum‘s Fixing the Future. Before we start, I want to tell you that you can get the latest coverage from some of Spectrum’s most important beats, including AI, climate change, and robotics, by signing up for one of our free newsletters. Just go to spectrum.ieee.org/newsletters to subscribe. Today, my guest is Dr. Benji Maruyama, a Principal Materials Research Engineer at the Air Force Research Laboratory, or AFRL. Dr. Maruyama is a materials scientist, and his research focuses on carbon nanotubes and making research go faster. But he’s also a man with a dream, a dream of a world where science isn’t something done by a select few locked away in an ivory tower, but something most people can participate in. He hopes to start what he calls the billion-scientist movement by building AI-enabled research robots that are accessible to all. Benji, thank you for coming on the show.

Benji Maruyama: Thanks, Dina. Great to be with you. I appreciate the invitation.

Genkina: Yeah. So let’s set the scene a little bit for our listeners. So you advocate for this billion scientist movement. If everything works amazingly, what would this look like? Paint us a picture of how AI will help us get there.

Maruyama: Right, great. Thanks. Yeah. So one of the things, as you set the scene there, is that right now, to be a scientist, most people need access to a big lab with very expensive equipment. So think top universities, government labs, industry folks: lots of equipment, and it's like a million dollars, right, to get one of those instruments. And frankly, just not that many of us have access to those kinds of instruments. But at the same time, there are probably a lot of us who want to do science, right? So how do we make it so that anyone who wants to do science can try, can have access to instruments, so that they can contribute? That's the basic idea behind citizen science, or the democratization of science, so that everyone can do it. And one way to think of it is what happened with 3D printing. It used to be that in order to make something, you had to have access to a machine shop, or maybe get fancy tools and dies that could cost tens of thousands of dollars a pop. Or if you wanted to do electronics, you had to have access to very expensive equipment or services. But when 3D printers came along and became very inexpensive, all of a sudden anyone with access to a 3D printer, maybe in a school or a library or a makerspace, could print something out. And it could be something fun, like a game piece, but it could also be something that got you to an invention, something that was useful to the community, either a prototype or an actual working device.

And so really, 3D printing democratized manufacturing, right? It made it so that many more of us could do things that before only a select few could. And so that's where we're trying to go with science now: instead of only those of us who have access to big labs, we're building research robots. And when I say we, we're doing it, but now there are a lot of others who are doing it as well, and I'll get into that. But the example that we have is that we took a 3D printer that you can buy off the internet for less than $300, plus a couple of extra parts: a webcam, a Raspberry Pi board, and a tripod. Really, only four components, and you can get them all for $300. Load them with open-source software that was developed by AFIT, the Air Force Institute of Technology, by Burt Peterson and Greg Captain [inaudible]. We worked together to build this fully autonomous 3D-printing robot that taught itself how to print to better than the manufacturer's specifications. So that was a really fun advance for us, and now we're trying to take that same idea and broaden it. So I'll turn it back over to you.

Genkina: Yeah, okay. So maybe let's talk a little bit about this automated research robot that you've made. Right now it works with a 3D printer, but is the big picture that one day it's going to give people access to that million-dollar lab? What would that look like?

Maruyama: Right, so there are different models out there. We just did a workshop at the University of— sorry, North Carolina State University about that very problem. So there are two models. One is low-cost scientific tools like the 3D printer. There are a couple of different chemistry robots, one out of the University of Maryland and NIST, and one out of the University of Washington, that are in the sort of $300-to-$1,000 range, which makes them accessible. The other part is the user-facility model. In the US, the Department of Energy national labs have many user facilities where you can apply to get time on very expensive instruments; now we're talking tens of millions of dollars. For example, Brookhaven has a synchrotron light source where you can sign up, it doesn't cost you any money to use the facility, and you can get days on that facility. So that's already there, but the advance now is that by using autonomous, closed-loop experimentation, the work that you do will be much faster and much more productive. So, for example, on ARES, our Autonomous Research System at AFRL, we were able to do experiments so fast that a professor who came into my lab just took me aside and said, "Hey, Benji, in a week's worth of time, I did a dissertation's worth of research." So maybe five years' worth of research in a week. Imagine if you keep doing that week after week after week, how fast research goes. So it's very exciting.

Genkina: Yeah, so tell us a little bit about how that works. So what’s this system that has sped up five years of research into a week and made graduate students obsolete? Not yet, not yet. How does that work? Is that the 3D printer system or is that a—

Maruyama: So we started with our system to grow carbon nanotubes. And I'll say, actually, when we first thought about it, your comment about graduate students being absolute— obsolete, sorry, is interesting and important. Because when we first built our system that worked 100 times faster than normal, I thought that might be the case. We called it sort of "graduate student out of the loop." But when I started talking with people who specialize in autonomy, it's actually the opposite, right? It's actually empowering graduate students to go faster and also to do the work that they want to do. And so just to digress a little bit: if you think about farmers before the Industrial Revolution, what were they doing? They were plowing fields with oxen and beasts of burden and hand plows. And it was hard work. And now, of course, you wouldn't ask a farmer today to give up their tractor or their combine harvester, right? They would say, of course not. So very soon, we expect it to be the same for researchers: if you asked a graduate student to give up their autonomous research robot five years from now, they'll say, "Are you crazy? This is how I get my work done."

But for our original ARES system, it worked on the synthesis of carbon nanotubes. So what we're doing is trying to take this system that's been pretty well studied, but that we haven't figured out how to make at scale, at hundreds of millions of tons per year, sort of like polyethylene production. And part of that is because it's slow, right? One experiment takes a day. But also because there are just so many different ways to do a reaction: so many different combinations of temperature and pressure, a dozen different gases, and half the periodic table as far as the catalyst goes. It's just too much to brute-force your way through. So even when we went to 100 experiments a day instead of one experiment a day, that combinatorial space vastly overwhelmed our ability to cover it, even with many research robots or many graduate students. So the idea of having artificial intelligence algorithms drive the research is key. That ability to do an experiment, see what happened, analyze it, iterate, and constantly choose the optimal next experiment to do is where ARES really shines. And so that's what we did: ARES taught itself how to grow carbon nanotubes at controlled rates, and we were the first ones to do that for materials science, in our 2016 publication.

Genkina: That's very exciting. So maybe we can peer under the hood of this AI model a little bit. How does the magic work? How does it pick the next best point to try, and why is it better than what you could do as a graduate student or researcher?

Maruyama: Yeah, and so I think it's interesting, right? In science, a lot of times we're taught to hold everything constant, change one variable at a time, search over that entire space, see what happened, and then go back and try something else. So we reduce it to one variable at a time; it's a reductionist approach. And that's worked really well, but a lot of the problems that we want to go after are simply too complex for that reductionist approach. And so the benefit of being able to use artificial intelligence is that high dimensionality is no problem. Searching over a very complex, high-dimensional parameter space with tens of dimensions, which is overwhelming to humans, is basically bread and butter for AI. The other part of it is the iterative part. The beauty of doing autonomous experimentation is that you're constantly iterating, constantly learning from what just happened. You might also say, well, not only do I know what happened experimentally, but I have other sources of prior knowledge. So, for example, the ideal gas law says this should happen, or Gibbs' phase rule might say this can happen or this can't happen. So you can use that prior knowledge to say, "Okay, I'm not going to do those experiments, because that's not going to work. I'm going to try here, because this has the best chance of working."

And within that, there are many different machine learning or artificial intelligence algorithms. Bayesian optimization is a popular one to help you choose which experiment to do next. There's also new AI that people are trying to develop to get better search.
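
[Editor's note: To make the closed loop concrete, here is a minimal Python sketch of Bayesian-optimization-driven experimentation. This is not ARES's code; the stub experiment function, the two normalized process parameters, the scikit-learn Gaussian-process surrogate, and the random candidate search are all illustrative assumptions.]

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def run_experiment(x):
    # Stand-in for a real measurement, e.g. nanotube growth rate as a
    # function of two normalized process knobs (temperature, pressure).
    return -((x[0] - 0.6) ** 2 + (x[1] - 0.3) ** 2) + 0.01 * np.random.randn()

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(5, 2))                 # a few seed experiments
y = np.array([run_experiment(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(20):                                # experiment budget
    gp.fit(X, y)                                   # learn from all data so far
    candidates = rng.uniform(0, 1, size=(1000, 2)) # random candidate conditions
    mu, sigma = gp.predict(candidates, return_std=True)
    imp = mu - y.max()                             # predicted improvement
    z = imp / np.maximum(sigma, 1e-9)
    ei = imp * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement
    x_next = candidates[np.argmax(ei)]             # most promising experiment
    X = np.vstack([X, x_next])
    y = np.append(y, run_experiment(x_next))

print("best conditions found:", X[np.argmax(y)], "objective:", y.max())
```

[Prior knowledge, such as conditions ruled out by the ideal gas law or Gibbs' phase rule, would simply filter the candidate set before the expected-improvement step.]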

Genkina: Cool. And so the software part of this autonomous robot is available for anyone to download, which is also really exciting. So what would someone need to do to be able to use that? Do they need to get a 3D printer and a Raspberry Pi and set it up? And what would they be able to do with it? Can they just build carbon nanotubes or can they do more stuff?

Maruyama: Right. So we built ARES OS, which is our open-source software, and we'll make sure to get you the GitHub link so that anyone can download it. The idea behind ARES OS is that it provides a software framework for anyone to build their own autonomous research robot. The 3D-printing example will be out there soon, but it's a starting point. Of course, if you want to build your own new kind of robot, you still have to do the software development to link the ARES framework, the core, if you will, to your particular hardware, maybe your particular camera or 3D printer, or pipetting robot, or spectrometer, whatever that is. We have examples out there, and we're hoping to get to a point where it becomes much more user-friendly. Currently it's programmed in C#, but to make it more accessible, we'd like to set it up with direct Python connections, so that if you can write Python, you can probably have good success in building your own research robot.
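
[Editor's note: ARES OS itself is written in C#. Purely as an illustration of the adapter pattern Maruyama describes, here is a hypothetical Python sketch of how a framework core can stay hardware-agnostic; none of these class or method names come from ARES OS.]

```python
from abc import ABC, abstractmethod

class Instrument(ABC):
    """Anything that can run one experiment and return a score."""

    @abstractmethod
    def run(self, params: dict) -> float: ...

class PrinterInstrument(Instrument):
    """Example driver: a 3D printer plus a webcam that scores print quality."""

    def run(self, params: dict) -> float:
        # 1. send params (speed, temperature, ...) to the printer firmware
        # 2. capture the finished print with the webcam
        # 3. return a quality score from image analysis
        raise NotImplementedError("wire this up to your own hardware")

def campaign(instrument: Instrument, planner, budget: int) -> list:
    """Generic closed loop: plan -> run -> learn, repeated `budget` times."""
    history = []
    for _ in range(budget):
        params = planner.suggest(history)  # e.g. the Bayesian optimizer above
        score = instrument.run(params)
        history.append((params, score))
    return history
```

[Swapping in a pipetting robot or a spectrometer then means writing one new Instrument subclass, while the planner code stays untouched.]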

Genkina: Cool. And you're also working on an educational version of this, I understand. So what's the status of that, and what's different about that version?

Maruyama: Yeah, right. So the educational version is going to be sort of a combination of hardware and software. What we're starting with is a low-cost 3D printer, and we're collaborating now with the University at Buffalo's Materials Design and Innovation Department. We're hoping to build up a robot based on a 3D printer. We'll see how it goes; it's still evolving. But for example, it could be based on this very inexpensive $200 Ender 3D printer. There's another printer out there that's based on the University of Washington's Jubilee printer, and that's a very exciting development as well. Professors Lilo Pozzo and Nadya Peek at the University of Washington built the Jubilee robot with that idea of accessibility in mind. And so combining our ARES OS software with their Jubilee hardware is something that I'm very excited about and hope to be able to move forward on.

Genkina: What’s this Jubilee 3D printer? How is it different from a regular 3D printer?

Maruyama: It's very open source, which not all 3D printers are, and it's based on a gantry system with interchangeable heads. So, for example, you can get not just a 3D-printing head but other heads that might do things like indentation, to see how stiff something is, or maybe a camera that can move around. It's that flexibility of being able to swap heads dynamically that I think makes it super useful. For the software, we have to have a good, accessible, user-friendly graphical user interface, a GUI. That takes time and effort, so we want to work on that. But again, that's just the hardware and software. Really, to make ARES a good educational platform, we need to make it so that a teacher who's interested has the lowest activation barrier possible. We want him or her to be able to pull a lesson plan off the internet, have supporting YouTube videos, and have a fully developed curriculum that's mapped against state standards.

Because, right now, let's face it: teachers are already overwhelmed with all that they have to do, and putting something like this into their curriculum can be a lot of work, especially when you have to think, "I'm going to take all this time, but I also have to meet all of my teaching standards, all the state curriculum standards." So if we build that out so that it's a matter of just looking at the curriculum and checking off the boxes of which state standards it maps to, that makes it that much easier for the teacher to teach.

Genkina: Great. And what do you think is the timeline? Do you expect to be able to do this sometime in the coming year?

Maruyama: That's right. These things always take longer than hoped for and expected, but we're hoping to do it within this calendar year, and we're very excited to get it going. And I would say to your listeners: if you're interested in working together, please let me know. We're very excited about trying to involve as many people as we can.

Genkina: Great. Okay, so you have the educational version, and you have the more research-geared version, and you're working on making the educational version more accessible. Is there something with the research version that you're working on next? How are you hoping to upgrade it? Or is there something you're using it for right now that you're excited about?

Maruyama: There are a number of things. We're very excited about the possibility of carbon nanotubes being produced at very large scale. Right now, people may remember carbon nanotubes as that great material that sort of never made it and was very overhyped. But there's a core group of us who are still working on it because of the important promise of the material. It's super strong, stiff, lightweight, and electrically conductive, and much better than silicon as a digital-electronics compute material. All of those great things, except we're not making it at large enough scale. It's actually used pretty significantly in lithium-ion batteries; that's an important application. But other than that, it's sort of like, where's my flying car? It never panned out. But there is, as I said, a group of us working to produce carbon nanotubes at much larger scale. Large scale for nanotubes now is in the kilogram or ton range, but what we need to get to is production rates of hundreds of millions of tons per year. And why is that? Well, there's a great effort that came out of ARPA-E, the Department of Energy's Advanced Research Projects Agency, where the E is for Energy.

So they funded a collaboration between Shell Oil and Rice University to pyrolyze methane, natural gas, into hydrogen for the hydrogen economy. So now you get a clean-burning fuel, plus carbon. And that's instead of burning the carbon to CO2, which is what we do now: we just take natural gas, feed it through a turbine, and generate electric power, and that, by the way, generates so much CO2 that it's driving global climate change. So if we can do that pyrolysis at scale, at hundreds of millions of tons per year, it's literally a save-the-world proposition, meaning we can reduce global CO2 emissions by 20 to 40 percent. It's a huge undertaking, right? That's a big problem to tackle, starting with the science: we still don't have the science to efficiently and effectively make carbon nanotubes at that scale. And then, of course, we have to take the material and turn it into useful products. Batteries are the first example, but think about replacing copper in electrical wire, or replacing structural materials like steel and aluminum, all those kinds of applications. But we can't even get to that kind of development, because we haven't been able to make carbon nanotubes at sufficient scale.
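
[Editor's note: The reaction described is methane pyrolysis, CH4 → C(solid) + 2 H2. Since carbon accounts for 12 of methane's 16 grams per mole, roughly 75 percent of the methane's mass ends up as solid carbon and 25 percent as hydrogen.]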

So I would say that's something I'm working on now that I'm very excited about, but it's going to take some good developments in our research robots, and some very smart people, to get us there.

Genkina: Yeah, it seems so counterintuitive that making everything out of carbon is good for lowering carbon emissions, but I guess that's the trick.

Maruyama: Yeah, it is interesting, right? People talk about carbon emissions, but really, the molecule that's causing global warming is carbon dioxide, CO2, which you get from burning carbon. And so if you take that methane and pyrolyze it into carbon nanotubes, that carbon is now sequestered. It's not going off as CO2; it's staying in the solid state. And not only is it not going up into the atmosphere, but now we're using it to replace steel, for example. And by the way, steel, aluminum, and copper production all emit lots of CO2; they're energy-intensive materials to produce. So it's kind of ironic.

Genkina: Okay, and are there any other research robots that you’re excited about that you think are also contributing to this democratization of science process?

Maruyama: Yeah, so we talked about Jubilee. There's also the NIST robot, which is from Professor Ichiro Takeuchi at Maryland and Gilad Kusne at NIST, the National Institute of Standards and Technology. Theirs is fun too: it's an actual chemistry robot built out of LEGO bricks, based on a LEGO robotics platform. So I think that's fun as well. And you can imagine, just like we have LEGO robotics competitions, we could have autonomous-research-robot competitions, where we try to do research through these robots, or competitions where everybody starts with the same robot, just like in LEGO robotics. So that's fun as well. But I would say there's a growing number of people doing these kinds of things: first of all, low-cost, accessible science, but in particular low-cost autonomous experimentation.

Genkina: So how far are we from a world where a high school student has an idea and they can just go and carry it out on some autonomous research system at some high-end lab?

Maruyama: That's a really good question. I hope that in 5 to 10 years it becomes reasonably commonplace. But it's still going to take some significant investment to get this going, and we'll see how that goes. I don't think there are any scientific impediments to getting this done, but there is a significant amount of engineering to be done. And sometimes we hear, "Oh, it's just engineering." The engineering is a significant problem, and it's work to make some of these things accessible and low-cost. But there are lots of great efforts. There are people who have used compact discs to make spectrometers. There are lots of good examples of citizen science out there. But at this point, I think it's going to take investment in software and in hardware to make it accessible, and then, importantly, getting students up to speed on what AI is, how it works, and how it can help them. So I think it's actually really important. Again, that's the democratization of science: if we can make it available and accessible to everyone, that helps everyone contribute to science. And I do believe there are important contributions to be made by ordinary citizens, by people who aren't, you know, PhDs working in a lab.

And I think there's a lot of science out there to be done. If you ask working scientists, almost no one has run out of ideas or things they want to work on. There are many more scientific problems to work on than we have the time or the funding to work on. And so if we make science cheaper to do, then all of a sudden more people can do science, and those questions start to be resolved. So I think that's super important. And now, instead of just those of us who work in big labs, you have millions, tens of millions, up to a billion people (that's the billion-scientist idea) contributing to the scientific community. And that, to me, is so powerful: that many more of us can contribute than just the few of us who do it right now.

Genkina: Okay, that's a great place to end on, I think. So, today we spoke to Dr. Benji Maruyama, a materials scientist at AFRL, about his efforts to democratize scientific discovery through automated research robots. For IEEE Spectrum, I'm Dina Genkina, and I hope you'll join us next time on Fixing the Future.

Figuring Out Semiconductor Manufacturing's Climate Footprint



Samuel K. Moore: Hi. I’m Samuel K. Moore for IEEE Spectrum‘s Fixing the Future podcast. Before we start, I want to tell you that you can get the latest coverage from some of Spectrum‘s most important beats, including AI, climate change, and robotics, by signing up for one of our free newsletters. Just go to spectrum.ieee.org/newsletters to subscribe. The semiconductor industry is in the midst of a major expansion driven by the seemingly insatiable demands of AI, the addition of more intelligence in transportation, and national security concerns, among many other things. Governments and the industry itself are starting to worry what this expansion might mean for chip-making’s carbon footprint and its sustainability generally. Can we make everything in our world smarter without worsening climate change? I’m here with someone who’s helping figure out the answer. Lizzie Boakes is a life cycle analyst in the Sustainable Semiconductor Technologies and Systems Program at IMEC, the Belgium-based nanotech research organization. Welcome, Lizzie.

Lizzie Boakes: Hello.

Moore: Thanks very much for coming to talk with us.

Boakes: You’re welcome. Pleasure to be here.

Moore: So let’s start with, just how big is the carbon footprint of the semiconductor industry? And is it really big enough for us to worry about?

Boakes: Yeah. So quantifying the carbon footprint of the semiconductor industry is not an easy task at all, and that's because semiconductors are now embedded in so many industries. The most obvious is the ICT industry, which is estimated to account for approximately 3 percent of global emissions. However, semiconductors can also be found in so many other industries, and their embedded nature is increasing dramatically. They're embedded in automotive, in healthcare applications, as far afield as aerospace and defense applications too. So the expansion and adoption of semiconductors in all of these different industries just makes the footprint very hard to quantify.

And the global impact of semiconductor chip manufacturing itself is expected to increase as well, because we need more and more of these chips. The global chip market is projected to grow at a 7 percent compound annual rate in the coming years; at that rate, the market roughly doubles in a decade. And bear in mind that the manufacturing of the IC chips themselves often accounts for the largest share of the life-cycle climate impact, especially for consumer electronics. So this increase in demand for chips, and for the manufacturing of those chips, will significantly increase the climate impact of the semiconductor industry. It's really crucial that we focus on this, identify the challenges, and work toward reducing the impact if we're to achieve any of our ambitions of reaching net zero before 2050.

Moore: Okay. So the way you looked at this, it was sort of a— it was cradle-to-gate life cycle. Can you sort of explain what that entails, what that really means?

Boakes: Yeah. So cradle-to-gate here means that we quantify the climate impacts not only of the IC-manufacturing processes that occur inside the semiconductor fab, but also the embedded impact of all of the energy and material flows entering the fab that are necessary for it to operate. In other words, we try to quantify the climate impact of the value chain upstream of the fab itself, and that's where the cradle begins: the extraction of all of the materials and energy sources that you need, for instance, the extraction of coal for electricity production. That's the cradle. And the gate refers to the point where you stop the analysis, where you stop quantifying the impact. In our case, that is the end of the processing of the silicon wafer for a specific technology node.

Moore: Okay. So it stops basically when you’ve got the die, but it hasn’t been packaged and put in a computer.

Boakes: Exactly.

Moore: And so why do you feel you have to look at all the upstream stuff that a chip-maker may not really have any control over, like coal and such?

Boakes: So there is a big need to analyze your impact through what are called, in the Greenhouse Gas Protocol, the three scopes. Scope one is your direct emissions. Scope two is the emissions related to the production of the electricity you've consumed in your operation. And scope three is basically everything else, including all of your upstream materials. It's obviously the largest scope, because it's everything other than what you're doing yourself. And I think it's necessary to coordinate with your supply chain so that you make sure you're choosing the most sustainable solution you can. You have power in your purchasing; you have power over how you choose your supply chain, and if you can steer it in a way that reduces emissions, then that should be done. Often, scope three is the largest proportion of the total impact, A, because it's the biggest category, and B, because there are a lot of materials and things coming in. So yeah, it's necessary to look up there and see how you can best reduce your emissions, and you do have influence through what you choose to purchase in the end.
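
[Editor's note: A toy illustration of how the three scopes add up for a hypothetical fab; all numbers here are invented for the example and are not real fab data.]

```python
scope1 = 40_000             # t CO2e: direct process-gas emissions after abatement
electricity_mwh = 500_000   # electricity consumed by the fab, MWh
grid_intensity = 0.4        # t CO2e per MWh; falls as the grid decarbonizes
scope3 = {                  # embedded impact of purchased inputs, t CO2e
    "raw silicon wafers": 30_000,
    "wet chemicals": 25_000,
    "other materials": 20_000,
}

scope2 = electricity_mwh * grid_intensity
total = scope1 + scope2 + sum(scope3.values())
for name, value in [("scope 1", scope1), ("scope 2", scope2),
                    ("scope 3", sum(scope3.values()))]:
    print(f"{name}: {value:,.0f} t CO2e ({value / total:.0%} of total)")
```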

Moore: All right. So in your analysis, what did you see as the biggest contributors to a chip fab's carbon output?

Boakes: So without effective abatement, the process gases that are released as direct emissions would really dominate the total emissions of IC-chip manufacturing. And this is because the process gases consumed in IC manufacturing often have a very high GWP, or global-warming-potential, value. So if you do not abate them, if you do not destroy them in an abatement system, their contribution to global warming is very large. However, you can drastically reduce those emissions by deploying effective abatement on the specific high-impact process areas. And if you do that, then the distribution shifts.

So then you would see the contribution of the direct emissions shrink, because you've reduced your direct-emission output, and the next-biggest contributor would be electrical energy: the scope-two emissions related to the production of the electricity you're consuming. And as you can imagine, IC manufacturing is very energy-intensive. There's a lot of electricity coming in, so the next step is to decarbonize your electricity supply, to reduce the carbon intensity of the electricity you're purchasing.

And then once you do that step, you would see the distribution change again, and your scope three, your upstream materials, would then be the largest contributor to the total impact. The materials we've identified as the largest contributors are, for instance, the silicon wafers themselves, the raw wafers before you start processing, as well as wet chemicals. These are chemicals that are very specific to the semiconductor industry. There's a lot of consumption there, and they have a high embedded impact.

Moore: Okay. So if we could start with— unpack a few of those. First off, what are some of these chemicals, and are they generally abated well these days? Or is this sort of something that’s still a coming problem?

Boakes: Yeah. So they range from specific photoresists to— there is a very heavy consumption of basic chemicals for the neutralization of wastewater, these types of things. So there's a combination: a chemical can have a high embedded impact, meaning it has a very large impact to produce the chemical itself, or you just consume a lot of it. It might have a low embedded impact, but you're using so much of it that, in the end, it's the higher contributor anyway. So you have those two kinds of buckets, and it's a matter of multiplying the amounts by the embedded emissions to see which ones come out on top. But we see that often the wastewater treatment uses a lot of these chemicals, just for neutralization and treatment of wastewater on site, as well as very specific chemicals for the semiconductor industry, such as photoresists and CMP cleans. Those very specific chemistries are difficult to quantify the embedded impact of, because they're often proprietary: you don't exactly know what goes into them, and it's very difficult to characterize them appropriately. So often we apply a proxy value to those. This is something we would really like to improve in the future: having more communication with our supply chain and really understanding the real embedded impact of those chemicals. This is something we really need to work on, to identify the high-impact chemicals and try anything we can to reduce them.

Moore: Okay. And what about those direct greenhouse gas emission chemicals? Are those generally abated, or is that something that’s still being worked on?

Boakes: So there is, yeah, a substantial amount of work going into abatement systems. We have the usual combustion abatement of process gases, fueled by methane, and there is also development now in plasma abatement systems. So there are different abatement systems being developed, and their effectiveness is quite high. However, we don't have such a good overview at the moment of how much abatement is actually deployed in high-volume manufacturing. This, again, is quite a sensitive topic to probe from a research perspective, when you don't have insight into the fab itself. Asking particular questions about how much abatement is deployed on certain tools is not such easy data to come across.

So we often go with models. We apply the IPCC Tier 2c model, where, basically, you calculate the direct emissions from how much gas you've consumed. It's a mathematical model, based on consumption, that generates the amounts that would be emitted directly into the atmosphere. That's the model we've applied, and we see that it does sometimes correlate with the top-down reporting that comes from the industry. So I think there's a lot of room to start comparing top-down reporting against these bottom-up models that we've been generating from a research perspective. There's still a lot of work to do to match those.
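
[Editor's note: A simplified sketch of the Tier 2 bookkeeping Boakes describes: emissions are estimated from gas consumption, discounted by the fraction destroyed in the process and by abatement, then converted to CO2 equivalent using the gas's GWP. The parameter values below are illustrative placeholders, not IPCC defaults or IMEC data; only SF6's GWP of 25,200 comes from the interview.]

```python
def tier2_direct_emissions(kg_gas_used, gwp, process_utilization,
                           abated_fraction, destruction_efficiency):
    """Return direct emissions in kg CO2e for one process gas."""
    survives_process = kg_gas_used * (1 - process_utilization)
    survives_abatement = survives_process * (
        1 - abated_fraction * destruction_efficiency)
    return survives_abatement * gwp

# 100 kg of SF6, 40% consumed in the process, with and without
# 90%-effective abatement on all of the tool exhaust:
unabated = tier2_direct_emissions(100, 25_200, 0.4, 0.0, 0.9)
abated = tier2_direct_emissions(100, 25_200, 0.4, 1.0, 0.9)
print(f"unabated: {unabated / 1000:,.0f} t CO2e")  # 1,512 t CO2e
print(f"abated:   {abated / 1000:,.0f} t CO2e")    # 151 t CO2e
```

[The tenfold gap between the two results is the distribution shift Boakes describes once effective abatement is deployed.]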

Moore: Okay. Are there any particular nasties, in terms of what those chemicals are? I don't think people are familiar with what really comes out of the smokestack of a chip fab.

Boakes: So one of the highest-GWP gases, for instance, is sulfur hexafluoride, SF6. It has a GWP of 25,200 kilograms of CO2 equivalent per kilogram, which means it is over 25,000 times more damaging to the climate than the equivalent amount of CO2. So this is extremely high. But there are also others, like NF3, that are over 1,000 times more damaging to the climate than CO2. However, they can be abated: in these abatement systems, you can destroy them so that they're no longer released.

There are also efforts going into replacing high-GWP gases such as these with alternatives that have lower GWP values. However, that is going to take a lot of process development, a lot of effort to change those process flows to adapt to the new alternatives. And adoption in high-volume fabs will be slow, because, as we know, this industry is quite resistant to change. So yeah, it will be a slow adoption if there are alternatives. In the meantime, effective abatement can destroy quite a lot, but that really means deploying abatement systems on the high-impact process areas.

Moore: As Moore’s Law continues, each step or manufacturing node might have a different carbon footprint. What were some of the big trends your research revealed regarding that?

Boakes: So in our model, we've assumed constant fab-operation conditions, meaning the same abatement systems and the same electrical carbon intensities for all of the different technology nodes. And we see that there is a general increase in total emissions under those assumptions: the total climate impact doubles from N28 to A14 [IMEC's designations for the 28-nanometer and 1.4-nanometer nodes]. This can be attributed to the increased process complexity, the increased number of process steps, and the different chemistries and materials being embedded in the chips. It all contributes. So generally, there is an increase because of the process complexity required to reach those aggressive pitches in the more advanced technology nodes.

Moore: I see. Okay. So as things are progressing, they’re also kind of getting worse in some ways. Is there anything—?

Boakes: Yeah.

Moore: Is this inevitable, or is there—?

Boakes: [laughter] Yeah. If you make things more complicated, it will probably take more energy and more materials to do it. Also, when you make things smaller, you need to change your processes. For instance, with interconnect metals, we've really reached the physical limits of the traditional metals like copper and tungsten, because things have gotten so small. And now people are looking for alternatives like ruthenium or platinum, different types of metals which, if it's a platinum-group metal, of course it's going to have a higher embedded impact. So when we hit those physical limits, or limits of the current technology, we need to change it in a way that makes it more complicated, more energy-intensive. Again, the move to EUV: EUV is an extremely energy-intensive tool compared to DUV.

But an interesting point on the EUV topic is that it's really important to keep a holistic view. Moving from a DUV tool to an EUV tool means a large jump in the power draw of the tool. However, you're able to reduce the total number of steps needed to achieve a certain deposition or etch, so overall you're able to reduce your emissions, to reduce the energy intensity of the process flow. So even though we might think, "Oh, that's a very power-hungry tool," it can cut down on process steps when viewed holistically. It's always good to keep a life-cycle perspective, to be able to say, "Okay, if I implement this tool, it does have a higher power draw, but I can cut the number of steps in half to achieve the same result, so it's better overall." It's always good to keep that kind of holistic view when doing any type of sustainability assessment.
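
[Editor's note: Toy arithmetic for the holistic point; the power figures and step counts below are hypothetical, not real tool specifications.]

```python
def flow_energy_kwh(tool_power_kw, n_steps, hours_per_step=1.0):
    """Energy to pattern one layer: tool power x steps x time per step."""
    return tool_power_kw * n_steps * hours_per_step

duv = flow_energy_kwh(tool_power_kw=300, n_steps=6)    # multi-patterning
euv = flow_energy_kwh(tool_power_kw=1_000, n_steps=1)  # single exposure
print(f"DUV flow: {duv:.0f} kWh vs. EUV flow: {euv:.0f} kWh")
# The higher-power tool still comes out ahead whenever its power ratio
# (here ~3.3x) is smaller than the step reduction it buys (here 6x).
```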

Moore: Oh, that's interesting. So you also looked at what happens as the nodes get more advanced and the processes get more complex. What did that do to water consumption?

Boakes: So again, it comes down to the number of steps, in a similar sense. If you're increasing your number of process steps, you also increase the number of wet-clean steps, which are often the high-water-consumption steps. So if you have more of those particular process steps, you're going to have higher water consumption in the end. It's driven by the number of steps and the complexity of the process as we move into the more advanced technology nodes.

Moore: Okay. So it sounds like complexity is kind of king in this field.

Boakes: Yeah.

Moore: What should the industry be focusing on most to achieve its carbon goals going forward?

Boakes: Yeah. So I think, to start off, you need to identify the largest contributors and prioritize those. If you're looking at the total impact of a system that doesn't have effective abatement, then of course direct emissions are the first thing you want to focus on reducing, as they would be the largest contributor. However, once you're looking at a system that already has effective abatement, your next objective would be to decarbonize your electricity production: go for a lower-carbon-intensity electricity provider, so you're moving more toward green energy.

And at the same time, you would also want to target your high-impact value chain. For the materials and energy coming into the fab, you need to look at the ones with the highest impact and then try to find a provider that offers a decarbonized version of the same material, or design a process that doesn't need that material at all. It doesn't necessarily have to be done in sequential order; of course, you can do it all in parallel, and that would be better. So it doesn't have to be one, two, three, but the prioritization comes from targeting the largest contributors: direct emissions, then decarbonizing your electricity production, and then looking at the high-impact materials in your supply chain.

Moore: Okay. And as a researcher, I’m sure there’s data you would love to have that you probably don’t have. What could industry do better about providing that kind of data to make these models work?

Boakes: So for a lot of our scope three, that upstream, cradle-to-fab portion, let's call it, we've had to rely quite a lot on life-cycle-assessment literature or LCA databases, which are available for purchase, or sometimes, if you're lucky, free. And that's also because my role in my research group is looking at LCA and upstream materials and quantifying their environmental impact. So from my perspective, I really think this industry needs to work on providing data through the supply chain that is standardized in a way people can understand, and that is product-specific, so that we can really allocate embedded impact to a specific product and multiply it through by our inventory, which we do have data on. For me, it's really about having a standardized way of communicating the sustainability impact of upstream production throughout the supply chain. Not only tier one, but all the way up to the cradle, the beginning of the value chain. I know it is evolving, and it will be slow, and it needs a lot of cooperation. But I do think it would be very, very useful for making our work more realistic and more representative, so that people can rely on it when they start using our data in their product carbon footprints, for instance.

Moore: Okay. And speaking of sort of your work, can you tell me what imec.netzero is and how that works?

Boakes: Yeah. This is a web app that's been developed in our program, the SSTS program at IMEC. The web app is a way for people to interact with the LCA model we've been building. It's based on life-cycle assessment, and it's really what we've been talking about: a cradle-to-gate model of the IC-chip-manufacturing process. It models a generic fab. We don't point to any specific fab or process flow from a certain company; we try to make a very generic industry average that people can use to get a more realistic estimate of the modern IC chip. Because we noticed that the semiconductor data in the literature, and in what's available in LCA databases, is extremely old, and we know this industry moves very quickly. So there is a huge gap between what's happening now, what's going into your phones and your computers, and the LCA data that's available to quantify it from a sustainability perspective. With imec.netzero, we have the benefit of being connected with the industry through our position at IMEC, and we have a view on those more advanced technology nodes.

So not only do we have models for the nodes being produced today, but we also predict future nodes. We have models to predict what will happen in 5 years' time, in 10 years' time. So it's a really powerful tool, and it's available publicly. The public version has limited functionality in comparison with the program-partner version; our program partners have access to a much deeper way of using the web app, as well as the other work we do in our program, and they also contribute data to the model, which we're constantly evolving and improving. So that's a bit of an overview.

Moore: Cool. Cool. Thank you very much, Lizzie. I have been speaking to Lizzie Boakes, a life cycle analyst in the Sustainable Semiconductor Technologies and Systems Program at IMEC, the Belgium-based nanotech research organization. Thank you again, Lizzie. This has been fantastic.
