
Brain-Inspired Computer Approaches Brain-Like Size

By Dina Genkina, 8 May 2024, 16:38


Today Dresden, Germany–based startup SpiNNcloud Systems announced that its hybrid supercomputing platform, the SpiNNcloud Platform, is available for sale. The machine combines traditional AI accelerators with neuromorphic computing capabilities, using system-design strategies that draw inspiration from the human brain. Systems for purchase vary in size, but the largest commercially available machine can simulate 10 billion neurons, about one-tenth the number in the human brain. The announcement was made at the ISC High Performance conference in Hamburg, Germany.

“We’re basically trying to bridge the gap between brain inspiration and artificial systems.” —Hector Gonzalez, SpiNNcloud Systems

SpiNNcloud Systems was founded in 2021 as a spin-off of the Dresden University of Technology. Its original chip, the SpiNNaker1, was designed by Steve Furber, the principal designer of the ARM microprocessor—the technology that now powers most cellphones. The SpiNNaker1 chip is already in use by 60 research groups in 23 countries, SpiNNcloud Systems says.

Human Brain as Supercomputer

Brain-emulating computers hold the promise of vastly lower energy computation and better performance on certain tasks. “The human brain is the most advanced supercomputer in the universe, and it consumes only 20 watts to achieve things that artificial intelligence systems today only dream of,” says Hector Gonzalez, cofounder and co-CEO of SpiNNcloud Systems. “We’re basically trying to bridge the gap between brain inspiration and artificial systems.”

Aside from sheer size, a distinguishing feature of the SpiNNaker2 system is its flexibility. Traditionally, most neuromorphic computers emulate the brain’s spiking nature: Neurons fire off electrical spikes to communicate with the neurons around them. The actual mechanism of these spikes in the brain is quite complex, and neuromorphic hardware often implements a specific simplified model. The SpiNNaker2, however, can implement a broad range of such models, because they are not hardwired into its architecture.
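
For readers unfamiliar with spiking models, below is a minimal, purely illustrative sketch of one commonly used simplification, the leaky integrate-and-fire neuron. The parameter names and values are hypothetical, and this is not SpiNNaker2’s actual implementation; the point of the paragraph above is precisely that the hardware is not tied to any single such model.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: one common simplification of
# biological spiking. Illustrative only; not SpiNNaker2's implementation.

def simulate_lif(input_current, v_rest=0.0, v_thresh=1.0, tau=20.0, dt=1.0):
    """Return the time steps at which the neuron spikes."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest and integrates the input.
        v += dt * (-(v - v_rest) / tau + i_in)
        if v >= v_thresh:      # threshold crossed: emit a spike
            spikes.append(t)
            v = v_rest         # reset after spiking
    return spikes

print(simulate_lif([0.08] * 100))  # constant drive produces regular spikes
```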

Instead of looking at how each neuron and synapse operates in the brain and trying to emulate that from the bottom up, Gonzalez says, his team’s approach involved implementing key performance features of the brain. “It’s more about taking a practical inspiration from the brain, following particularly fascinating aspects such as how the brain is energy proportional and how it is simply highly parallel,” Gonzalez says.

To build hardware that is both energy proportional, meaning each piece draws power only when it’s actively in use, and highly parallel, the company started with the building blocks. The basic unit of the system is the SpiNNaker2 chip, which hosts 152 processing units. Each processing unit has an ARM-based microcontroller and, unlike its predecessor the SpiNNaker1, also comes equipped with accelerators for use on neuromorphic models and traditional neural networks.

The SpiNNaker2 supercomputer has been designed to model up to 10 billion neurons, about one-tenth the number in the human brain. SpiNNcloud Systems

The processing units can operate in an event-based manner: They can stay off unless an event triggers them to turn on and operate. This enables energy-proportional operation. The events are routed between units and across chips asynchronously, meaning there is no central clock coordinating their movements—which can allow for massive parallelism. Each chip is connected to six other chips, and the whole system is connected in the shape of a torus to ensure all connecting wires are equally short.
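
As a rough illustration of the event-based idea, the sketch below drains a queue of events so that a unit only does work when a message is addressed to it. The unit names and payloads are hypothetical; this shows the general pattern of energy-proportional, asynchronous operation, not SpiNNaker2’s actual routing fabric.

```python
from collections import deque

# Toy event-driven processing: a unit does work only when an event arrives
# for it, rather than polling on a global clock. Names are hypothetical.

events = deque([("unit_3", 0.7), ("unit_1", 0.2), ("unit_3", 0.4)])
accumulated = {}  # which units woke up, and the input they integrated

while events:                       # units with no events consume nothing
    unit, value = events.popleft()  # delivered asynchronously, no global clock
    accumulated[unit] = accumulated.get(unit, 0.0) + value

print(accumulated)  # only unit_1 and unit_3 ever did any work
```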

The largest commercially offered system is not only capable of emulating 10 billion neurons, but also of performing 0.3 exaops (0.3 billion billion operations per second) on more traditional AI tasks, putting it on a comparable scale with the top 10 supercomputers today.

Among the first customers of the SpiNNaker2 system is a team at Sandia National Labs, which plans to use it for further research into whether neuromorphic systems can outperform traditional architectures and perform otherwise inaccessible computational tasks.

“The ability to have a general programmable neuron model lets you explore some of these more complex learning rules that don’t necessarily fit onto older neuromorphic systems,” says Fred Rothganger, senior member of technical staff at Sandia. “They, of course, can run on a general-purpose computer. But those general-purpose computers are not necessarily designed to efficiently handle the kind of communication patterns that go on inside a spiking neural network. With [the SpiNNaker2 system] we get the ideal combination of greater programmability plus efficient communication.”


The UK's ARIA Is Searching For Better AI Tech

By Dina Genkina, 1 May 2024, 18:10


Dina Genkina: Hi, I’m Dina Genkina for IEEE Spectrum’s Fixing the Future. Before we start, I want to tell you that you can get the latest coverage from some of Spectrum’s most important beats, including AI, climate change, and robotics, by signing up for one of our free newsletters. Just go to spectrum.ieee.org/newsletters to subscribe. And today our guest on the show is Suraj Bramhavar. Recently, Bramhavar left his job as a co-founder and CTO of Sync Computing to start a new chapter. The UK government has just founded the Advanced Research and Invention Agency, or ARIA, modeled after the US’s own DARPA funding agency. Bramhavar is heading up ARIA’s first program, which officially launched on March 12th of this year. Bramhavar’s program aims to develop new technology to make AI computation 1,000 times more cost efficient than it is today. Suraj, welcome to the show.

Suraj Bramhavar: Thanks for having me.

Genkina: So your program wants to reduce AI training costs by a factor of 1,000, which is pretty ambitious. Why did you choose to focus on this problem?

Bramhavar: So there’s a couple of reasons why. The first one is economical. I mean, AI is basically poised to become the primary economic driver of the entire computing industry. And to train a modern large-scale AI model costs somewhere between 10 million and 100 million pounds now. And AI is really unique in the sense that the capabilities grow with more computing power thrown at the problem. So there’s kind of no sign of those costs coming down anytime in the future. And so this has a number of knock-on effects. If I’m a world-class AI researcher, I basically have to choose whether I go work for a very large tech company that has the compute resources available for me to do my work or go raise 100 million pounds from some investor to be able to do cutting-edge research. And this has a variety of effects. It dictates, first off, who gets to do the work and also what types of problems get addressed. So that’s the economic problem. And then separately, there’s a technological one, which is that all of this stuff that we call AI is built upon a very, very narrow set of algorithms and an even narrower set of hardware. And this has scaled phenomenally well. And we can probably continue to scale along kind of the known trajectories that we have. But it’s starting to show signs of strain. Like I just mentioned, there’s an economic strain, there’s an energy cost to all this. There are logistical supply chain constraints. And we’re seeing this now with kind of the GPU crunch that you read about in the news.

And in some ways, the strength of the existing paradigm has kind of forced us to overlook a lot of possible alternative mechanisms that we could use to kind of perform similar computations. And this program is designed to kind of shine a light on those alternatives.

Genkina: Yeah, cool. So you seem to think that there’s potential for pretty impactful alternatives that are orders of magnitude better than what we have. So maybe we can dive into some specific ideas of what those are. And in the thesis that you wrote up for the start of this program, you talk about natural computing systems. So computing systems that take some inspiration from nature. So can you explain a little bit what you mean by that and what some of the examples of that are?

Bramhavar: Yeah. So when I say natural-based or nature-based computing, what I really mean is any computing system that either takes inspiration from nature to perform the computation or utilizes physics in a new and exciting way to perform computation. So you can think about it like this: people have heard about neuromorphic computing. Neuromorphic computing fits into this category, right? It takes inspiration from nature and usually performs a computation in most cases using digital logic. But that represents a really small slice of the overall breadth of technologies that incorporate nature. And part of what we want to do is highlight some of those other possible technologies. So what do I mean when I say nature-based computing? I think we have a solicitation call out right now, which calls out a few things that we’re interested in. Things like new types of in-memory computing architectures, rethinking AI models from an energy context. And we also call out a couple of technologies that are pivotal for the overall system to function, but are not necessarily so eye-catching, like how you interconnect chips together, and how you simulate a large-scale system of any novel technology outside of the digital landscape. I think these are critical pieces to realizing the overall program goals. And we want to put some funding towards kind of boosting that work up as well.

Genkina: Okay, so you mentioned neuromorphic computing is a small part of the landscape that you’re aiming to explore here. But maybe let’s start with that. People may have heard of neuromorphic computing, but might not know exactly what it is. So can you give us the elevator pitch of neuromorphic computing?

Bramhavar: Yeah, my translation of neuromorphic computing— and this may differ from person to person, but my translation of it is when you kind of encode the information in a neural network via spikes rather than kind of discrete values. And that modality has been shown to work pretty well in certain situations. So if I have some camera and I need a neural network next to that camera that can recognize an image with very, very low power or very, very high speed, neuromorphic systems have been shown to work remarkably well. And they’ve worked in a variety of other applications as well. One of the things that I haven’t seen, or maybe one of the drawbacks of that technology that I think I would love to see someone solve for, is being able to use that modality to train large-scale neural networks. So if people have ideas on how to use neuromorphic systems to train models at commercially relevant scales, we would love to hear about them, and they should submit to this program call, which is out.
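
As a hedged illustration of encoding information “via spikes rather than discrete values,” the sketch below rate-encodes a scalar as a train of binary events. Real neuromorphic encodings vary widely; this toy is not tied to any particular platform or to ARIA’s program.

```python
import random

def rate_encode(value, steps=1000, seed=0):
    """Encode a value in [0, 1] as a spike train: each step spikes with
    probability equal to the value, so the spike rate approximates it."""
    rng = random.Random(seed)
    return [1 if rng.random() < value else 0 for _ in range(steps)]

spikes = rate_encode(0.3)
print(sum(spikes) / len(spikes))  # roughly 0.3, recovered from the spike rate
```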

Genkina: Is there a reason to expect that these kinds of— that neuromorphic computing might be a platform that promises these orders of magnitude cost improvements?

Bramhavar: I don’t know. I mean, I don’t know actually if neuromorphic computing is the right technological direction to realize these types of orders-of-magnitude cost improvements. It might be, but I think we’ve intentionally kind of designed the program to encompass more than just that particular technological slice of the pie, in part because it’s entirely possible that that is not the right direction to go. And there are other more fruitful directions to put funding towards. Part of what we’re thinking about when we’re designing these programs is we don’t really want to be prescriptive about a specific technology, be it neuromorphic computing or probabilistic computing or any particular thing that has a name that you can attach to it. Part of what we tried to do is set a very specific goal or a problem that we want to solve. Put out a funding call and let the community kind of tell us which technologies they think can best meet that goal. And that’s the way we’ve been trying to operate with this program specifically. So there are particular technologies we’re kind of intrigued by, but I don’t think we have any one of them selected as, like, kind of this is the path forward.

Genkina: Cool. Yeah, so you’re kind of trying to see what architecture needs to happen to make computers as efficient as brains or closer to the brain’s efficiency.

Bramhavar: And you kind of see this happening in the AI algorithms world. As these models get bigger and bigger and grow their capabilities, they’re starting to introduce things that we see in nature all the time. I think probably the most relevant example is Stable Diffusion, this neural network model where you can type in text and generate an image. It’s got diffusion in the name. Diffusion is a natural process. Noise is a core element of this algorithm. And so there’s lots of examples like this where they’ve kind of— that community is taking bits and pieces or inspiration from nature and implementing it into these artificial neural networks. But in doing that, they’re doing it incredibly inefficiently.

Genkina: Yeah. Okay, so great. So the idea is to take some of the efficiencies found in nature and kind of bring them into our technology. And I know you said you’re not prescribing any particular solution and you just want that general idea. But nevertheless, let’s talk about some particular solutions that have been worked on in the past, because you’re not starting from zero and there are some ideas about how to do this. So I guess neuromorphic computing is one such idea. Another is this noise-based computing, something like probabilistic computing. Can you explain what that is?

Bramhavar: Noise is a very intriguing property. And there’s kind of two ways I’m thinking about noise. One is just how do we deal with it? When you’re designing a digital computer, you’re effectively designing noise out of your system, right? You’re trying to eliminate noise. And you go through great pains to do that. And as soon as you move away from digital logic into something a little bit more analog, you spend a lot of resources fighting noise. And in most cases, you eliminate any benefit that you get from your kind of newfangled technology because you have to fight this noise. But in the context of neural networks, what’s very interesting is that over time, we’ve kind of seen algorithms researchers discover that they actually didn’t need to be as precise as they thought they needed to be. You’re seeing the precision kind of come down over time. The precision requirements of these networks come down over time. And we really haven’t hit the limit there as far as I know. And so with that in mind, you start to ask the question, “Okay, how precise do we actually have to be with these types of computations to perform the computation effectively?” And if we don’t need to be as precise as we thought, can we rethink the types of hardware platforms that we use to perform the computations?

So that’s one angle: how do we better handle noise? The other angle is how do we exploit noise? And so there’s kind of entire textbooks full of algorithms where randomness is a key feature. I’m not talking necessarily about neural networks only. I’m talking about all algorithms where randomness plays a key role. Neural networks are kind of one area where this is also important. I mean, the primary way we train neural networks is stochastic gradient descent. So noise is kind of baked in there. I talked about stable diffusion models like that where noise becomes a key central element. In almost all of these cases, all of these algorithms, noise is kind of implemented using some digital random number generator. And so there the thought process would be, “Is it possible to redesign our hardware to make better use of the noise, given that we’re using noisy hardware to start with?” Notionally, there should be some savings that come from that. That presumes that the interface between whatever novel hardware you have that is creating this noise, and the hardware you have that’s performing the computing, doesn’t eat away all your gains, right? I think that’s kind of the big technological roadblock that I’d be keen to see solutions for, outside of the algorithmic piece, which is just how do you make efficient use of noise.
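
One concrete, if hedged, example of “making better use of noise” is stochastic rounding, which some low-precision training schemes use. In the sketch below the randomness comes from a software pseudo-random generator; the speculative hardware idea described here would instead draw it from intrinsic device noise.

```python
import random

def stochastic_round(x, rng=random):
    """Round x down or up at random, with the probability of rounding up
    equal to the fractional part, so rounding errors cancel on average."""
    lower = int(x // 1)  # floor of x
    frac = x - lower
    return lower + (1 if rng.random() < frac else 0)

# In noise-exploiting hardware, the random draw might come from device
# physics rather than a digital pseudo-random generator.
samples = [stochastic_round(2.3) for _ in range(10_000)]
print(sum(samples) / len(samples))  # close to 2.3, despite integer outputs
```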

When you’re thinking about implementing it in hardware, it becomes very, very tricky to implement it in a way where whatever gains you think you had are actually realized at the full system level. And in some ways, we want the solutions to be very, very tricky. The agency is designed to fund very high-risk, high-reward types of activities. And so, in some ways, there shouldn’t be consensus around a specific technological approach. Otherwise, somebody else would have likely funded it.

Genkina: You’re already becoming British. You said you were keen on the solution.

Bramhavar: I’ve been here long enough.

Genkina: It’s showing. Great. Okay, so we talked a little bit about neuromorphic computing. We talked a little bit about noise. And you also mentioned some alternatives to backpropagation in your thesis. So maybe first, can you explain for those that might not be familiar what backpropagation is and why it might need to be changed?

Bramhavar: Yeah, so this algorithm is essentially the bedrock of all AI training today. Essentially, what you’re doing is you have this large neural network. The neural network is composed of— you can think about it as this long chain of knobs. And you really have to tune all the knobs just right in order to get this network to perform a specific task, like when you give it an image of a cat, it says that it is a cat. And so what backpropagation allows you to do is to tune those knobs in a very, very efficient way. Starting from the end of your network, you kind of tune the knob a little bit, see if your answer gets a little bit closer to what you’d expect it to be. Use that information to then tune the knobs in the previous layer of your network and keep on doing that iteratively. And if you do this over and over again, you can eventually find all the right positions of your knobs such that your network does whatever you’re trying to do. And so this is great. Now, the issue is every time you tune one of these knobs, you’re performing this massive mathematical computation. And you’re typically doing that across many, many GPUs. And you do that just to tweak the knob a little bit. And so you have to do it over and over and over and over again to get the knobs where you need them to go.
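
To make the “chain of knobs” picture concrete, here is a minimal toy example of backpropagation on a two-knob chain: the error measured at the output is reused, via the chain rule, to update the earlier knob. It is deliberately tiny and not representative of any production training loop.

```python
# Toy backpropagation on a two-"knob" chain: y = w2 * (w1 * x).
# The error computed at the output is reused, via the chain rule, to update
# the earlier knob; doing this layer by layer is what backprop buys you.

x, target = 1.0, 6.0
w1, w2 = 0.5, 0.5   # the two knobs, badly tuned to start
lr = 0.05           # learning rate

for _ in range(200):
    h = w1 * x              # forward pass, first layer
    y = w2 * h              # forward pass, second layer
    err = y - target        # how far off the output is
    grad_w2 = err * h       # gradient of 0.5 * err**2 with respect to w2
    grad_w1 = err * w2 * x  # same error signal, chained back to w1
    w2 -= lr * grad_w2
    w1 -= lr * grad_w1

print(round(w1 * w2, 3))  # close to 6.0: the knobs now map x = 1 to the target
```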

What you’re really doing is kind of minimizing error between what you want the network to do and what it’s actually doing. And if you think about it along those terms, there’s a whole bevy of algorithms in the literature that kind of minimize energy or error in that way. None of them work as well as backpropagation. In some ways, the algorithm is beautiful and extraordinarily simple. And most importantly, it’s very, very well suited to be parallelized on GPUs. And I think that is part of its success. But one of the things I think both algorithmic researchers and hardware researchers fall victim to is this chicken-and-egg problem, right? Algorithms researchers build algorithms that work well on the hardware platforms that they have available to them. And at the same time, hardware researchers develop hardware for the existing algorithms of the day. And so one of the things we want to try to do with this program is blend those worlds and allow algorithms researchers to think about what is the field of algorithms that I could explore if I could rethink some of the bottlenecks in the hardware that I have available to me. And similarly in the opposite direction.

Genkina: Imagine that you succeeded at your goal and the program and the wider community came up with a 1/1,000th compute cost architecture, both hardware and software together. What does your gut say that that would look like? Just an example. I know you don’t know what’s going to come out of this, but give us a vision.

Bramhavar: Similarly, like I said, I don’t think I can prescribe a specific technology. What I can say is that— I can say with pretty high confidence, it’s not going to just be one particular technological kind of pinch point that gets unlocked. It’s going to be a systems-level thing. So there may be individual technology at the chip level or the hardware level. Those technologies then also have to meld with things at the systems level as well and the algorithms level as well. And I think all of those are going to be necessary in order to reach these goals. I’m talking kind of generally, but what I really mean is like what I said before is we got to think about new types of hardware. We also have to think about, “Okay, if we’re going to scale these things and manufacture them in large volumes cost effectively, we’re going to have to build larger systems out of building blocks of these things. So we’re going to have to think about how to stitch them together in a way that makes sense and doesn’t eat away any of the benefits. We’re also going to have to think about how to simulate the behavior of these things before we build them.” I think part of the power of the digital electronics ecosystem comes from the fact that you have Cadence and Synopsys and these EDA platforms that allow you with very high accuracy to predict how your circuits are going to perform before you build them. And once you get out of that ecosystem, you don’t really have that.

So I think it’s going to take all of these things in order to actually reach these goals. And I think part of what this program is designed to do is kind of change the conversation around what is possible. So by the end of this, it’s a four-year program. We want to show that there is a viable path towards this end goal. And that viable path could incorporate kind of all of these aspects of what I just mentioned.

Genkina: Okay. So the program is four years, but you don’t necessarily expect like a finished product of a 1/1,000th cost computer by the end of the four years, right? You kind of just expect to develop a path towards it.

Bramhavar: Yeah. I mean, ARIA was kind of set up with this kind of decadal time horizon. We want to push out— we want to fund, as I mentioned, high-risk, high-reward technologies. We have this kind of long time horizon to think about these things. I think the program is designed around four years in order to kind of shift the window of what the world thinks is possible in that timeframe. And in the hopes that we change the conversation, other folks will pick up this work at the end of those four years, and it will have this kind of large-scale impact on a decadal timescale.

Genkina: Great. Well, thank you so much for coming today. Today we spoke with Dr. Suraj Bramhavar, lead of the first program headed up by the UK’s newest funding agency, ARIA. He filled us in on his plans to reduce AI costs by a factor of 1,000, and we’ll have to check back with him in a few years to see what progress has been made towards this grand vision. For IEEE Spectrum, I’m Dina Genkina, and I hope you’ll join us next time on Fixing the Future.
