
Software-Defined Vehicle Momentum Grows

By Ann Mutschler, May 9, 2024, 09:06

Experts at the Table: The automotive ecosystem is undergoing a transformation toward software-defined vehicles, spurring new architectures with more software. Semiconductor Engineering sat down to discuss the impact of these changes with Suraj Gajendra, vice president of products and solutions in Arm’s automotive line of business; Chuck Alpert, R&D automotive fellow at Cadence; Steve Spadoni, zone controller and power distribution application manager at Infineon; Rebeca Delgado, chief technology officer and principal AI engineer at Intel Automotive; Cyril Clocher, senior director in the automotive product line for high-performance computing at Renesas; David Fritz, vice president, hybrid and virtual systems at Siemens EDA; and Marc Serughetti, senior director, systems design group at Synopsys. What follows are excerpts of that discussion.

L-R: Arm’s Gajendra, Cadence’s Alpert, Infineon’s Spadoni, Intel’s Delgado, Renesas’ Clocher, Siemens’ Fritz, Synopsys’ Serughetti.

SE: The automotive ecosystem is undergoing a technology evolution the likes of which have not been seen, including the move to software-defined vehicles. To set a baseline for this discussion, what is your definition of an SDV?

Gajendra: A software-defined vehicle is a concept, a trend, an idea, where the whole ecosystem can drive new capabilities and new user experiences into the car, even after it rolls out of the showroom or dealership. It’s a pretty loaded concept. There’s a lot of infrastructure that needs to come together, such as software development in the cloud, seamless deployment of that software onto the car, the whole machinery of over-the-air updates, and the connectivity. In short, the concept of a software-defined vehicle anticipates a world where we can drive new experiences, new capabilities, and new features into the car throughout its lifetime.

Alpert: In thinking about what SDV means, one example is the battery — especially in an EV. I’m not talking about the battery technology that’s evolved, but rather the idea that in the past, when you wanted to charge your car in your garage and you were worried about starting a fire, you’d think, ‘No, don’t do that, because your whole house could burn down.’ In the past, maybe we would put a temperature sensor on the battery, but now we actually have software that can monitor it. It might even have AI to predict whether the battery is reaching some state that might cause a fire in the future. You also might have something that connects to the power grid and learns when a good time to charge is, because a low-usage period is cheaper. This is just one part of the car, but you can imagine a whole bunch of software that you want to put on top of it in order to connect to the universe. You need a software-defined vehicle platform in order for this, and all the other parts of your car, to communicate with the world and provide the best user experience.
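
To make Alpert’s example concrete, a minimal sketch of such monitoring logic might look like the following. The thresholds and helper names are hypothetical; a production battery-management system would derive its limits from cell chemistry and safety standards.

```python
from dataclasses import dataclass

# Hypothetical thresholds, for illustration only.
TEMP_WARN_C = 45.0               # warn above this cell temperature
TEMP_RATE_WARN_C_PER_MIN = 2.0   # warn on rapid heating

@dataclass
class BatterySample:
    temp_c: float       # current measured cell temperature
    minutes_ago: float  # age of the previous sample, for rate estimation
    prev_temp_c: float  # previous measured cell temperature

def thermal_risk(sample: BatterySample) -> bool:
    """Flag a sample if temperature or its rate of change looks unsafe."""
    rate = (sample.temp_c - sample.prev_temp_c) / max(sample.minutes_ago, 1e-6)
    return sample.temp_c > TEMP_WARN_C or rate > TEMP_RATE_WARN_C_PER_MIN

def should_charge_now(grid_price_per_kwh: float, offpeak_price: float) -> bool:
    """Charge when the grid price indicates a low-usage (cheap) period."""
    return grid_price_per_kwh <= offpeak_price

if __name__ == "__main__":
    s = BatterySample(temp_c=47.2, minutes_ago=1.0, prev_temp_c=44.0)
    print("thermal risk:", thermal_risk(s))              # True: hot and heating fast
    print("charge now:", should_charge_now(0.08, 0.10))  # True: off-peak price
```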

Spadoni: Infineon’s definition of a software-defined vehicle is a redefining of the architecture — specifically, the electrical and electronic architecture, feature allocation, and the entire topology of the vehicle, from power generation and storage to power distribution and high-performance compute. It really means new electrical architectures, and it has consequences for the business model of every OEM and Tier 1 involved. It’s a major change from the methodologies of the last 30 years.

Delgado: A software-defined vehicle is not just over-the-air updates. It’s truly a new methodology and a new philosophy for how to architect every ingredient of the vehicle to continue to deliver value over time, in which the value is very tightly attached to the software that delivers the user experience. Ultimately, this architecture must enable the different practices for delivering this new value over time. What’s very interesting is that this move to a software-defined architecture has been made by many other industries already. Intel has a ton of heritage here, and actually helped those industries transform. That transformation is truly what we’re observing now. It’s an incredible opportunity, and possibly a crisis if not done right.

Clocher: To apply an analogy here, the car is the new smartphone. But for us, it’s more than that. I’ve heard about the platform, yes, and it’s the major architecture evolution that we’ll see in the next decade. For us at Renesas, it will be a journey that will take time, to enhance the user experience and to generate new revenue streams for the industry as it moves from decentralized to centralized compute with zonal architectures. We can apply all those buzzwords to a software-defined vehicle. Those platforms will need big computers and heavy, complex hardware solutions, and this will generate evolutions and upgrades to the car during its entire lifetime. But underneath we know — at least at Renesas, and certainly at some other players and silicon vendors — that this will need a huge amount of hardware resources to manage what we have in mind to deploy this platform.

Fritz: I see software-defined vehicles a bit differently than what’s been mentioned so far. For many years, you’d have the hardware team doing their design, and the software team doing their design, and it all needs to come together. There’s an English natural language discussion about what needs to happen, and as we all know, that never really goes terribly well. In automotive that becomes an integration storm, and it is a nightmare. With the new compute requirements that have been mentioned already, that just compounds the issue. So the way I see this is that we tend, as people who have an engineering background, to dive into how we’re going to do things. We hear ‘software-defined vehicle,’ we immediately think about how to do that. There’s not a lot of thought about why it needs to be done, and what needs to happen. We jump into the ‘how’ too early, and a lot of the discussion here is exemplary of that kind of approach. When I’m looking at software-defined vehicles, I’m looking at why it’s important that the software needs to run effectively on a piece of hardware. And for that hardware, why is it important for it to actually operate properly on the software? Then you can decide how to put together a new methodology that’s going to bring those things together. In the past, it’s been called hardware/software co-design. There have been attempts many times, and as has been mentioned, other industries have made this transition. What’s unique about automotive is that it’s not just one transition that needs to happen. It’s hundreds or thousands of transitions. The ecosystem needs to be turned upside down, which we’re seeing happen right now, and you need to bring all that together. It really is a methodology where you need the tooling, you need the processes, you need the thinking, you need the organizations to change so that they can make this transition in a realistic way. SDV is a huge transition. It is a way for the automotive industry to morph into something that has longevity and can meet customer expectations, which it really hasn’t met for some time now.

Serughetti: At the end of the day, if we look starting at the top from our perspective, SDV is a means to bring and enhance the car experience for the customer. That’s the end result that the OEMs look at, but they look at it from the perspective of how that improves the OEM efficiencies, and how that creates new business opportunities. The way we look at it, and what’s important, is the impact it has on the industry, the impact on the processes, on the methodologies, on the people, on the ecosystem, on the technology. It’s really a transformation of the automotive market that is going to fundamentally change how the industry moves forward and bring the OEM into a world in which they are really looking at how they become efficient in delivering cars, how they bring new features, but at the same time, how they evolve their business as well.

SE: As you’ve all described, SDV requires many inter-dependencies, and the entire ecosystem has to have an understanding of the ‘why,’ which should then lead back to laying out the plan for how to get there. Where does the ecosystem stand today in terms of realizing SDV?

Fritz: OEMs have decided in the last few years that they’ve got to take control of their own destiny. They cannot simply take what the suppliers provide. They need a methodology — like this whole SDV concept, and any tooling necessary to provide that — to push down to their suppliers, such that, ‘Here’s what I need. If you can’t do this for me, I will go find someone who will.’ This is not the old ecosystem that bubbled up from the IP to the Tier 2s, to the Tier 1s, and then to the OEMs, which gave them limited choices to choose from. So when I say, ‘Turn the ecosystem upside down,’ that’s what is happening. But every OEM has its own ecosystem, and they’re not all in the same place. Even region-to-region, they can be very different.

Delgado: This is a critical discussion, and effectively where the industry has to eventually settle. The magnitude of the transformation of the ecosystem includes roles in the technology evolution. The silicon content in the vehicle is expected to quadruple over the next few years to define the in-cabin experience of the end user. At the end of the day, the complexity of the transition of roles is of such magnitude that the proprietary, fragmented, and broken approaches that David articulated are really not going to enable the industry to transform at the speed it requires to deliver and meet the experiences. But more than anything, they are not going to address the actual technology changes necessary to implement and allow for this value delivery mechanism. At the end of the day, this is where Intel really believes collaboration is key, and anybody who wants to participate in this ecosystem must provide scalability — that is, top-to-bottom support of the different product lines that our OEMs and Tier 1s are having to support, versus a broken-up approach to these ever-evolving, higher-performance compute needs. It has to be future-proof, because you’re going to launch the vehicle eventually. So certain hardware has to be future-proofed to a certain affordability envelope, and there has to be a strategy around that. And then the ecosystem and that collaboration must be able to deliver that aggregation. It has to be done with certain anchoring technology that will allow us to deliver that performance. Collaboration is key in the sense that these technologies cannot be single-handedly defined, developed, and integrated by OEMs in silos with a proprietary end-to-end architecture definition. There obviously will be differentiations in the actual implementation, but the technologies at large have to have a sense of reuse, particularly from other verticals that have already done software-defined transformations, tuned in the right ways to the automotive requirements.

Spadoni: There are probably a wide variety of implementations. At Infineon, we partner with OEMs and Tier 1s, and we see different approaches. For example, General Motors has more of a modular approach that emulates what happened in the mobile phone space. It seems that Ford has a more pragmatic approach, along with Stellantis, but all of them are facing very similar challenges in that affordability has become a big problem. There are multiple generations of implementations that are going to occur, and you’ll see a struggle over how to pay for this extra hardware. It leads to tradeoffs in the implementations of other systems, which have to yield savings in order to make these vehicles affordable. No one ever goes into a dealership and says, ‘Give me a software-defined vehicle.’ Everyone’s looking for value, and you can see it now with volumes going down. There’s a saturation of people buying at the high end. The OEMs want more sales, which means they’ll have to go to lower-cost, value vehicles, and that’s going to affect the electrical and electronic architectures and the software-defined vehicle.

Clocher: What we’re seeing I would summarize as the impact on the ecosystem. We’re moving to an OEM-centric ecosystem. One size does not fit all, meaning OEMs will have their different tastes and their different definitions of the level of integration they want in their software-defined vehicle — especially given the more complex tasks we all have to handle and the challenges we have to solve, because we’re not talking about one common umbrella of software-defined vehicle. It really does mean different implementations and different meanings for OEM A versus OEM B. I would fully agree with David and Steve that we are far from having a common understanding of, at least, the market itself. And that’s fine, because this will bring differentiation, and ultimately that’s why a customer will go to Dealership A versus Dealership B. This is what the industry wants to see — continue to differentiate, continue to add value to the ultimate product, which is the car.

Serughetti: The important point in all this is, of course, that you’re breaking the model that exists today. That’s one of the big challenges. We used to have Tier 1s that were building boxes and delivering software, as a complete black box. When it went to integration, there were all sorts of problems. And now you’re going to break this? The challenge for the OEM is how to do this. They want to control software, but are they equipped to do that today? We see the problems today that some of the legacy OEMs have in setting up their software organizations, the challenges of CARIAD and all such organizations that are trying to do this. It’s not easy to change those companies. Of course, the new entrants don’t have this problem, because they are starting from a brand new design rather than dealing with legacy. So for the OEM, it’s about how to take control of the software. What does that mean in terms of processes, in terms of agile development, digital twins, and all of these technologies everybody’s talking about? The other side is, ‘It’s all nice, this software,’ but this software runs on hardware from all the companies delivering it, and that becomes essential. You can have the best software, but if your hardware is not there to support performance, power, and all of those aspects, you’re not going to be successful. So the ecosystem is evolving in how hardware, software, and all of this comes together. The OEM wants to be the central point. That’s what we’re talking about in terms of the process and methodology aspects that are making this transition evolve.

Gajendra: Where are we in this journey? How far have we come? And where are we going? Going back to the point David made earlier about the supply chain evolving and being turned upside down — five years ago, if we sat here in this sort of panel and discussed software-defined vehicles, the conversation would have been entirely different. It would have been stuck in the traditional supply chain that we’ve seen for the last 35 or 40 years in the automotive industry. There are fundamentally two aspects here. The supply chain is evolving, and the infrastructure that we, as a community — this team, for example, and many others in the community — are trying to enable is going to be key to making our EDA partners happy. The use of virtual platforms in the cloud to shift left and develop and validate some of these technologies and software wasn’t even there five years ago, so we’ve come a long way. We’ve made a lot of progress together as an industry. Yes, we have a long way to go until we have a truly software-defined vehicle we can go and ask for in the dealership. But the changes we are seeing, with all sorts of technology providers trying to make sure the technology we eventually will have in the hardware is provided in some sort of virtual form — be it fast models or whatever it is in the cloud — are a big change for the vast majority of the software ecosystem in automotive. I was at Embedded World, and the amount of virtual platforms and demos that people were showing — silicon partners like we have here, Intel, Renesas, Infineon, EDA companies — pointed to a strong movement of, ‘Let’s build the infrastructure that we can build, and then provide that infrastructure to the OEMs to take it from there.’ There is a lot of work going on. Together we will make the infrastructure across the board, be it virtual platforms or others, richer and more capable.

Alpert: For sure, OEMs have to control their own destiny. In the past, they would do it by differentiating maybe because they had better engine performance, or some other feature. But going forward, the differentiation is going to be their software. Whoever can make software that will provide additional value, and brand it, that’s going to be the differentiator and that’s the trend. In terms of how you get there, a shared ecosystem is important. SOAFEE is a potential way that, together with virtual platforms, you can provide a shared ecosystem for development, but still allow everyone to differentiate and plug-and-play. That’s one reason we’re working closely with Arm on trying to have a reference design specifically for this purpose. But again, we’re not saying, ‘This is the design you use. This is how you do it.’ That’s not it. The point is, let’s start somewhere, and then people can start swapping out pieces and doing different things. As long as OEMs can plug-and-play, then they can still differentiate. But they don’t have to invent everything themselves, which would be too costly.

Related Reading
Software-Defined Vehicles Ready To Roll
New approach could have big effects on cost, safety, security, and time to market.



Design Considerations In Photonics

By Karen Heyman, May 1, 2024, 09:05

Experts at the Table: Semiconductor Engineering sat down to talk about what CMOS and photonics engineers need to know to successfully collaborate, with James Pond, fellow at Ansys; Gilles Lamant, distinguished engineer at Cadence; and Mitch Heins, business development manager for photonic solutions at Synopsys. What follows are excerpts of that conversation. Links to parts one and two of the discussion appear at the end of this article.


L-R: Ansys’ Pond, Cadence’s Lamant, Synopsys’ Heins

SE: What do engineers who have spent their careers in CMOS need to know about designing for photonics?

Lamant:  It’s hard, no illusion. I had good mentors, including both James and Mitch, so I actually did that transition. Ten years ago, I knew nearly nothing about photonics. It takes having good mentors who can help you. That’s the biggest thing. It’s not enough to just try the software on your own. In addition, having an RF background is very useful in many ways. Photonics is the multiplication of RF. In photonics, you have multiple modes. In RF, you tend to only consider one mode, but a lot of the theory behind photonics is very much a generalization of RF.

Heins: We try to make our photonics flow look as much as we can like our electronics flow. We try to take the last 30 to 40 years of learning in EDA and apply it to photonics. One thing we see a lot is that when people are coming right out of school in photonics, they don’t necessarily have a deep background in how to do IC design. There are a lot of things we’ve learned, like design rule checking, that we now take for granted. It’s like breathing. You’ve got to do it. Layout versus schematic, you’ve got to do it. Even circuit-level simulation. As CMOS veterans, you’d think, of course, you always simulate your circuit before you go to manufacture, but that’s not the case in photonics.

Lamant: Those people actually know photonics, but they don’t know how to create a system. This is a different type of challenge. People who know photonics know how to make a device. They’re experts at that. But they have no idea how to take that device and bring it to a full system they can sell. I see that in so many startups. This is not to make a point in favor of EDA software. They use free software; they use KLayout and all those things they have access to in the university. But all of those tools are not part of an ecosystem for trying to make a system. They say, ‘We wrote a custom simulator to simulate our ring.’ But the question then is, ‘How do you simulate the driver for your ring that goes with it?’ I see many startups fail because they don’t have that ability to take it from academic thinking to production.

You have the electronics people trying to do photonics. They have some methodology background and other things, but they have a gap in knowledge. Fortunately, they can get caught up, especially if they’re an analog designer or an RF designer. They can close that gap by talking to the right people. Unfortunately, the people who know photonics do not have the knowledge of how to make a full system out of it, and this is greatly hurting the photonics world.

Pond: I would agree. We have two worlds of engineers who have been coming together over the last decade or so. Those who came from an EDA background — electrical circuit design, especially RF — have probably had the easiest time. We’ve been doing better and better for them. Ten years ago there was nothing. Now, there’s a more traditional workflow that looks more like an EDA workflow. Still, they have a lot to learn. But the workflow, the cockpit, and so on, follows along with the EDA model.

In the other direction, maybe we haven’t done quite as good a job, because people coming from a photonics background can be really thrown off by the scale and complexity of EDA tools. My impression, coming from photonics, is that EDA tools have been developed over many decades. When that happens, you end up with tools that are incredibly powerful, but you wonder whether, if they’d been developed more recently, things would be done differently. There’s a resistance on the photonic engineer side to dive into that world, because there’s a lot to learn about the EDA workflows. People from photonics have to embrace and take on that EDA world because, as Gilles says, it’s necessary. It really has to be done.

Heins:  Now, you’re seeing a ton of work going into how to apply AI to help folks bring these kinds of more complex flows under control. There’s so much to learn, but if AI can help you take care of the plumbing, if you will, you can advance much faster. We already extensively use AI for SoCs or packaged designs where you have tens and hundreds of billions of transistors. Photonics is a different vector. The signal itself is much more complex than electrical. The optimization that you have to go through is much more complex. But AI can help get a handle on that, so as we go forward, you’ll start to see these kinds of complexities simplified for people.

SE: Is there something analogous to error correction/parity checks in the photonics world?

Lamant: That doesn’t really have an analogy in photonics, because error correction is about knowing the original signal and comparing it to what arrives. Once you have reconstituted your data, and it’s back to being a digital set of bits, then you have parity checks and other schemes, but those have nothing to do with photonics, because photonics is the physical link. In the physical link you can do retiming and a lot of other things, but the error correction happens independently, on both sides.

Heins: Tuning might be something closer to it. If my resonance frequencies are not as expected, can I detect that and then adjust for it? That happens a lot. You could think of those kinds of things as error correction.
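
A rough illustration of the detect-and-adjust loop Heins describes: a thermal tuning controller for a ring resonator might be sketched as below, with made-up gains and a toy device model standing in for real calibration.

```python
def tune_ring(read_resonance_nm, target_nm,
              gain_mw_per_nm=5.0, tol_nm=0.01, max_iter=50):
    """Proportional tuning loop: nudge heater power until the measured
    resonance sits within tolerance of the target wavelength."""
    heater_mw = 0.0
    for _ in range(max_iter):
        error_nm = target_nm - read_resonance_nm(heater_mw)
        if abs(error_nm) < tol_nm:
            return heater_mw  # locked onto the target resonance
        heater_mw += gain_mw_per_nm * error_nm
    raise RuntimeError("failed to lock resonance within max_iter steps")

# Toy device model: resonance red-shifts 0.1 nm per mW of heater power,
# and fabrication left it 0.4 nm short of the 1550.0 nm target.
dev = lambda p_mw: 1549.6 + 0.1 * p_mw
print(tune_ring(dev, 1550.0))  # settles near 3.9 mW
```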

Pond: Most of the kind of error correction we’re talking about just uses all the standard methods, whether you have an optical link or a copper link. But there are some really interesting things. We had a workflow, developed between Ansys and Cadence a few years ago on a PAM-4 system, where we simulated the driver and the photonic link together. You look into shifting the timing of signals to compensate for different effects. If you look at the eye at different locations, it may look completely distorted and wrong, because you’re pre-compensating for an effect that’s going to come later through the photonic portion of the link. That’s one of the reasons why it’s important to be able to do the full system simulation. You can’t just independently optimize the driver electronics and the photonics. They have to be done together, so you can perform the signal correction work.
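
The pre-compensation Pond describes, distorting the transmitted signal so it arrives clean after the link, can be sketched in a few lines. This is an illustrative FIR pre-emphasis against an assumed two-tap low-pass channel, not the actual Ansys/Cadence flow.

```python
import numpy as np

# Assumed channel: a two-tap low-pass that smears symbols together
# (inter-symbol interference), standing in for the electro-optic link.
channel = np.array([0.7, 0.3])

# Random PAM-4 symbol stream.
rng = np.random.default_rng(0)
symbols = rng.choice([-3, -1, 1, 3], size=1000).astype(float)

# Pre-compensate with the channel's truncated inverse: since
# H(z) = 0.7 + 0.3 z^-1, 1/H(z) expands to a geometric tap series.
n_taps = 16
inv = np.array([(1 / 0.7) * (-0.3 / 0.7) ** k for k in range(n_taps)])

tx = np.convolve(symbols, inv)[:len(symbols)]        # looks "distorted" at TX
rx_plain = np.convolve(symbols, channel)[:len(symbols)]
rx_precomp = np.convolve(tx, channel)[:len(symbols)]  # clean after the channel

print("worst-case error without pre-comp:", np.max(np.abs(rx_plain[1:] - symbols[1:])))
print("worst-case error with pre-comp:   ", np.max(np.abs(rx_precomp[1:] - symbols[1:])))
```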

Heins: You do things like equalization. Dispersion is another one. You get different wavelengths traveling at different speeds, and we compensate for that. At the physical level, there are some corrections that do take place, depending on the kind of system you’re trying to make. In coherent systems, where path lengths matter and phase matters, it’s more about trying to make the circuit correct by construction, so that you don’t encounter problems.

That raises another issue, which is manufacturing variances. There, you’re back to doing lots of sensitivity analysis through Monte Carlo-type simulations, parameterized simulations, etc., where you’re trying to get a feel for the sensitivity of your device, to a shift that could occur, either through the manufacturing process or just as this system sits in its ecosystem of whatever’s around it. It’s not quite error correction, per se, but certainly trying to design for that is something we care about.
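
A toy version of the Monte Carlo sensitivity sweep Heins mentions might assume a fabrication-varied waveguide width and a linear resonance sensitivity. The numbers below are illustrative only; a real analysis would drive a mode solver and a full link simulation.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Assumed process variation and sensitivity (illustrative values):
# waveguide width varies ~N(0, 5 nm); resonance shifts ~1 GHz per nm.
width_sigma_nm = 5.0
ghz_per_nm = 1.0

dwidth = rng.normal(0.0, width_sigma_nm, N)
dfreq_ghz = ghz_per_nm * dwidth

# How often does the as-fabricated resonance land outside a +/-10 GHz
# range that a tuning loop could recover?
out_of_range = np.mean(np.abs(dfreq_ghz) > 10.0)
print(f"resonance shift sigma: {dfreq_ghz.std():.1f} GHz")
print(f"fraction outside +/-10 GHz tuning range: {out_of_range:.2%}")
```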

SE: Any concluding thoughts?

Lamant: There is a lot of wondering and pondering right now, but it’s also exciting. We’ve reached the point where photonics is here to stay and will be part of more and more things. Looking forward, the interesting question is where it will become part of the actual data processing. Sensing is a terrific application for photonics, but I am not totally sold on the actual data processing. I’m not even using the word ‘computing’ here, because processing and computing are very different things. Photonics is probably never going to do general computing. It may do specialized niche processing, like Fourier transforms, and it needs to be part of a system.

Heins: It comes down to two things. What will really happen with quantum computing? And will quantum computing use photonics? A lot of people are looking at photonics for quantum computing because you can do a lot more of that work at room temperature than at 4 Kelvin or something like that — not all of it, but big chunks of it. If quantum computing actually becomes more than prototypes, and photonics is a big part of that, that could shift the answer. The other big issue in compute is we don’t have memory for photonics. If someone makes a breakthrough where suddenly states can be stored in some fashion, then all bets are off and everything changes again. But at this point, I don’t see anything promising.

One of the biggest challenges we have going forward for the whole ecosystem, in general, is lack of standards in this space, which makes interoperability between tools from our companies very difficult. The signal in photonics is very complex. It’s actually complex math, with real and imaginary parts. There are a lot of extra things that we have to take into account, and a lot of times we don’t even have common nomenclature or agreement on metrics and how to measure things. This is going to take time, but it’s being pushed by customers driving us to work together. For example, chiplets are great for photonics because a photonic IC is a chiplet. But all of a sudden, now you’re in a mixed domain, multi-physics type of environment, and there are some huge challenges to make that all work together. We have a pretty good handle on system functional verification, design-for-test, and all these things in the electronic IC world. In photonics, we’ve got a lot of work to do.

Pond: For me, it’s been exciting. I’ve been doing this for more than 20 years. In 2022, when I saw the first product with fibers actually coming out of the package, that was the dream from 20 years back. It took a lot of effort to get there. Things have been maturing very fast, especially in the last decade. That’s really promising from an EDA/EPDA-type of workflow perspective. The datacoms, as we’ve all said, are proven and not going to go away, given the investment from foundries, which is going to continue and even accelerate. It’s exciting times for all these other applications, from sensing to quantum and so on. There’s a lot of innovation possible. It’s not clear what’s going to be a winner yet and what’s not, but it’s a great time to be in photonics.

Read parts one and two of the discussion:
Photonics: The Former And Future Solution
Twenty-five years ago, photonics was supposed to be the future of high technology. Has that future finally arrived?
The Challenges Of Working With Photonics
From curvilinear designs to thermal vulnerabilities, what engineers need to know about the advantages and disadvantages of photonics.



V2X Path To Deployment Still Murky

By Ann Mutschler, March 7, 2024, 09:05

Experts at the Table: Semiconductor Engineering sat down to discuss Vehicle-To-Everything (V2X) technology and the path to deployment, with Shawn Carpenter, program director for 5G and space at Ansys; Lang Lin, principal product manager at Ansys; Daniel Dalpiaz, senior manager product marketing, Americas, green industrial power division at Infineon; David Fritz, vice president of virtual and hybrid systems at Siemens EDA; and Ron DiGiuseppe, senior marketing manager, automotive IP segment at Synopsys. What follows are excerpts from that conversation.

L-R: Ansys’ Carpenter; Ansys’ Lin; Infineon’s Dalpiaz; Siemens EDA’s Fritz; Synopsys’ DiGiuseppe.

SE: What is the potential of vehicle-to-everything technology, and what role will the semiconductor ecosystem play in making this a reality?

DiGiuseppe: V2X is a technology that’s not just years, but decades, in the making. It initially started as a dedicated short-range communications (DSRC) type of technology, and has globally transitioned into a cellular technology, although many of those V2X applications are not just cellular. There are other spectrum allocations V2X can run on, including WiFi or other general-use technology. So it’s not limited to cellular. Also, it’s not just a technology. It’s an application, an outcome, and there are a lot of valuable uses, many of which are safety-related, but there are others, such as efficiency of traffic management notifications. V2X has a wide number of uses. The deployment will be done in stages, and there’s a lot of activity even though it’s taken a long time.

Lin: When I see the keyword V2X, it reminds me of everything about how the car can communicate with anything in the world. It’s a very exciting moment that we’re here today, able to build the kind of technology that enables communication between vehicles and people, with network infrastructure, and from car to car. Today, some of this is already implemented. For instance, in car network systems we can already connect our phone to the car, but we’re still in the first mile. We’ve started the journey, but we have a long way to go on how to connect car-to-car, how to connect the car to the entire network infrastructure, and to the internet. There are a lot of unknowns on the road as we start driving on this journey, and safety and security are definitely the biggest concerns. What if my network is compromised?

Dalpiaz: V2X is part of a much bigger smart grid ecosystem. This will certainly play a very important role, especially as the grid becomes smart and decentralized. This is what will enable the future energy ecosystem, having renewable energies and energy storage systems all connected. And as we see more EVs being used as mobile battery storage, this is something that will certainly enable, and be part of, the smart grid ecosystem that everybody’s talking about.

Fritz: The days of independent semiconductor and software development are over. It is the need for OEMs to control their own destiny, driven by growing consumer and competitive demand, that has all but eliminated the ability to sell a one-size-fits-all product. We’ve known for a very long time that software needs to drive semiconductors, and semiconductors need to drive software. This symbiotic relationship, and the tools and methodologies needed to support this paradigm shift, are essential to producing a highly successful, complex, and competitive solution that meets consumer demands.

SE: What are the discrete pieces of V2X that need to be connected?

Dalpiaz: From the semiconductor point of view, especially with the usage of wide bandgap materials, a few companies are seeing that it’s possible to increase efficiency and power density. Being able to not only provide such solutions, but have everything connected in one box, is part of the smart ecosystem. Then, having the electric vehicles, energy storage, solar — everything combined into one box. Twenty years ago, before the iPhone, we used to have a fax machine, a camera for photographs, a computer. The future of this ecosystem is going to have one box sitting in your home, and have all this stuff connected together. So from the semiconductor point of view, especially with silicon carbide, it is something that is possible today, and it can achieve a very high level of efficiency — about 99%, very close to 100%. And of course, we need to make the system smaller to fit in a vehicle.
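
The loss arithmetic behind that efficiency figure is simple, and shows why a couple of percentage points matter when the system has to shrink to fit in a vehicle. The charger rating below is illustrative, not a figure from the panel.

```python
# Heat dissipated by an onboard charger at different efficiencies.
# 11 kW is an illustrative AC charging power, not a quoted spec.
p_in_w = 11_000
for eff in (0.97, 0.99):
    loss_w = p_in_w * (1 - eff)
    print(f"{eff:.0%} efficient -> {loss_w:.0f} W of heat to remove")
# 97% -> 330 W, 99% -> 110 W: the two-point gain cuts the cooling load 3x.
```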

DiGiuseppe: One of the key stakeholders is the cellular companies. When we look at cellular V2X, one of the main challenges is interoperability. You have different devices in different model-year cars, so for vehicle-to-vehicle communications, those different devices need to be interoperable. Then the car will be talking to the infrastructure, so the roadside units need to be interoperable with the cars and the devices in the cars. Then, of course, you have vehicle-to-pedestrian, and vehicle-to-e-mobility like bicycles and motorcycles, with interoperability between all the devices over the medium. Whether it’s cellular or Wi-Fi or other technologies, it all needs to be interoperable. That will allow deployments in one locality to work in another, because even if devices are interoperable in one deployment in one region, we’ve got to make sure they’re also interoperable in other regions. So it’s a large-scale interoperability goal.

Lin: Ron, you’re talking about interoperability, and Daniel talked about the ecosystem. From my side, I would also mention that some standards are necessary. For EDA to help build such an ecosystem and such chips, we need some rules to give to engineers as to what should be followed. There are two important standards in my mind. One is the functional safety standard ISO 26262, which regulates safety requirements for on-road vehicle chip design. The other is the cybersecurity standard ISO 21434. If I make a tool, I probably will follow those standards, and then think about how the tool could help users decide pass/fail criteria for their design, making sure it meets the security and safety targets from the standards.
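
As a sketch of what such a pass/fail check might look like, consider the hardware architectural metric targets commonly cited from ISO 26262-5 (SPFM, LFM, and PMHF per ASIL). The evaluator below is hypothetical, and the thresholds should be verified against the standard for any real program.

```python
# Commonly cited ISO 26262-5 hardware architectural metric targets
# per ASIL (illustrative; verify against the standard itself).
TARGETS = {
    #          SPFM   LFM   PMHF (failures per hour)
    "ASIL-B": (0.90,  0.60, 1e-7),
    "ASIL-C": (0.97,  0.80, 1e-7),
    "ASIL-D": (0.99,  0.90, 1e-8),
}

def check_metrics(asil: str, spfm: float, lfm: float, pmhf_per_h: float) -> bool:
    """Return True if all three metrics meet the target for the given ASIL."""
    spfm_t, lfm_t, pmhf_t = TARGETS[asil]
    return spfm >= spfm_t and lfm >= lfm_t and pmhf_per_h < pmhf_t

# Example: a design targeting ASIL-D with 99.2% SPFM, 91% LFM, 6 FIT PMHF.
print(check_metrics("ASIL-D", 0.992, 0.91, 6e-9))  # True: all targets met
```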

DiGiuseppe: In addition to standards, last October the U.S. Department of Transportation released its national V2X deployment plan. That plan, which is still in the draft feedback stage, lays out — at least in the U.S. — the whole timeline for deployments. That kind of oversight plan overlays onto the standards Lang was just talking about. The deployment plan outlines the different contributions from all the different stakeholders, from the automakers/OEMs to the software developers for the applications. So overlaid on top of the standards is a government deployment plan. Plus, there are a lot of government stakeholders, like the FCC allocating spectrum and the Department of Transportation coordinating all these deployments, in addition to the technology providers.

Fritz: It would take days to adequately answer those questions, but at the core, the root design components are connectivity, power, performance, and acceleration. Connectivity with the proper protocols allows computational tasks to be distributed. This is particularly important in automotive, where the physical distance between sensing, actuating, and computing nodes is critical for predictable performance. In the case of V2X, connectivity enables the normalization of external data, whether it involves smart city infrastructure or another vehicle. It’s important to note that the form of the shared data grows exponentially with its capacity to describe the environment, and so do the compute requirements to process and understand it. For example, a data form that can describe signage in the U.S. is relatively small, but one that is universal, with regional variations still recognizable, is much larger and more ambiguous. This drives design parameters that directly impact manufacturing, development, and service cost functions. Further, the normalization of the data has an impact on the overall design and design component interactions. In the case of power, it goes without saying that high compute requirements, and the associated cooling, can have a significant impact on EV range and manufacturing costs. Performance can take many forms, but as software loads increase with hypervisors, specialized operating systems, and protocol stacks, not to mention very complex application software, all must meet stringent mission-critical requirements. Finally, acceleration is of growing importance because it allows workloads to be handed off to specialized hardware that is better equipped to handle them. For example, running AI inferencing on a CPU is typically far slower and more power-hungry than on an NPU, but a GPU could be idle and available to do the same task. On the other hand, a small CNN can be handled quite easily on a CPU with a few simple instructions. It is at the intersection of these major design components where an OEM will find its differentiation. Therefore, having a system capable of exploring this complex hardware and software space quickly, and with a small team, is critical for an OEM, so it can demand of its suppliers what is required for the success of its platforms. Again, controlling your own destiny is essential to survival.
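
Fritz’s point about handing work to whichever engine is best equipped, or simply idle, can be pictured as a small dispatch policy. The sketch below is purely illustrative: the device names, threshold, and policy are assumptions, not a real automotive runtime API.

```python
# Hypothetical sketch of accelerator-aware dispatch: prefer the NPU for
# large AI inference, fall back to an idle GPU, and keep small CNNs on the
# CPU. The size threshold is an illustrative placeholder.

from dataclasses import dataclass

@dataclass
class Device:
    name: str
    busy: bool = False

def pick_device(model_macs: int, npu: Device, gpu: Device, cpu: Device) -> Device:
    SMALL_MODEL = 5_000_000  # a small CNN: cheap enough for the CPU (illustrative)
    if model_macs < SMALL_MODEL:
        return cpu           # a few simple instructions suffice
    if not npu.busy:
        return npu           # best performance per watt for inference
    if not gpu.busy:
        return gpu           # an idle GPU beats waiting on the NPU
    return cpu               # last resort: slower and more power-hungry

npu, gpu, cpu = Device("npu", busy=True), Device("gpu"), Device("cpu")
print(pick_device(2_000_000_000, npu, gpu, cpu).name)  # "gpu"
```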

SE: With all of this interoperability, what happens when there are parts of the ‘everything’ — whether it’s the car or the infrastructure or pedestrians — that are not updated with the latest technologies, or with the different pieces that need to be there for connectivity?

DiGiuseppe: In addition to that challenge, there is backward compatibility for automotive. Someone buying a car in 2025 would expect any V2X technology in it to still work in 2040. But in the meantime, all those standards we’re talking about continue to evolve, so they need to be backward compatible.
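
One common way to get that kind of backward compatibility is capability negotiation, where two endpoints settle on the newest message-set version both support. A hypothetical sketch; the version numbers and names are invented.

```python
# Hypothetical sketch of V2X capability negotiation: two endpoints agree on
# the newest message-set version both support, so 2025 hardware can still
# interoperate with 2040 infrastructure. Versions here are invented.

def negotiate(ours: set, theirs: set):
    common = ours & theirs
    return max(common) if common else None  # newest mutually supported version

obu_2025 = {1, 2, 3}          # versions a 2025 on-board unit ships with
rsu_2040 = {2, 3, 4, 5, 6}    # a 2040 roadside unit keeps older versions

print(negotiate(obu_2025, rsu_2040))  # 3 -> fall back to the newest common set
```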

Carpenter: This highlights the need for a digital twin capability for modeling this infrastructure, to be able to understand that when we get two years down the road, some devices may not be reprogrammable. We may not be able to flash a particular device. We need to be able to look at that, and simulate it in advance to understand what will happen. What will this do? We’re seeing this show up a little bit, even giving a nod to what Ron was talking about earlier with interoperability. We have customers who want to validate real hardware they’re developing on the lab bench, but they want to do it with the fidelity of a real system operating on a car, in a virtual city, with the live interaction of the channel with a gNodeB 5G base station mounted on a building someplace, and they want to know how it will work in the context of the situation it’s supposed to serve. And if something goes wrong in that scene, can we introduce something into this device and run our real silicon development platform against it to understand what happens? If we go into a deep shadow, a deep fade area, and I’m not getting updates, yet I’m hurtling down the road at a certain speed, how long can I do this before I receive corrective information? What if someone’s software stack out there doesn’t get reprogrammed, or doesn’t get the latest version of the standard safety protocols, or something like that? We’re going to need this ability to take models of things that were built two or three years ago, put them in today’s infrastructure, and understand in advance what’s going to happen, so that we have an approach for this. This is what the Department of Defense is doing today with its digital thread enablement: having a way to capture legacy models of what was built years ago, apply them in modern missions, and understand, ‘Does it work? Does it fit? Does it not fit? What do we need to do to the existing system to make sure we’re safe here?’ That is an approach we clearly see the automakers beginning to look at as a way to future-proof some of these systems and make sure they’ve got a way to test them as they go forward.
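
The deep-fade question reduces to a simple budget: at a given speed, how far does the vehicle travel on stale data before corrective information must arrive? A minimal sketch; the speed, outage times, and safety budget are illustrative numbers, not values from any standard.

```python
# Minimal sketch of the stale-data budget in a deep-fade scenario: how far
# does a vehicle travel between the last good V2X update and the deadline
# for corrective information? All numbers are illustrative.

def stale_distance_m(speed_kmh: float, outage_s: float) -> float:
    return speed_kmh / 3.6 * outage_s  # km/h -> m/s, times outage duration

SAFETY_BUDGET_M = 150.0  # hypothetical: max distance tolerable on stale data

for outage in (1.0, 3.0, 10.0):  # seconds without updates in the fade
    d = stale_distance_m(110.0, outage)  # highway speed, 110 km/h
    status = "OK" if d <= SAFETY_BUDGET_M else "NEEDS FALLBACK"
    print(f"{outage:4.1f} s outage -> {d:6.1f} m on stale data ({status})")
```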

Fritz: It’s become very clear from several popularized incidents that simply stopping and waiting for tech support to find you and get you going again is not going to be a successful strategy. In the end, the vehicle must make decisions at least as thoughtful as those an average human would make. This is entirely possible, but not if too much emphasis is placed in the design phase on the dependencies between communicating (or non-communicating) actors. For this reason, sophisticated in-vehicle decision-making will always be required for these systems to be widely accepted.

SE: How does the design team stay up to date with everything?

DiGiuseppe: On the vehicle side, they’re going to be relying on over-the-air (OTA) software updates, which are relatively new in the automotive industry. Clearly, once we identify the need for a software update, we’re going to have to roll it out, and OTA obviously will be used hand-in-hand with updates to V2X as it moves forward.
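
Before any OTA image is applied, it has to be gated on version and integrity. Here is a minimal sketch using only the Python standard library; real automotive OTA stacks add asymmetric signatures, rollback protection, and staged rollout, so treat this as the shape of the check, not a complete design.

```python
# Minimal sketch of an OTA update gate: accept an image only if its version
# is newer and its digest matches the manifest. Real deployments add
# asymmetric signatures and rollback protection; this shows the shape only.

import hashlib

def ota_acceptable(installed: tuple, manifest_version: tuple,
                   manifest_sha256: str, payload: bytes) -> bool:
    if manifest_version <= installed:      # never downgrade silently
        return False
    digest = hashlib.sha256(payload).hexdigest()
    return digest == manifest_sha256       # integrity check vs. manifest

payload = b"...firmware image bytes..."
manifest = hashlib.sha256(payload).hexdigest()
print(ota_acceptable((2, 4, 0), (2, 5, 0), manifest, payload))  # True
```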

SE: From a developer standpoint, they have to design to all these regulations. What are the issues here?

Lin: As a software developer, if you think about a vehicle 10 years ago, you mainly just replaced hardware. You replaced your brakes, you replaced your engine, you added some fluid. That was the old style. Now, if you have the V2X network, you’d probably expect daily updates, because software is evolving daily, and your whole communication infrastructure is caught up in the evolution of the internet, so you’re going to have to keep pace with it. That’s a lot of work for developers.

Carpenter: There could be implications for edge processing. The telecommunications providers are going to need to put a lot more compute closer to the radio head, and they’re already exploring the possibilities of getting not just CPU cores, but GPU cores, tensor processing units, and who knows what else for AI, as part of this safety infrastructure and information/infotainment delivery. There’s a lot more compute that’s going to have to happen with much shorter latency. Then there’s augmented reality with heads-up displays — imagine the possibilities coming in safety systems with heads-up displays in cars, and then imagine the amount of processing that’s going to take. So the telecom providers will need to be a major part of that, together with the local government regulatory groups that are going to foster that safety system. Each municipality probably has to decide what to adopt, what level of the standard to use, and how to deliver it. Who invests in that? The future is really exciting, but there are a few things yet to be sorted out in terms of the investment needed to really deliver on that promise.
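
Carpenter’s latency point is a budget problem: the hops through radio, transport, and edge compute must sum to less than the application deadline. A toy sketch with illustrative numbers:

```python
# Toy latency-budget check for an edge-hosted V2X safety function: the sum
# of the hop latencies must fit within the application deadline. All numbers
# are illustrative placeholders, not measurements.

BUDGET_MS = 20.0  # hypothetical deadline for a safety warning round trip

hops_ms = {
    "uplink radio": 4.0,
    "transport to edge site": 2.0,
    "edge inference (GPU/TPU)": 8.0,
    "downlink radio": 4.0,
}

total = sum(hops_ms.values())
print(f"total {total:.1f} ms of {BUDGET_MS:.1f} ms budget "
      f"({'fits' if total <= BUDGET_MS else 'over budget'})")
```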

Dalpiaz: I’m more on the infrastructure side, and one of the questions we always have is, ‘With all this focus on renewables and decentralization of the grid, can the grid handle such expectations and such projects?’ With more people connecting and feeding energy back into the grid, managing all of this is the question you always have to work through and consider.

Fritz: The fact is that keeping up to date is not practical. However, that doesn’t mean a methodology cannot be employed to accept changes into the development system, and therefore fold them into the development process. CI/CD systems with digital twin golden models already are being developed, with nightly regressions run against complex (and possibly changing) requirements. In this way, requirement changes are addressed automatically as they occur, and solutions can be rolled into an Agile methodology through nightly regressions. This is an important benefit of a modern development methodology that has been used in other industries for years, but it’s only now gaining traction in progressive automotive companies.
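
Fritz’s methodology can be reduced to a nightly loop: run each requirement’s scenario against the digital twin and diff the result with a stored golden output. The sketch below is hypothetical; the scenario interface, requirement IDs, and tolerance are invented for illustration.

```python
# Hypothetical sketch of a nightly regression over digital-twin golden
# models: each requirement maps to a scenario whose twin output is diffed
# against a stored golden result. Interfaces are invented for illustration.

GOLDEN = {"REQ-042-braking": 41.7, "REQ-107-lane-keep": 0.12}

def run_scenario(req_id: str) -> float:
    # Stand-in for executing the scenario on the digital twin.
    return {"REQ-042-braking": 41.7, "REQ-107-lane-keep": 0.19}[req_id]

def nightly_regression(tolerance: float = 0.05) -> list:
    failures = []
    for req_id, golden in GOLDEN.items():
        actual = run_scenario(req_id)
        if abs(actual - golden) > tolerance * max(abs(golden), 1e-9):
            failures.append((req_id, golden, actual))
    return failures

print(nightly_regression())  # [('REQ-107-lane-keep', 0.12, 0.19)]
```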

Related Reading
Growing Challenges For Increasingly Connected Vehicles
OEMs have high expectations for connected vehicles and global growth opportunities, but it’s not that simple.
Software-Defined Vehicles Ready To Roll
New approach could have big effects on cost, safety, security, and time to market.

The post V2X Path To Deployment Still Murky appeared first on Semiconductor Engineering.

Commercial Chiplet Ecosystem May Be A Decade Away

29 February 2024, 09:08

Experts at the Table: Semiconductor Engineering sat down to talk about the challenges of establishing a commercial chiplet ecosystem with Frank Schirrmeister, vice president solutions and business development at Arteris; Mayank Bhatnagar, product marketing director in the Silicon Solutions Group at Cadence; Paul Karazuba, vice president of marketing at Expedera; Stephen Slater, EDA product management/integrating manager at Keysight; Kevin Rinebold, account technology manager for advanced packaging solutions at Siemens EDA; and Mick Posner, vice president of product management for high-performance computing IP solutions at Synopsys. What follows are excerpts of that discussion.

L-R: Arteris’ Schirrmeister, Cadence’s Bhatnagar, Expedera’s Karazuba, Keysight’s Slater, Siemens EDA’s Rinebold, and Synopsys’ Posner.

SE: There’s a lot of buzz and activity around every aspect of chiplets today. What is your impression of where the commercial chiplet ecosystem stands today?

Schirrmeister: There’s a lot of interest today in an open chiplet ecosystem, but we are probably still quite a bit away from true openness. The proprietary versions of chiplets are alive and kicking out there. We see them in designs. We vendors are all supporting those to make them a reality, like the UCIe proponents, but it will take some time to get to a fully open ecosystem. It’s probably at least three to five years before we get to a PCI Express-type exchange environment.

Bhatnagar: The commercial chiplet ecosystem is at a very early stage. Many companies are providing chiplets, are designing them, and they’re shipping products — but they’re still single-vendor products, where the same company is designing all the pieces. I hope that with the advancements the UCIe standard is making, and with more standardization, we eventually can get to a marketplace-like environment for chiplets. We are not there.

Karazuba: The commercialization of homogeneous chiplets is pretty well understood by groups like AMD. But for the commercialization of heterogeneous chiplets, which means chiplets from multiple suppliers, there are still a lot of open questions.

Slater: We participate in a lot of the board discussions, and attend industry events like TSMC’s OIP, and there’s a lot of excitement out there at the moment. I see even midsize and small customers starting to think about their development plans and what chiplets should be in them. I do think those that are going to be successful first will be those within a singular foundry ecosystem like TSMC’s. Today, if you’re selecting your IP, you’ve got a variety of ways to pick and choose which IP, to see what’s been taped out before and how successful it’s been, so you have a way to manage your risk and your costs as you put things together. What we’ll see in the future is that now you have a choice: are you purchasing IP, or are you purchasing chiplets? Crucially, it’s all coming from the same foundry and put together in the same manner. The technical considerations of things like UCIe standard packaging versus advanced packaging, and the analysis tool sets for high-speed simulation, as well as for things like thermal, are going to become that much more important.

Rinebold: I’ve been doing this about 30 years, so I date back to some of the very earliest days of multi-chip modules and such. When we talk about the ecosystem, there are plenty of examples out there today where we see HBM and logic getting combined at the interposer level. This works if you believe HBM is a chiplet; that’s a whole other argument, but some would say it falls into that category. The idea of a true LEGO, snap-together mix and match of chiplets continues to be aspirational for the mainstream market, and there are some business impediments that need to get addressed. Again, there are exceptions in some of the single-vendor solutions, where it’s more or less homogeneous integration, or an entirely vertically integrated environment where single vendors are integrating their own chiplets into some pretty impressive packages.

Posner: Aspirational is the word we use for an open ecosystem. I’m going to be a little bit more of a downer by saying I believe it’s 5 to 10 years out. Is it possible? Absolutely. But the biggest issue we see at the moment is a huge knowledge gap in what that really means. And as technology companies become more educated on really what that means, we’ll find that there will be some acceleration in adoption. But within what we call ‘captive’ — within a single company or a micro-ecosystem — we’re seeing multi-die systems pick up.

SE: Is it possible to define the pieces we have today from a technology point of view, to make a commercial chiplet ecosystem a reality?

Rinebold: What’s encouraging is the development of standards. There’s some adoption. We’ve already mentioned UCIe for some of the die-to-die protocols. Organizations like JEDEC have announced the extension of their JEP30 PartModel format into the chiplet ecosystem to incorporate chiplet-style data. Think about this as an electronic data sheet. A lot of this work has been incorporated into the CDX working group under Open Compute. That’s encouraging. There were some comments a little bit earlier about having an open marketplace. I would agree we’re probably 3 to 10 years away from that coming to fruition. The underlying framework and infrastructure are there, but a lot of the licensing and distribution issues have to get resolved before you see any type of broad adoption.
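
The electronic-data-sheet idea is essentially machine-readable metadata per chiplet. Below is a deliberately simplified sketch of what such a record might carry; the field names are invented for illustration and do not follow the actual JEP30 schema.

```python
# Hypothetical, much-simplified chiplet "electronic data sheet" record in
# the spirit of JEP30-style part models. Field names are invented and do
# not follow the actual JEP30 schema.

chiplet_datasheet = {
    "part_id": "acme-npu-x1",
    "process": "N3",
    "die_area_mm2": 42.5,
    "d2d_interface": {"standard": "UCIe 1.1", "lanes": 64, "gts": 16},
    "power_w": {"typical": 3.2, "max": 5.0},
    "temp_range_c": (-40, 125),
}

def compatible(a: dict, b: dict) -> bool:
    """Toy check: two chiplets can link if they speak the same D2D standard."""
    return a["d2d_interface"]["standard"] == b["d2d_interface"]["standard"]

print(compatible(chiplet_datasheet, chiplet_datasheet))  # True
```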

Posner: The infrastructure is available. The EDA tools to create, to package, to analyze, to simulate, to manufacture — those tools are all there. The intellectual property that sits around it, either UCIe or some of the more traditional die-to-die interfaces — all of that’s there. What’s not established are the full methodologies and flows that lead to interoperability. Everything within captive is possible, but a broader ecosystem, a marketplace, is going to require silicon interoperability, simulation, packaging, all of that. That’s the area we believe is missing — and still being built.

Schirrmeister: Do we know what’s required? We probably can define that reasonably well. If the vision is an open ecosystem with IP on chiplets that you can just plug together like LEGO blocks, then the IP industry informs us of what’s required, and then there are some gaps on top of that. I hear people from the hard IP world talking about the equivalent of PDKs for chiplets, but today’s IP ecosystem and the IP deliverables tell us it doesn’t work like LEGO blocks yet. We are improving every year. But this whole ‘I take my whiteboard and then everything just magically functions together’ is not what we have today. We need to think really hard about what the additional challenges are when you disaggregate a chip into chiplets and protocols. Then you get big systemic issues to deal with, like how do you deal with coherency across chiplets? It was challenging enough to get it done on a chip. Now you potentially have to deal with other partnerships you don’t even own. This is not a captive environment in an open ecosystem. That makes it very challenging, and it creates job security for at least 5 to 10 years.

Bhatnagar: On the technical side, what’s going well is adoption. We can see big companies like Intel, and then of course, IP providers like us and Synopsys. Everybody’s working toward standardizing chiplet integration, and that is working very well. EDA tools are also coming up to support that. But we are still very far from a marketplace because there are many issues that are not sorted out, like licensing and a few other things that need a bit more time.

Slater: The standards bodies and networking groups have excited a lot of people, and we’re getting a broad set of customers coming along. One point I’ve been thinking about is whether this is only for very high-end compute. From the companies I see presenting in those types of forums, it’s even companies working in automotive or aerospace/defense, planning out their future for the next 10 years or more. In the automotive case, it was a company that was thinking about creating chiplets for internal consumption — so maybe reorganizing how they look at creating many different variations or evolutions of their products, trying to do it with more modular chiplet types of blocks. ‘If we take the microprocessor part of it, would we sell that as a chiplet externally for other customers to integrate into a bigger design?’ For me, the aha moment was seeing how broad the application would be. I do think the standards work has been moving very fast, and that’s worked really well. For instance, at Keysight EDA, we just released a chiplet PHY designer. It’s a simulation for the high-speed digital link for UCIe, and that only comes about by having a standard that’s published, so an EDA company can take a look at it and realize what it needs to do. The EDA tools are ready to handle these kinds of things. The last point is that in order to share the IP, and to ensure that it’s available, database and process management is going to become all the more important. You need to keep track of which chip is made on which process, and be able to make it available inside the company to other potential users.

SE: What’s in place today from a business perspective, and what still needs to be worked out?

Karazuba: From a business perspective, speaking strictly of heterogeneous chiplets, I don’t think there’s anything really in place. Let me qualify that by asking, ‘Who is responsible for warranty? Who is responsible for testing? Who is responsible for faults? Who is responsible for supply chain?’ With homogeneous chiplets or monolithic silicon, that’s understood, because that’s the way this industry has been doing business since its inception. But when you talk about chiplets that are coming from multiple suppliers, with multiple IPs — and perhaps different interfaces, made in multiple fabs, constructed by a third party, put together by another party, tested by a fourth party, and then shipped — what happens when something goes wrong? Who do you point the finger at? Who do you go to and talk to? If a particular chiplet isn’t functioning as intended, it’s not necessarily that chiplet that’s bad. It may be the interface on another chiplet, or on a hub, whatever it might be. We’re going to get there, but right now that’s not understood. It’s not understood who is going to be responsible for things such as that. Is it the multi-chip module manufacturer, or is it the person buying it? I fear a return to the Wintel issue, where the chipmaker points to the OS maker, which points at the hardware maker, which points at the chipmaker. Understanding of the commercial side is a real barrier to chiplets being adopted. Granted, the technical side is much more difficult than the commercial side, but I have no doubt the engineers will get there quicker than the business people.

Rinebold: I completely agree. What are the repercussions, the warranty-related issues, things like that? I’d also go one step further. If you look at some of the larger silicon foundries right now, there is some concern about taking third-party wafers into their facilities to integrate in some type of heterogeneous, chiplet-type package. There are a lot of business and logistical issues that have to get addressed first. The technical stuff will happen quickly. It’s just a lot of these licensing- and distribution-type issues that need to get resolved. The other thing I want to back up to involves customers in the defense/industrial space. Trust, traceability, and provenance tracking of IP are going to be key for them, because they have such high expectations of multi-die or chiplet-type packaging as an alternative to monolithic scaling. Just look at all the government programs out there right now, with RESHAPE [Reshore Ecosystem for Secure Heterogeneous Advanced Packaging Electronics] and NGMM [Next-Generation Microelectronics Manufacturing] and such. They’re all in on this chiplet perspective, but they’re going to require a lot of security measures to understand who has touched the IP, where it comes from, and how you verify that.

Posner: Micro-ecosystems are forming because of all these challenges. If you naively think you can just go pick a die off the shelf and put it into your device, how do you warranty that? Who owns it? These micro-ecosystems are building up to fundamentally sort that out. So within a couple of different companies, be it automotive or high-performance compute, they’ll come to terms that are acceptable across all of them. And it’s these micro-ecosystems that are really going to end up driving open chiplets, and I think it’s going to be an organic type of growth. Chiplets are available for a specific application today, but we have this vision that someone else could use them, and we see that with the multiple modes being built into the dies. One mode is, ‘I’m connecting to myself. It’s a very tight, low-latency link.’ But I have this vision that in the future I’m going to need an interface or protocol that is more open and uses standard available stacks, and which can be bought off the shelf and integrated. That’s one side of the logistics. I want to mention two more things. It is possible to do interoperability across nodes. We demonstrated our TSMC N3 UCIe with Intel’s in-house UCIe, all put together on an Intel process. This was two separate companies working together, showing the first physical interoperability, so it’s possible. But going forward, that’s still just a small part of the overall effort. In the IP space we’ve lived with an IP model of ‘build once, sell many.’ With the chiplet marketplace, unless there is a revenue stream from that chiplet, it will break that model. Companies think, ‘I only have to buy the IP once, and then I’m selling my silicon.’ But the infrastructure and the resources that are required to build all of this do not go away. There has to be money at the end of that tunnel for all of these different companies to keep investing.

Schirrmeister: Mick is 100% right, but we may have a definition issue here with what we really mean by an ‘open’ chiplet ecosystem. I have two distinct conversations when I talk to partners and customers. On the one hand, you have board designers who are doing more and more integration, and they look at you with a wrinkled forehead and say, ‘We’ve been doing this for years. What are you talking about?’ It may not have been 3D-IC in the classic sense, but they say, ‘Yeah, there are issues with warranties, and the user figures it out.’ The board people arrive at chiplets from one side of the equation, because that’s the next evolution of integration. You need to be very efficient. That’s not what we call an open ecosystem of chiplets here. The idea is that you have this marketplace to mix things up, and you have the economies of scale by selling the same chiplet to multiple people. That’s really what the chip designers are thinking about, and some of them think even further, because if you do it all in true 3D-IC fashion, then you actually have to co-design those chiplets in a way, and that’s a whole other dimension that needs to be sorted out. To pick a little bit on the big companies that have board and chip design groups in house, you see this even within the messaging of these companies. You have people who come from the board side, and for them it’s not a solved problem. It always has been challenging, but they’re going to take it to the next level. The chip guys are looking at this from the perspective of one interface, like PCI Express, now being UCIe. And then I think about this because the networks-on-chip need to become super-NoCs across chiplets, which poses its own challenges. And that all needs to work together. But those are really chiplets designed for the purpose of being in a chiplet ecosystem. To that end, Mick’s estimate of longer than five years is probably correct, because those chiplets, purpose-built for an open ecosystem, have all the challenges the board guys have been dealing with for quite some time. They’re now ‘just getting smaller’ in the amount of integration they do.

Slater: When you put all these chiplets together and start to do that integration, in what order do you start placing the components down? You don’t want to throw away one very expensive chiplet because there was an issue with one of the smaller, cheaper ones. So there is now a lot of thought about how to do something like unit tests on individual chiplets first, and then some form of system test as you go along. That’s definitely something we need to think about. On the business front, the question is who is going to be most interested in purchasing a chiplet-style solution. It comes down to whether you have a yield problem. If your chips are getting to the size where you have yield concerns, then it definitely makes sense to think about using chiplets and breaking the design up into smaller pieces. Not everything scales, so why move to the lowest process node when you could purchase something at a different process node that has less risk and costs less to manufacture, and then put it all together? The ones that have been successful so far — the big companies like Intel and AMD — were already driven to that edge. The chips got to a size that couldn’t fit on the reticle. We think about how many companies fit into that category, and that will factor into whether or not the cost and risk are worth it for them.
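
Slater’s yield argument is easy to quantify with the standard Poisson die-yield model, Y = exp(-area * defect_density). A back-of-envelope sketch, with illustrative numbers:

```python
# Back-of-envelope Poisson yield model, Y = exp(-area * defect_density),
# comparing one large monolithic die against four chiplets of a quarter
# the area each. Defect density and area are illustrative numbers.

from math import exp

D0 = 0.10         # defects per cm^2 (illustrative)
AREA_MONO = 6.0   # cm^2, a reticle-limited die

y_mono = exp(-AREA_MONO * D0)
y_chiplet = exp(-(AREA_MONO / 4) * D0)   # one quarter-size chiplet
y_four_good = y_chiplet ** 4             # all four dies good, untested

print(f"monolithic yield:     {y_mono:.1%}")      # ~54.9%
print(f"single chiplet yield: {y_chiplet:.1%}")   # ~86.1%
print(f"four good chiplets:   {y_four_good:.1%}") # ~54.9%
```

Note that under this simple model the product of four untested chiplets equals the monolithic yield; the economic gain comes from testing and discarding bad chiplets cheaply before assembly instead of scrapping an entire reticle-sized die, which is exactly why the known-good-die test ordering Slater describes matters.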

Bhatnagar: From a business perspective, what is really important is standardization. What’s inside a chiplet is fine, but how it interacts with the other chiplets around it is important. We would like to be able to make something and sell many copies of it. But if there is no standardization, then either we are taking a gamble by going for one approach and assuming everybody moves to it, or we make multiple versions of the same thing, and that adds extra cost. To really justify a business case for any chiplet, or any sort of IP that goes with a chiplet, standardization is key for the electrical interconnect, the packaging, and all other aspects of the system.

Fig. 1: A chiplet design. Source: Cadence.

Related Reading
Chiplets: 2023 (EBook)
What chiplets are, what they are being used for today, and what they will be used for in the future.
Proprietary Vs. Commercial Chiplets
Who wins, who loses, and where are the big challenges for multi-vendor heterogeneous integration.

The post Commercial Chiplet Ecosystem May Be A Decade Away appeared first on Semiconductor Engineering.


Broad Impact From Accelerating Tech Cycles

21 February 2024, 09:01

Experts at the Table: Semiconductor Engineering sat down to discuss the impact of leading edge technologies such as generative AI in data centers, AR/VR, and security architectures for connected devices, with Michael Kurniawan, business strategy manager at Accenture; Kaushal Vora, senior director and head of business acceleration and ecosystem at Renesas Electronics; Paul Karazuba, vice president of marketing at Expedera; and Chowdary Yanamadala, technology strategist at Arm. What follows are excerpts of that conversation. Panelists were chosen by GSA’s EMTECH Interest Group. To view part one of this discussion, click here.


L-R: Accenture’s Kurniawan; Renesas’ Vora; Expedera’s Karazuba; Arm’s Yanamadala.

SE: In the past, a lot of data center applications were for things like enterprise resource planning (ERP), and those were 10- or 15-year cycles. Cycles now are 1 or 2 years at most. With ChatGPT, that’s about six months. How do companies plan for this today?

Kurniawan: In the past, businesses were very focused on just the technology. But technology is everywhere today. ERP is there to support the business initiatives, and there is a very intimate relationship between technology and business at this point. So virtually all businesses are technology businesses. We advise clients before implementing their technologies to think first about, ‘What are your business initiatives? What’s the business strategy? What’s the business imperative for where you want to go? What’s your vision?’ And then, once you understand that and get alignment from the leaders, you can think about the technology. You kind of jump back and forth, because those are really two sides of the same coin. You cannot separate them anymore. And your vision encompasses everything you want to achieve in the future while providing room for flexibility and testing out the technology plan you want to put in place to see how that supports your business vision. With every challenge comes opportunity. Our job as a consultant is really to be able to see what’s happening out there, continuously scanning the market, and trying to get ahead of the curve to advise clients.

Yanamadala: The rapid evolution of advanced technologies like generative AI can present challenges to data centers due to the short technology cycles and demanding workloads. Some of the key challenges with advanced workloads include fluctuating resource needs, because they can demand bursts of high compute. That means static resource allocation will be inefficient in handling these demands. Additionally, the growing demand for heterogeneous computing can present additional challenges in deploying a flexible compute infrastructure. Data centers are adding flexibility through the adoption of containerization and virtualization. Adopting hardware-agnostic software frameworks like TensorFlow and PyTorch also can help facilitate switching between different computing architectures. So can the development of efficient hardware and specialized AI accelerators.
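
The hardware-agnostic point is visible in a few lines of PyTorch: the same model code targets whichever backend a node exposes. A minimal sketch, assuming PyTorch is installed; the backend checks are the standard torch APIs.

```python
# Minimal PyTorch sketch of hardware-agnostic execution: the same module
# runs on CUDA if present, otherwise on the CPU, with no model changes.

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(512, 128).to(device)  # any nn.Module works the same
x = torch.randn(32, 512, device=device)
with torch.no_grad():
    y = model(x)

print(f"ran on {y.device}, output shape {tuple(y.shape)}")
```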

SE: A lot of technology advancements are incremental, but if you get enough of these incremental improvements, they can be combined in ways most people never imagined. We’ve seen systems shrink from mainframes to PCs to smartphones, and now computing is happening just about everywhere. Are we on the cusp of moving beyond the box we’ve been tethered to since the start of computing, particularly with AR/VR?

Vora: I find it fascinating that somebody could wear a pair of glasses, get immersed in that world, and get used to it. From a user experience perspective, it seems like an extreme shift. Although I do see some play in certain verticals, it’s not clear there will be mass consumerization or adoption of this technology.

Kurniawan: Right now, generative AI is getting a lot of attention. ChatGPT captured the attention of hundreds of millions of people in 60 days. That says something. You input a prompt and you get a response back. ChatGPT is super-intuitive. It’s a technology with potential for many killer use cases. AR/VR is a promising technology with upside potential, but there’s still work that needs to be done to tie that technology to the use case. Virtual reality gaming is number one, for sure. But the path to leveraging that technology to enhance how we operate other things still needs more clarity. That said, we recently published a white paper talking about the build-outs around the globe, driven by the combination of public incentives and private investments. Everywhere around the world, everybody wants to build up their manufacturing facilities. We conducted interviews with semiconductor experts, and touched on AR/VR when we asked what they did during COVID when the whole world shut down. Is AR/VR like a hammer looking for nails? The overall response we got was pretty positive. They said that AR/VR probably will be tremendously useful at some future date. And they like where the technologies are going. For example, there are constraints like heat dissipation and the size of the headset, but the belief is the technology will evolve. As it matures to become more user-centric, you might think about using an AR/VR device to control the operation of equipment in a fab. But there is work needed from a value perspective — connectivity and processing, for example.

Karazuba: AR/VR in the past has largely been a victim of its own hype cycle. There are a lot of promises people have made. We’ve spent a little bit of time with AR/VR folks. There’s certainly an acknowledgement that whatever success the Apple AR/VR headset has will largely set the tone for the next half decade of the AR/VR market. These folks are undeterred by that. Are we at a point today where you can walk around all day with mixed reality? No. With a home gaming system, being tied to the wall is probably a small price to pay for constant AC power and the performance advantages that will provide. This is going to take some time. The value proposition is there, but the timing may not be right today. We saw this with the watch and wearables. Now everybody has one of these, but it took five to seven years before it really took off.

Vora: We’ve worn watches for decades, so it’s not something new. It’s just that what we wear now is different. But with AR/VR, we’ve never done that before. How do you suddenly expect massive change like that?

Karazuba: But most of us are wearing eyeglasses. If you have a form factor that is a version of what we have now, where information is just simply overlaid on what we’re seeing, it’s not that far of a jump for mixed reality or augmented reality. However, with virtual reality, I find it hard to believe that people are going to walk into a conference room with a bunch of other people and put a headset on.

Yanamadala: We’ve seen devices and sensors deployed practically everywhere. Platforms that offer high-performance computing, along with secure, power-efficient hardware and connectivity are available today, and they will make this trend possible. But untethered or ambient consumer experiences in the mass market will have their challenges. We will need to invest in substantial infrastructure to enable technology to operate invisibly in the background. So while consumer-facing technology deployments increasingly become untethered, the compute and connectivity infrastructure will still require connections for power and bandwidth.

SE: People have been sounding the alarm for hardware security for years, but with limited success. What’s changed today is that we have many more connected devices and more valuable data. Is the chip industry starting to take this seriously? Or is the problem now so immense and pervasive that anything we do is just going to be a drop in the bucket?

Yanamadala: Security is fundamental from the chip level up, and five years ago we saw an opportunity to proactively improve the quality of chip security. IoT was in its early stages, and each chip vendor had varied and fragmented approaches to security. They also rarely approached an independent evaluation lab to check the robustness of their security implementation. But with increasing connectivity and data becoming more valuable, hackers were paying close attention, and governments were considering what action to take to protect consumers. That’s why in 2019 we launched PSA Certified – to rally the ecosystem to be proactive with security best practices. It’s critically important that chip vendors, software platforms, OEMs, and CSPs can deploy and access standardized Root of Trust services. Security is complicated. You need the whole value chain to work together.

Vora: Security architectures, at least on the hardware side, have come a long way. We pretty much now have semiconductor TPM-like [Trusted Platform Module] capability, with security features built into even small microcontrollers. They have cryptographic engines, randomizers, and all sorts of security elements built in. The fundamental challenge with security is that just putting some security features on a chip and providing all the technology pieces won’t solve the security challenge. Security is more of a system challenge and a policy challenge. In many cases, people have to think about it within the context of the entire network. And then, it’s only as strong as the weakest link in the network. That piece of security is going to grow in complexity as we start seeing more complex use cases with AI coming into play with IoT. On the other side, though, as AI data handling moves closer to the edge, we will start seeing more local inferencing and local data being worked on without the need to mindlessly transport data across layers of networks and across the cloud. We’re going to see lower risk and improvements from a data-in-flight perspective, because of a lot more localization of intelligence and compute happening at different layers of the edge. As we move more to the edge, AI gets more of a hold there. But as a whole, security will remain a challenge. The fundamental challenges of security have not changed. It’s just the context and the systems in which we have to apply them that are different.

Karazuba: The semiconductor industry is finally starting to understand the true nature of what security breaches could mean with the type of data we’re handling. Security is a day-zero responsibility of anyone building a product, whether that product is a chip or a device, and security responsibilities proliferate across the entire lifecycle of any device, from the person architecting the chip, to the person designing the smartphone, to the carrier. I would argue that carrier responsibilities for security go as far as stopping those robocalls that we all get, and the spam calls and phishing calls. The internet service providers have a responsibility to stop the phishing e-mails. That’s all part of security. Obviously, with banks and financial institutions, their security is generally pretty good. But it stretches the entire way, and in the security world, the weakest link is always the security profile of your device. We’re getting better. We always could be better. But I am more encouraged now than I’ve been at any point since I really started looking at the security of devices. I’m more encouraged by the way chips are being designed, deployed, manufactured, and delivered to customers.

Kurniawan: There’s some certification for IoT devices before they’re sent to market, to make sure there’s some security standard they adhere to. But the two key words I mentioned before, collaboration and flexibility, apply to security as well. Collaboration involves seeing where the rest of the system, including other components in the technology set, is going to evolve in the future. And flexibility is required because security is a moving target. It needs to evolve, because as you upgrade your system and your software, the vulnerabilities move as well. You need flexibility and security-minded thinking infused into your chip design.

Related Reading
Preparing For An AI-Driven Future In Chips (part 1 of above roundtable)
Designs need to be flexible enough to handle an onslaught of continuous and rapid changes, but secure enough to protect data.

The post Broad Impact From Accelerating Tech Cycles appeared first on Semiconductor Engineering.
