Using AI/ML To Combat Cyberattacks

By: John Koon

Machine learning is being used by hackers to find weaknesses in chips and systems, but it also is starting to be used to prevent breaches by pinpointing hardware and software design flaws.

To make this work, machine learning (ML) must be trained to identify vulnerabilities, both in hardware and software. With proper training, ML can detect cyber threats and prevent them from accessing critical data. As ML encounters additional cyberattack scenarios, it can learn and adapt, helping to build a more sophisticated defense system that includes hardware, software, and how they interface with larger systems. It also can automate many cyber defense tasks with minimum human intervention, which saves time, effort, and money.

ML is capable of sifting through large volumes of data much faster than humans. Potentially, it can reduce or remove human errors, lower costs, and boost cyber defense capability and overall efficiency. It also can perform such tasks as connection authentication, system design, vulnerability detection, and most important, threat detection through pattern and behavioral analysis.

“AI/ML is finding many roles protecting and enhancing security for digital devices and services,” said David Maidment, senior director of market development at Arm. “However, it is also being used as a tool for increasingly sophisticated attacks by threat actors. AI/ML is essentially a tool tuned for very advanced pattern recognition across vast data sets. Examples of how AI/ML can enhance security include network-based monitoring to spot rogue behaviors at scale, code analysis to look for vulnerabilities on new and legacy software, and automating the deployment of software to keep devices up-to-date and secure.”

This means that while AI/ML can be used as a force for good, inevitably bad actors will use it to increase the sophistication and scale of attacks. “Building devices and services based on security best practices, having a hardware-protected root of trust (RoT), and an industry-wide methodology to standardize and measure security are all essential,” Maidment said. “The focus on security, including the rapid growth of AI/ML, is certainly driving industry and government discussions as we work on solutions to maximize AI/ML’s benefits and minimize any potential harmful impact.”

Zero trust is a fundamental requirement when it comes to cybersecurity. Before a user or device is allowed to connect to the network or server, requests have to be authenticated to make sure they are legitimate and authorized. ML will enhance the authentication process, including password management, phishing prevention, and malware detection.

Areas that bad actors look to exploit are software design vulnerabilities and weak points in systems and networks. Once hackers uncover these vulnerabilities, they can be used as a point of entrance to the network or systems. ML can detect these vulnerabilities and alert administrators.

Taking a proactive approach by doing threat detection is essential in cyber defense. ML pattern and behavioral analysis strengths support this strategy. When ML detects unusual behavior in data traffic flow or patterns, it sends an alert about abnormal behavior to the administrator. This is similar to the banking industry’s practice of watching for credit card use that does not follow an established pattern. A large purchase overseas on a credit card with a pattern of U.S. use only for moderate amounts would trigger an alert, for example.
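
A minimal sketch of this kind of behavioral check, assuming a simple z-score test over a history of transaction amounts; the feature, data, and threshold are illustrative, not any vendor’s detector:

```python
from statistics import mean, stdev

def is_anomalous(history, new_value, threshold=3.0):
    """Flag an observation that deviates sharply from the established
    pattern, using a simple z-score test."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_value != mu
    return abs(new_value - mu) / sigma > threshold

# Established pattern: moderate U.S. purchases, in dollars.
purchases = [42.0, 18.5, 77.3, 25.0, 60.1, 33.4, 49.9]

# A large overseas purchase breaks the pattern and triggers an alert.
print(is_anomalous(purchases, 4200.0))  # True -> alert the administrator
```

Production systems use far richer features (location, merchant, timing) and learned models, but the underlying idea is the same: score new behavior against an established baseline.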

As hackers become more sophisticated with new attack vectors, whether it is new ransomware or distributed denial of service (DDoS) attacks, ML will do a much better job than humans in detecting these unknown threats.

Limitations of ML in cybersecurity
While ML provides many benefits, its value depends on the data used to train it. The more data used to train the ML model, the better it becomes at detecting fraud and cyber threats. But acquiring this data raises overall cybersecurity system design expenses. The model also needs constant maintenance and tuning to sustain peak performance and meet the specific needs of users. And while ML can automate many of these tasks, it still requires some human involvement, so it’s essential to understand both cybersecurity and how well ML functions.

While ML is effective in fending off many cyberattacks, it is not a panacea. “The specific type of artificial intelligence typically referenced in this context is machine learning (ML), which is the development of algorithms that can ingest large volumes of training data, then generalize and make meaningful observations and decisions based on novel data,” said Scott Register, vice president of security solutions at Keysight Technologies. “With the right algorithms and training, AI/ML can be used to pinpoint cyberattacks which might otherwise be difficult to detect.”

However, no one — at least in the commercial space — has delivered a product that can detect very subtle cyberattacks with complete accuracy. “The algorithms are getting better all the time, so it’s highly probable that we’ll soon have commercial products that can detect and respond to attacks,” Register said. “We must keep in mind, however, that attackers don’t sit still, and they’re well-funded and patient. They employ ‘offensive AI,’ which means they use the same types of techniques and algorithms to generate attacks which are unlikely to be detected.”

ML implementation considerations
For any ML implementation, a strong cyber defense system is essential, but there’s no such thing as a completely secure design. Instead, security is a dynamic and ongoing process that requires constant fine-tuning and improvement against ever-changing cyberattacks. Implementing ML requires a clear security roadmap that defines the requirements. It also requires a sound cybersecurity process that secures individual hardware and software components, along with some form of system-level testing.

“One of the things we advise is to start with threat modeling to identify a set of critical design assets to protect from an adversary under confidentiality or integrity,” said Jason Oberg, CTO at Cycuity. “From there, you can define a set of very succinct, secure requirements for the assets. All of this work is typically done at the architecture level. We do provide education, training and guidance to our customers, because at that level, if you don’t have succinct security requirements defined, then it’s really hard to verify or check something in the design. What often happens is customers will say, ‘I want to have a secure chip.’ But it’s not as easy as just pressing a button and getting a green check mark that confirms the chip is now secure.”
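
To make the idea of succinct, asset-level security requirements concrete, here is a hypothetical sketch of how they might be captured as structured data; the asset names, properties, and rules are illustrative assumptions, not Cycuity’s format:

```python
# Hypothetical asset-level security requirements, defined at the
# architecture level so they can be checked mechanically later.
security_requirements = [
    {"asset": "aes_key",    "property": "confidentiality",
     "rule": "never observable on an unencrypted output port"},
    {"asset": "boot_rom",   "property": "integrity",
     "rule": "writable only before the secure-boot lock is set"},
    {"asset": "debug_port", "property": "integrity",
     "rule": "unlocked only by an authenticated request"},
]

for req in security_requirements:
    print(f"{req['asset']}: protect {req['property']} ({req['rule']})")
```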

To be successful, engineering teams must start at the architectural stages and define the security requirements. “Once that is done, they can start actually writing the RTL,” Oberg said. “There are tools available to provide assurances these security requirements are being met, and run within the existing simulation and emulation environments to help validate the security requirements, and help identify any unknown design weaknesses. Generally, this helps hardware and verification engineers increase their productivity and build confidence that the system is indeed meeting the security requirements.”

Fig. 1: A cybersecurity model includes multiple stages, progressing from the very basic to in-depth. It is important for organizations to know which stage their cyber defense system is in. Source: Cycuity

Steve Garrison, senior vice president of marketing at Stellar Cyber, noted that when cyber threats are uncovered during the detection process, so many data files may be generated that they are difficult for humans to sort through. Graphical displays can speed up the process and reduce the overall mean time to detection (MTTD) and mean time to response (MTTR).

Fig. 2: Using graphical displays can reduce the overall mean time to detection (MTTD) and mean time to response (MTTR). Source: Stellar Cyber

Testing is essential
Another important stage in the design process is testing, in which each system design is subjected to a rigorous attack simulation tool to weed out basic oversights and ensure it meets the predefined standard.

“First, if you want to understand how defensive systems will function in the real world, it’s important to test them under conditions that are as realistic as possible,” Keysight’s Register said. “The network environment should have the same amount of traffic, mix of applications, speeds, behavioral characteristics, and timing as the real world. For example, the timing of a sudden uptick in email and social media traffic corresponds to the time when people open up their laptops at work. The attack traffic needs to be as realistic as possible as well – hackers try hard not to be noticed, often preferring ‘low and slow’ attacks, which may take hours or days to complete, making detection much more difficult. The same obfuscation techniques, encryption, and decoy traffic employed by threat actors need to be simulated as accurately as possible.”
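
As a rough illustration of the ‘low and slow’ pacing a realistic test harness must reproduce, the sketch below spreads attack steps over long randomized gaps; the 30-minute to 6-hour bounds are illustrative assumptions:

```python
import random

def schedule_low_and_slow(n_steps, min_gap_s=1800, max_gap_s=21600):
    """Return timestamps (seconds from start) for attack steps separated
    by long randomized gaps, so the activity blends into background noise."""
    t, times = 0.0, []
    for _ in range(n_steps):
        t += random.uniform(min_gap_s, max_gap_s)
        times.append(t)
    return times

# Ten steps spread across many hours rather than a detectable burst.
for i, t in enumerate(schedule_low_and_slow(10), start=1):
    print(f"step {i}: t+{t / 3600:.1f} h")
```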

Further, due to mistaken assumptions during testing, defensive systems often perform well in the lab, yet fail spectacularly in production networks. “Afterwards we hear, for example, ‘I didn’t think hackers would encrypt their malware,’ or ‘Internal e-mails weren’t checked for malicious attachments, only those from external senders,’” Register explained. “Also, in security testing, currency is key. Attacks and obfuscation techniques are constantly evolving. If a security system is tested against stale attacks, then the value of that testing is limited. The offensive tools should be kept as up to date as possible to ensure the most effective performance against the tools a system is likely to encounter in the wild.”

Semiconductor security
Almost all system designs depend on semiconductors, so it is important to ensure that any and all chips, firmware, FPGAs, and SoCs are secure – including those that perform ML functionality.

“Semiconductor security is a constantly evolving problem and requires an adaptable solution,” said Jayson Bethurem, vice president of marketing and business development at Flex Logix. “Fixed solutions with current cryptography that are implemented today will inevitably be challenged in the future. Hackers today have more time, resources, training, and motivation to disrupt technology. With technology increasing in every facet of our lives, defending against this presents a real challenge. We also have to consider upcoming threats, namely quantum computing.”

Many predict that quantum computing will be able to crack current cryptography solutions in the next few years. “Fortunately, semiconductor manufacturers have solutions that can enable cryptography agility, which can dynamically adapt to evolving threats,” Bethurem said. “This includes both updating hardware accelerated cryptography algorithms and obfuscating them, an approach that increases root of trust and protects valuable IP secrets. Advanced solutions like these also involve devices randomly creating their own encryption keys, making it harder for algorithms to crack encryption codes.”
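
A minimal sketch of the cryptographic agility Bethurem describes, assuming a registry keyed by algorithm identifier so that an update can retire one algorithm and enable another without changing the calling code; the registry and identifiers are illustrative, not a vendor API:

```python
import hashlib
import hmac
import os

# Algorithms are looked up by identifier rather than hard-coded, so the
# set can evolve as threats (including quantum attacks) emerge.
MAC_ALGORITHMS = {
    "hmac-sha256":   lambda key, msg: hmac.new(key, msg, hashlib.sha256).digest(),
    "hmac-sha3-256": lambda key, msg: hmac.new(key, msg, hashlib.sha3_256).digest(),
}

def protect(message: bytes, algorithm_id: str = "hmac-sha256"):
    key = os.urandom(32)  # the device creates its own random key
    tag = MAC_ALGORITHMS[algorithm_id](key, message)
    return key, tag

key, tag = protect(b"firmware-block", "hmac-sha3-256")
print(tag.hex())
```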

Advances in AI/ML algorithms can adapt to new threats and reduce the latency of algorithm updates from manufacturers. This is particularly useful with reconfigurable eFPGA IP, which can be implemented in any semiconductor device to counter current and emerging threats, and can be optimized to run AI/ML-based cryptography solutions. The result is a combination of high-performance processing, scalability, and low-latency attack response.

Chips that support AI/ML algorithms need not only computing power, but also accelerators for those algorithms. In addition, all of this needs to happen without exceeding a tight power budget.

“More AI/ML systems run at tiny edges rather than at the core,” said Detlef Houdeau, senior director of design system architecture at Infineon Technologies. “AI/ML systems don’t need any bigger computer and/or cloud. For instance, a Raspberry Pi for a robot in production can have more than 3 AI/ML algorithms working in parallel. A smartphone has more than 10 AI/ML functions in the phone, and downloading new apps brings new AI/ML algorithms into the device. A pacemaker can have 2 AI/ML algorithms. Security chips, meanwhile, need a security architecture as well as accelerators for encryption. Combining an AI/ML accelerator with an encryption accelerator in the same chip could increase the performance in microcontroller units, and at the same time foster more security at the edge. The next generation of microelectronics could show this combination.”

After developers have gone through design reviews and the systems have passed rigorous tests, it helps to have certification and/or credentials from an independent third party to confirm that the systems are indeed secure.

“As AI, and recently generative AI, continue to transform all markets, there will be new attack vectors to mitigate against,” said Arm’s Maidment. “We expect to see networks become smarter in the way they monitor traffic and behaviors. The use of AI/ML allows network-based monitoring at scale to allow potential unexpected or rogue behavior to be identified and isolated. Automating network monitoring based on AI/ML will allow an extra layer of defense as networks scale out and establish effectively a ‘zero trust’ approach. With this approach, analysis at scale can be tuned to look at particular threat vectors depending on the use case.”

With an increase in AI/ML adoption at the edge, a lot of this is taking place on the CPU. “Whether it is handling workloads in their entirety, or in combination with a co-processor like a GPU or NPU, how applications are deployed across the compute resources needs to be secure and managed centrally within the edge AI/ML device,” Maidment said. “Building edge AI/ML devices based on a hardware root of trust is essential. It is critical to have privileged access control of what code is allowed to run where using a trusted memory management architecture. Arm continually invests in security, and the Armv9 architecture offers a number of new security features. Alongside architecture improvements, we continue to work in partnership with the industry on our ecosystem security framework and certification scheme, PSA Certified, which is based on a certified hardware RoT. This hardware base helps to improve the security of systems and fulfill the consumer expectation that as devices scale, they remain secure.”

Outlook
It is important to understand that threat actors will continue to evolve attacks using AI/ML. Experts suggest that to counter such attacks, organizations, institutions, and government agencies will have to continually improve defense strategies and capabilities, including AI/ML deployment.

AI/ML can be used as a weapon by attackers for industrial espionage and/or sabotage, and stopping incursions will require a broad range of cyberattack prevention and detection tools, including AI/ML functionality for anomaly detection. But in general, hackers are almost always one step ahead.

According to Register, “the recurring cycle is: 1) hackers come out with a new tool or technology that lets them attack systems or evade detection more effectively; 2) those attacks cause enough economic damage that the industry responds and develops effective countermeasures; 3) the no-longer-new hacker tools are still employed effectively, but against targets that haven’t bothered to update their defenses; 4) hackers develop new offensive tools that are effective against the defensive techniques of high-value targets, and the cycle starts anew.”

Related Reading
Securing Chip Manufacturing Against Growing Cyber Threats
Suppliers are the number one risk, but reducing attacks requires industry-wide collaboration.
Data Center Security Issues Widen
The number and breadth of hardware targets is increasing, but older attack vectors are not going away. Hackers are becoming more sophisticated, and they have a big advantage.

Increased Automotive Data Use Raises Privacy, Security Concerns

By: John Koon

The amount of data being collected, processed, and stored in vehicles is exploding, and so is the value of that data. That raises questions that are still not fully answered about how that data will be used, by whom, and how it will be secured.

Automakers are competing based on the latest versions of advanced technologies such as ADAS, 5G, and V2X, but the ECUs, software-defined vehicles, and in-cabin monitoring also demand more and more data, and they are using that data for purposes that extend beyond just getting the vehicle from point A to point B safely. They now are vying to offer additional subscription-based services according to customers’ interests, as various entities, including insurance companies, indicate a willingness to pay for information on drivers’ habits.

Collecting this data can help OEMs gain insights and potentially generate additional revenue. However, gathering it raises privacy and security concerns about who will own this massive amount of data and how it should be managed and used. And as automotive data use increases, how will it impact future automotive design?

Fig. 1: Connected vehicles rely on software to communicate between vehicles and the cloud. Source: McKinsey & Co.

“Much of the data generated in the vehicle will have immense value to OEMs and their partners for analyzing driver behavior and vehicle performance and for developing new or enhanced features,” said Sven Kopacz, autonomous vehicle section manager at Keysight Technologies. “On the other hand, the privacy of data use can be viewed as a risk to some. But the real value – as already implemented and used by Tesla and others – is the constant feedback to improve those ADAS algorithms, enable a CI/CD DevOps software development model, and allow the rapid download of updates. Only time will tell if law enforcement and the courts will demand this data and how lawmakers will respond.”

Types of data generated
According to Precedence Research, the global automotive data market size will grow from $2.19 billion in 2022 to $14.29 billion by 2032, with many types of data collected, including:

  • Autonomous driving: Data on all levels, from L1 to L5, including that collected from the multiple sensors installed on vehicles.
  • Infrastructure: Remote monitoring, OTA updates, and data used for remote control by control centers, V2X, and traffic patterns.
  • Infotainment: Information on how customers are using applications, such as voice control, gesture, maps, and parking.
  • Connected information: Information on payment to third-party parking apps, accident information, data from dashboard cameras, handheld devices, mobile applications, and driver behavior monitoring.
  • Vehicle health: Repair and maintenance records, insurance underwriting, fuel consumption, telematics.

This information may be useful for future automotive design, predictive maintenance, and safety improvements, and insurance companies are expected to be able to reduce underwriting costs with more comprehensive information on accidents. Based on the information collected, OEMs should be able to design more reliable and safer cars, and to stay in close touch with customer wants. For example, experiments can be conducted to gauge customer demand for subscription-based services such as automatic parking and more sophisticated voice input and commands.

“Diagnostic data for service and repair has been a core of automotive data analytics for decades,” noted Lorin Kennedy, senior staff product management manager for SLM in-field analytics at Synopsys. “With the advent of connected vehicles and advanced machine learning (ML) analytics, which enable a greater quantity of data to be routinely processed, this data has gained exponentially in value. As data drives feature enhancements such as mobile-like experiences and advanced driver assist capabilities, OEMs increasingly need to better understand the dependability and reliability of the semiconductor systems powering these new features. The collection of monitoring and sensor data from electronic components and the semiconductors themselves will be a growing diagnostic data requirement across all types of automotive technologies like ADAS, IVI, ECUs, etc. to ensure quality and reliability on these more advanced nodes.”

Anticipated updates to ISO 26262 are expected to address the application of predictive maintenance to hardware, including the identification of degrading intermittent faults caused by silicon aging and over-stress conditions in the field. Silicon lifecycle management (SLM) technologies can support this by delivering more comprehensive knowledge about the health and remaining useful life of silicon as it ages.

“That knowledge, in turn, will enable service updates and future OTA releases that leverage additional semiconductor compute power,” Kennedy said. “Overall fleet performance will benefit, and the semiconductor and system design process will, too, as new insights help achieve greater efficiencies. OEM, Tier One, and semiconductor supplier collaboration on what the data brings to light – from silicon to software system performance – will enable vehicles to meet the functional safety design parameters that are becoming increasingly crucial in advanced electronics.”

Still, for data generated in vehicles, OEMs will need to prioritize which data can provide value for drivers immediately, and which data should be sent to the cloud via 5G connections.

“Tradeoffs between on-board processing to reduce data volume and data transmission network costs will likely dictate prioritization,” Keysight’s Kopacz said. “For example, camera, lidar, and radar sensor data for ADAS applications may have value for training ADAS algorithms, but the volume of raw data will be very costly to transmit and store. Likewise, driver attention data can have high value in UI design, and would be best gathered in a meta-data form. V2X data has a relatively lower data volume and should ultimately be a key data source for ADAS, providing in-car non-line-of-sight visibility of other vehicles, road infrastructure, and road conditions. Sharing this over V2N links can enable effective safety applications, but angle random walk (ARW) sensor data needs to be considered more carefully due to its complex nature. Infotainment streaming content into the vehicle also can be a valuable revenue stream for OEMs and content providers, as well as network operators working together.”

Impacts on automotive cybersecurity
As vehicles become more autonomous and connected, data use will increase, and so will the value of that data. This raises cybersecurity and data privacy concerns. Hackers want to steal personal data collected by the vehicles, and can use ransomware and other attacks to do so. The idea of taking control of vehicles — or worse, stealing them — also attracts hackers. Techniques used include hacking vehicle apps and wireless connections on the vehicles (diagnostics, key fob attacks and keyless jamming). Protecting data access, vehicles, and infrastructure from attacks is increasingly important and challenging.

Cybersecurity risks increase with software-defined vehicles. Memory especially will need to be safeguarded.

“The integration of advanced technology into EVs poses significant cybersecurity challenges that demand immediate attention and sophisticated solutions,” said Ilia Stolov, center head of secure memory solution at Winbond. “Central to the digital fortresses within modern electronic platforms are flash non-volatile memories, housing invaluable assets like code, private data, and company credentials. Unfortunately, their ubiquity has rendered them attractive targets for hackers seeking unauthorized access to sensitive information.”

Stolov noted that Winbond has been actively working to secure flash memory from hacks.

Additionally, there are important considerations in securing memory designs, such as:

  • DICE root of trust: The Device Identifier Composition Engine (DICE) should be used to create the secure flash root of trust for hardware security. This secure identity forms the basis for building trust in the hardware. Other security measures can therefore rely on the authenticity and integrity of the boot code, protecting against firmware and software attacks. The initial boot process and subsequent software execution are based on trusted and verified measurements, helping prevent the injection of malicious code into the system.
  • Code and data protection: Protecting code and data is crucial for maintaining system-wide integrity. Unauthorized modifications to code or data can lead to malfunctions, system instability, or the introduction of malicious code, compromising the hardware’s intended functionality or exploiting system vulnerabilities.
  • Authentication protocols: Authentication is a fundamental and crucial component of cybersecurity, serving as the frontline defense against unauthorized access and potential security breaches. Employing authentication protocols to restrict access to authorized actors and approved software layers only using cryptography credentials is important.
  • Secure software updates with rollback protection: Updates extend beyond bug fixes to include remote firmware over-the-air (OTA) updates, guarding against rollback attacks and ensuring that only legitimate updates are executed (a rough sketch follows this list).
  • Post-quantum cryptography: Anticipating the post-quantum computing era to include NIST 800-208 Leighton-Micali Signature (LMS) cryptography safeguards EVs against the potential threats posed by future quantum computers.
  • Platform resiliency: Automatic detection of unauthorized code changes enables swift recovery to a secure state, effectively thwarting potential cyber threats. Adhering to NIST 800-193 recommendations for platform resiliency ensures a robust defense mechanism.
  • Secure supply chain: Guaranteeing the origin and integrity of flash content throughout the supply chain, these secure flash devices prevent content tampering and misconfiguration during platform assembly, transportation, and configuration. This, in turn, safeguards against cyber adversaries.
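
As a rough sketch of two items above, authenticated updates and rollback protection, the code below checks an update’s MAC and a monotonic version counter before accepting it; the key handling, image format, and version store are simplified assumptions, not Winbond’s implementation:

```python
import hashlib
import hmac

DEVICE_KEY = b"\x00" * 32      # provisioned secret (placeholder value)
stored_min_version = 7         # monotonic counter kept in secure flash

def accept_update(image: bytes, version: int, tag: bytes) -> bool:
    """Accept an update only if it is authentic and strictly newer than
    the stored minimum version (rollback protection)."""
    expected = hmac.new(DEVICE_KEY,
                        version.to_bytes(4, "big") + image,
                        hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return False           # tampered or unsigned image
    if version <= stored_min_version:
        return False           # rollback attempt: version is not newer
    return True

# A correctly tagged version-8 image is accepted; a replayed version-7
# image would be rejected even with a valid tag.
img = b"new-firmware"
good_tag = hmac.new(DEVICE_KEY, (8).to_bytes(4, "big") + img,
                    hashlib.sha256).digest()
print(accept_update(img, 8, good_tag))  # True
```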

Considering the transition to SDVs and connected cars, data vulnerability becomes even more significant.

“Depending on where data resides, different protection measures are in place,” said Keysight’s Kopacz. “Intrusion detection systems (IDS), crypto services, and key management are becoming standard solutions in vehicles. Especially sensitive data for safety features needs to be protected and verified. Thus, redundancy becomes more relevant. With SDVs, the vehicle software is constantly updated or changed throughout the entire vehicle life cycle. Ever-evolving cyber threats are particularly challenging. Accordingly, the entire vehicle software must be continuously checked for new security gaps. OEMs are going to need comprehensive testing solutions to minimize security threats. This will need to include the cybersecurity testing of the entire attack surface, covering all vehicle interfaces – wired vehicle communication networks such as CAN or automotive Ethernet or wireless connections via Wi-Fi, Bluetooth, or cellular communications. OEMs will also need to test the backend that provides over-the-air (OTA) software updates. Such solutions can reduce the risk of damage or data theft by cybercriminals.”

Data management and privacy concerns
Another issue to be resolved is how the massive amount of data collected will be managed and used. Ideally, data will be analyzed to yield commercial value without causing privacy concerns. For example, infotainment platform data might reveal what types of music are most popular, helping the music industry to improve marketing strategies. Who will monitor the transfer of such data, though? How will customers be made aware of the data collection? And will they have an opportunity to opt out of having their data sold?

As with airplanes, vehicle black boxes are installed to record information for analysis of the data after an accident occurs. The information recorded includes vehicle speed, the braking situation, and the activation of air bags, among other things. If an accident occurs resulting in a fatality, and the data from ADAS and ECU uncover vulnerability in the designs, could that data be used as evidence in court against manufacturers or their supply chains? Armed with this information, the insurance industry may decline claims. Would one or more manufacturers of the ADAS/ECU be required to hand over the data when ordered by the authorities?

“Quality requirements for sophisticated electronic parts will continue to become more rigid and strict, allowing only a few defective parts per billion (DPPB) due to the impact failed components can have on the safety and well-being of human life,” noted Guy Cortez, senior staff product management manager for SLM analytics at Synopsys. “SLM data analytics will continue to play a substantial role in the health, maintainability, and sustainability of these devices throughout their life within the vehicle. Through the power of analytics, you can do proper root cause analysis of any failed device (e.g., return merchandise authorization, or RMA). What’s more, you will also be able to find ‘like’ devices that ultimately may exhibit similar failed behavior over time. Thus empowered, you can proactively recall these like devices before they fail during operation in the field. Upon further analysis, the device(s) in question may require a design re-spin by the device developer in order to correct any identified issue. With a proper SLM solution deployed throughout the automotive ecosystem, you can achieve a higher level of predictability, and thus higher quality and safety for the automotive manufacturer and consumer.”

OEM impact
While modern cars have been described as computers on wheels, they are now more like mobile phones on wheels. OEMs are designing cars that do not skimp on features. Semi-autonomous driving, voice-controlled infotainment systems, and the monitoring of many functions, including driver behavior, are yielding a large amount of data that can be used to improve future designs. OEMs’ approaches to security and privacy vary, however, with some offering stronger protection than others.

Mercedes-Benz is paying attention to data security and privacy, and is compliant with UN ECE R155/R156, a European norm for cybersecurity and software update management systems, according to the company. Which data is processed in connection with digital vehicle services depends on which services the customer selects. Only the data required for the respective service will be processed. Additionally, the “Mercedes me connect” app’s terms of use and privacy information make it transparent to customers what data each service needs and how that data is processed. Customers can determine which services they want to use.

Hyundai indicated it would follow a user-centric focus, prioritizing safety, information security, and data privacy with fault-tolerant software architectures to enhance cybersecurity. Hyundai Motor Group’s global software center, 42dot, is currently developing integrated hardware/software security solutions that detect and block data tampering, hacking, and external cyber threats, as well as abnormal communication using big data and AI algorithms.

And according to the BMW Group, the company manages a connected fleet of more than 20 million vehicles globally. More than 6 million vehicles are updated over-the-air on a regular basis. Together with other services, more than 110 terabytes of data traffic per day are processed between the connected vehicles and cloud-backend. All BMW vehicle interfaces permit consumers to opt in or out of various types of data collection and processing that may happen on their vehicles. If preferred, BMW customers may opt out of all optional data collection relating to their vehicles at any time by visiting the BMW iDrive screen in their vehicle. Additionally, to completely stop the transfer of any data from BMW vehicles to BMW services, customers can contact the company to request that the embedded SIM on their vehicles be disabled.

Not all OEMs hold the same philosophy on privacy. According to a study on 25 brands conducted by the Mozilla Foundation, a nonprofit organization, 56% will share data with law enforcement in response to an informal request, 84% share or sell personal data, and 100% earned the foundation’s “privacy not included” warning label.

More importantly, are customers educated or informed on the privacy issue?

Fig. 2: Once data is collected from a vehicle, it can go to multiple destinations without the knowledge of customers. Source: Mozilla, *Privacy Not Included.

Applying data to automotive design in the future
OEMs collect many different types of automotive data in relation to autonomous driving, infrastructure, infotainment, connected vehicles, and vehicle health and maintenance. The ultimate goal, however, is not just to compile massive raw data; rather, it is to extract value from it. One of the questions OEMs need to ask is how to apply technology to extract information that is really useful in future automotive design.

“OEMs are trying to test and validate the various functions of their vehicles,” said David Fritz, vice president of virtual and hybrid systems at Siemens EDA. “This can involve millions of terabytes of data. Sometimes, a huge portion of the data is redundant and useless. The real value in the data is, once it gets distilled, that it’s in a form where humans can relate to the meaning of the data, and it also can be pushed into the systems while they’re being developed and tested and before the vehicles are even on the ground. We’ve known for quite some time that many countries and regulatory bodies around the world have been collecting what they call an accident database. When an accident occurs, the police show up on the scene collecting relevant data. ‘There was an intersection here, a stop sign there. And this car was traveling in this direction roughly this many miles an hour. The weather condition is this. The car entered the intersection in the yellow light and caused an accident, etc.’ This is an accident scenario. Technologies are available to take those scenarios and put them in a standard form called Open Scenario. Based on the information, a new set of data can be generated to determine what the sensors would be seeing in those accident situations, and then push it through a virtual version of the vehicle and environment and, in the future, push those scenarios through the sensors in the physical vehicle itself. This is really the distillation of that data into a form that a human can wrap their mind around. Otherwise, you could collect billions of terabytes of raw data and try to push that into these systems, and it wouldn’t actually help you any more than if someone were sitting in a car and driving those for billions of miles.”
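
For illustration, the accident data Fritz describes might be distilled into a record like the one below. The real standard he references, ASAM OpenSCENARIO, is an XML schema; these fields are simplified assumptions chosen to mirror the police-report details in the quote:

```python
from dataclasses import dataclass

@dataclass
class AccidentScenario:
    road_layout: str      # e.g. "4-way intersection with a stop sign"
    ego_speed_mph: float  # approximate speed from the report
    weather: str
    signal_state: str     # e.g. "entered on yellow"
    outcome: str

scenario = AccidentScenario(
    road_layout="4-way intersection with a stop sign",
    ego_speed_mph=38.0,
    weather="light rain",
    signal_state="entered on yellow",
    outcome="collision",
)
print(scenario)  # a distilled, replayable form of the raw accident data
```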

But that data also can be very useful. “If an OEM wants to obtain safety certification, say in Germany, the OEM can provide a set of data of scenarios on how the vehicle will navigate,” Fritz said. “An OEM can provide a set of data to the German authority, with a set of scenarios to prove the vehicle will navigate in a safe manner under various conditions. By comparing that with the data in the accident database, the German government can say that as long as you avoid 95% of the accidents in that database, you’re certified. That’s actionable from the perspectives of human drivers, insurance, engineering, and visual simulation. The data prove the vehicle is going to behave as expected. The alternative is to drive around, as in the case of autonomous vehicles, and try to justify the accident was not caused by the vehicle, while facing the lawsuit. It does not seem to make sense, but that’s what’s happening today.”

Related Reading
Curbing Automotive Cybersecurity Attacks
A growing number of standards and regulations within the automotive ecosystem promises to save development costs by fending off cyberattacks.
Software-Defined Vehicles Ready To Roll
New approach could have big effects on cost, safety, security, and time to market.

Why Chiplets Are So Critical In Automotive

By: John Koon

Chiplets are gaining renewed attention in the automotive market, where increasing electrification and intense competition are forcing companies to accelerate their design and production schedules.

Electrification has lit a fire under some of the biggest and best-known carmakers, which are struggling to remain competitive in the face of very short market windows and constantly changing requirements. Unlike in the past, when carmakers typically ran on five- to seven-year design cycles, the latest technology in vehicles today may well be considered dated within several years. And if they cannot keep up, there is a whole new crop of startups producing cheap vehicles with the ability to update or change out features as quickly as a software update.

But software has speed, security, and reliability limitations, and being able to customize the hardware is where many automakers are now putting their efforts. This is where chiplets fit in, and the focus now is on how to build enough interoperability across large ecosystems to make this a plug-and-play market. The key factors to enable automotive chiplet interoperability include standardization, interconnect technologies, communication protocols, power and thermal management, security, testing, and ecosystem collaboration.

Similar to non-automotive applications at the board level, many design efforts are focusing on a die-to-die approach, which is driving a number of novel design considerations and tradeoffs. At the chip level, the interconnects between various processors, chips, memory, and I/O are becoming more complex due to increased design performance requirements, spurring a flurry of standards activities. Different interconnect and interface types have been proposed to serve varying purposes, while emerging chiplet technologies for dedicated functions — processors, memories, and I/Os, to name a few — are changing the approach to chip design.

“There is a realization by automotive OEMs that to control their own destiny, they’re going to have to control their own SoCs,” said David Fritz, vice president of virtual and hybrid systems at Siemens EDA. “However, they don’t understand how far along EDA has come since they were in college in 1982. Also, they believe they need to go to the latest process node, where a mask set is going to cost $100 million. They can’t afford that. They also don’t have access to talent because the talent pool is fairly small. With all that together comes the realization by the OEMs that to control their destiny, they need a technology that’s developed by others, but which can be combined however needed to have a unique differentiated product they are confident is future-proof for at least a few model years. Then it becomes economically viable. The only thing that fits the bill is chiplets.”

Chiplets can be optimized for specific functions, which can help automakers meet reliability, safety, and security requirements with technology that has been proven across multiple vehicle designs. In addition, they can shorten time to market and ultimately reduce the cost of different features and functions.

Demand for chips has been on the rise for the past decade. According to Allied Market Research, global automotive chip demand will grow from $49.8 billion in 2021 to $121.3 billion by 2031. That growth will attract even more automotive chip innovation and investment, and chiplets are expected to be a big beneficiary.

But the marketplace for chiplets will take time to mature, and it will likely roll out in phases. Initially, a vendor will provide different flavors of proprietary dies. Then, partners will work together to supply chiplets to support each other, as has already happened with some vendors. The final stage will be universally interoperable chiplets, as supported by UCIe or some other interconnect scheme.

Getting to the final stage will be the hardest, and it will require significant changes. To ensure interoperability, large enough portions of the automotive ecosystem and supply chain must come together, including hardware and software developers, foundries, OSATs, and material and equipment suppliers.

Momentum is building
On the plus side, not all of this is starting from scratch. At the board level, modules and sub-systems always have used onboard chip-to-chip interfaces, and they will continue to do so. Various chip and IP providers, including Cadence, Diodes, Microchip, NXP, Renesas, Rambus, Infineon, Arm, and Synopsys, provide off-the-shelf interface chips or IP to create the interface silicon.

The Universal Chiplet Interconnect Express (UCIe) Consortium is the driving force behind the open die-to-die interconnect standard. The group released its latest UCIe 1.1 specification in August 2023. Board members include Alibaba, AMD, Arm, ASE, Google Cloud, Intel, Meta, Microsoft, NVIDIA, Qualcomm, Samsung, and others, and industry partners are showing widespread support. Alternative die-to-die interfaces, such as the Advanced Interface Bus (AIB) and Bunch of Wires (BoW), also have been proposed. In addition, Arm just released its own Chiplet System Architecture, along with an updated AMBA spec to standardize protocols for chiplets.

“Chiplets are already here, driven by necessity,” said Arif Khan, senior product marketing group director for design IP at Cadence. “The growing processor and SoC sizes are hitting the reticle limit and the diseconomies of scale. Incremental gains from process technology advances are lower than rising cost per transistor and design. The advances in packaging technology (2.5D/3D) and interface standardization at a die-to-die level, such as UCIe, will facilitate chiplet development.”

Nearly all of the chiplets used today are developed in-house by big chipmakers such as Intel, AMD, and Marvell, because they can tightly control the characteristics and behavior of those chiplets. But there is work underway at every level to open this market to more players. When that happens, smaller companies can begin capitalizing on what the high-profile trailblazers have accomplished so far, and innovating around those developments.

“Many of us believe the dream of having an off-the-shelf, interoperable chiplet portfolio will likely take years before becoming a reality,” said Guillaume Boillet, senior director strategic marketing at Arteris, adding that interoperability will emerge from groups of partners who are addressing the risk of incomplete specifications.

This also is raising the attractiveness of FPGAs and eFPGAs, which can provide a level of customization and updates for hardware in the field. “Chiplets are a real thing,” said Geoff Tate, CEO of Flex Logix. “Right now, a company building two or more chiplets can operate much more economically than a company building near-reticle-size die with almost no yield. Chiplet standardization still appears to be far away. Even UCIe is not a fixed standard yet. Not all agree on UCIe, bare die testing, and who owns the problem when the integrated package doesn’t work, etc. We do have some customers who use or are evaluating eFPGA for interfaces where standards are in flux like UCIe. They can implement silicon now and use the eFPGA to conform to standards changes later.”

There are other efforts supporting chiplets, as well, although for somewhat different reasons, notably the rising cost of device scaling and the need to incorporate more features into chips, which are reticle-constrained at the most advanced nodes. But those efforts also pave the way for chiplets in automotive, and there is strong industry backing to make this all work. For example, under the sponsorship of SEMI, ASME, and three IEEE societies, the new Heterogeneous Integration Roadmap (HIR) looks at various microelectronics design, materials, and packaging issues to come up with a roadmap for the semiconductor industry. Its current focus includes 2.5D, 3D-ICs, wafer-level packaging, integrated photonics, MEMS and sensors, and system-in-package (SiP), as well as aerospace, automotive, and other applications.

At the recent Heterogeneous Integration Global Summit 2023, representatives from AMD, Applied Materials, ASE, Lam Research, MediaTek, Micron, Onto Innovation, TSMC, and others demonstrated strong support for chiplets. Another group backing chiplets is the Chiplet Design Exchange (CDX) working group, part of the Open Domain Specific Architecture (ODSA) and the Open Compute Project Foundation (OCP). The CDX charter focuses on the various characteristics of chiplets and chiplet integration, including electrical, mechanical, and thermal design exchange standards for 2.5D stacked and 3D integrated circuits (3D-ICs). Its representatives include Ansys, Applied Materials, Arm, Ayar Labs, Broadcom, Cadence, Intel, Macom, Marvell, Microsemi, NXP, Siemens EDA, Synopsys, and others.

“What automotive companies want each chiplet to do, in terms of functionality, is still in upheaval,” Siemens’ Fritz noted. “One extreme has these problems, the other extreme has those problems. This is the sweet spot. This is what’s needed. And these are the types of companies that can go off and do that sort of work, and then you could put them together. Then this interoperability thing is not a big deal. The OEM can make it too complex by saying, ‘I have to handle that whole spectrum of possibilities.’ The alternative is that they could say, ‘It’s just like high-speed PCIe. If I want to communicate from one to the other, I already know how to do that. I’ve got drivers that are running my operating system.’ That would solve an awful lot of problems, and that’s where I believe it’s going to end up.”

One path to universal chiplet development?
Moving forward, chiplets are a focal point for both the automotive and chip industries, and that will involve everything from chiplet IP and memory interconnects to customization options and limitations.

For example, Renesas Electronics announced plans in November 2023 for its next-generation SoCs and MCUs. The company is targeting all major applications across the automotive digital domain, and it previewed its fifth-generation R-Car SoC for high-performance applications, which features advanced in-package chiplet integration technology meant to give automotive engineers greater flexibility to customize their designs.

Renesas noted that if more AI performance is required in Advanced Driver Assistance Systems (ADAS), engineers will have the capability to integrate AI accelerators into a single chip. The company said this roadmap comes after years of collaboration and discussions with Tier 1 and OEM customers, which have been clamoring for a way to accelerate development without compromising quality, including designing and verifying the software even before the hardware is available.

“Due to the ever-increasing need to scale compute on demand, and the increasing need for higher levels of autonomy in the cars of tomorrow, we see challenges in monolithic solutions scaling to meet the performance needs of the market in the upcoming years,” said Vasanth Waran, senior director for SoC Business & Strategies at Renesas. “Chiplets allow compute solutions to scale above and beyond the needs of the market.”

Renesas announced plans to create a chiplet-based product family specifically targeted at the automotive market starting in 2025.

Standard interfaces allow for SoC customization
It is not entirely clear how much overlap there will be between standard processors, which is where most chiplets are used today, and chiplets developed for automotive applications. But the underlying technologies and developments certainly will build off each other as this technology shifts into new markets.

“Whether it is an AI accelerator or ADAS automotive application, customers need standard interface IP blocks,” noted David Ridgeway, senior product manager, IP accelerated solutions group at Synopsys. “It is important to provide fully verified IP subsystems around their IP customization requirements to support the subsystem components used in the customers’ SoCs. When I say customization, you might not realize how customizable IP has become over the course of the last 10 to 20 years, on the PHY side as well as the controller side. For example, PCI Express has gone from PCIe Gen 3 to Gen 4 to Gen 5 and now Gen 6. The controller can be configured to support multiple bifurcation modes of smaller link widths, including one x16, two x8, or four x4. Our subsystem IP team works with customers to ensure all the customization requirements are met. For AI applications, signal and power integrity is extremely important to meet their performance requirements. Almost all our customers are seeking to push the envelope to achieve the highest memory bandwidth speeds possible so that their TPU can process many more transactions per second. Whenever the applications are cloud computing or artificial intelligence, customers want the fastest response rate possible.”
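
To make the bifurcation example concrete, the short Python sketch below models those controller modes as configuration data. It is purely illustrative (the names and structure are hypothetical, not Synopsys’ configuration interface); the only constraint it encodes is that every mode must account for all 16 physical lanes.

# Hypothetical sketch of PCIe controller bifurcation modes (not a real API).
# Each mode splits the controller's 16 lanes into smaller links.

TOTAL_LANES = 16

BIFURCATION_MODES = {
    "1x16": [16],          # one x16 link
    "2x8":  [8, 8],        # two x8 links
    "4x4":  [4, 4, 4, 4],  # four x4 links
}

for mode, widths in BIFURCATION_MODES.items():
    # A valid mode must use every physical lane exactly once.
    assert sum(widths) == TOTAL_LANES, f"{mode} does not cover all {TOTAL_LANES} lanes"
    print(f"{mode}: {len(widths)} link(s), widths {widths}")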

Fig 1: IP blocks including processor, digital, PHY, and verification help developers implement the entire SoC. Source: Synopsys

Optimizing PPA serves the ultimate goal of increasing efficiency, and this makes chiplets particularly attractive in automotive applications. As UCIe matures, it is expected to improve overall performance substantially. For example, UCIe can deliver a shoreline bandwidth of 28 to 224 GB/s/mm in a standard package, and 165 to 1,317 GB/s/mm in an advanced package, a performance improvement of 20- to 100-fold. Bringing latency down from 20ns to 2ns represents a 10-fold improvement. Around 10 times greater power efficiency, at 0.5 pJ/b (standard package) and 0.25 pJ/b (advanced package), is another plus. The key is shortening the interface distance whenever possible.
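
A quick back-of-the-envelope check, in Python, shows what those energy-per-bit figures mean in watts. Only the pJ/b and latency numbers come from the text above; the 100 GB/s link bandwidth is an arbitrary value chosen for illustration.

# Convert the quoted UCIe energy-per-bit figures into link power.
def link_power_watts(bandwidth_gbytes_per_s, energy_pj_per_bit):
    bits_per_second = bandwidth_gbytes_per_s * 1e9 * 8  # GB/s -> bits/s
    return bits_per_second * energy_pj_per_bit * 1e-12  # pJ/bit -> J/bit

# A hypothetical 100 GB/s die-to-die link:
print(link_power_watts(100, 0.5))   # standard package: 0.4 W
print(link_power_watts(100, 0.25))  # advanced package: 0.2 W

# Latency figures quoted above:
print(20 / 2)  # 20 ns -> 2 ns is a 10-fold improvement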

To optimize chiplet designs, the UCIe Consortium provides some suggestions:

  • Careful consideration of architectural cut-lines (i.e., chiplet boundaries), optimizing for power, latency, silicon area, and IP reuse. For example, customizing one chiplet that needs a leading-edge process node while re-using other chiplets on older nodes may impact cost and time (a toy cost model following this list sketches the tradeoff).
  • Thermal and mechanical packaging constraints need to be planned out for package thermal envelopes, hot spots, chiplet placements, and I/O routing and breakouts.
  • Process nodes need to be carefully selected, particularly in the context of the associated power delivery scheme.
  • A test strategy for chiplets and packaged/assembled parts needs to be developed up front, to ensure silicon issues are caught at the chiplet-level testing phase rather than after assembly into a package.
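
The cost tradeoff behind the first point can be sketched with a toy model in Python. Every number below is a placeholder rather than real foundry or OSAT pricing; the point is only the shape of the economics: confine the leading-edge silicon to the function that needs it, and re-use cheaper, older-node chiplets for the rest.

# Placeholder cost assumptions (illustrative only, not real foundry pricing)
LEADING_EDGE_COST_PER_MM2 = 1.00   # relative cost units per mm^2
MATURE_NODE_COST_PER_MM2 = 0.25
PACKAGING_OVERHEAD_PER_DIE = 2.0   # assembly/test adder per extra chiplet

def monolithic_cost(total_area_mm2):
    # Entire design fabricated on the leading-edge node
    return total_area_mm2 * LEADING_EDGE_COST_PER_MM2

def chiplet_cost(leading_area_mm2, mature_areas_mm2):
    # Only the performance-critical chiplet uses the leading-edge node;
    # the rest are re-used chiplets on older, cheaper nodes.
    cost = leading_area_mm2 * LEADING_EDGE_COST_PER_MM2
    for area in mature_areas_mm2:
        cost += area * MATURE_NODE_COST_PER_MM2 + PACKAGING_OVERHEAD_PER_DIE
    return cost

# A 600 mm^2 design where only 150 mm^2 truly needs the new node:
print(monolithic_cost(600))           # 600.0
print(chiplet_cost(150, [250, 200]))  # 266.5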

Conclusion
The idea of standardizing die-to-die interfaces is catching on quickly, but the path to get there will take time, effort, and a lot of collaboration among companies that rarely talk with each other. Building a vehicle takes one determined carmaker. Building a vehicle with chiplets requires an entire ecosystem of developers, foundries, OSATs, and material and equipment suppliers working together.

Automotive OEMs are experts at putting systems together and at finding innovative ways to cut costs. But it remains to be seen how quickly and effectively they can build and leverage an ecosystem of interoperable chiplets to shrink design cycles, improve customization, and adapt to a world in which leading-edge technology may be outdated by the time it is fully designed, tested, and available to consumers.

— Ann Mutschler contributed to this report.

Related Reading
Automotive Relationships Shifting With Chiplets
As the automotive ecosystem balances the best approaches for designing in increasingly advanced features, how companies interact is still evolving.

The post Why Chiplets Are So Critical In Automotive appeared first on Semiconductor Engineering.
