Semiconductor Engineering

Chip Industry Week In Review

By: The SE Staff
May 10, 2024 at 09:01

Synopsys refocused its security priorities around chips, striking a deal to sell off its Software Integrity Group subsidiary to private equity firms Clearlake Capital Group and Francisco Partners for about $2.1 billion. That deal comes on the heels of Synopsys’ recent acquisition of Intrinsic ID, which develops physical unclonable function IP. Sassine Ghazi, Synopsys’ president and CEO, said in an interview that the sale of the software group “gives us the ability to have management bandwidth, capital, and to double down on what we’re doing in our core business.”

The U.S. Commerce Department pulled export licenses from Intel and Qualcomm that permitted them to ship semiconductors to Huawei, the Financial Times reported. The move comes after advanced chips from Intel reportedly were used in new laptops and smartphones from the China-based company.

Apple debuted its second-generation 3nm M4 chip with the launch of the new iPad Pro. The CPU and GPU each have up to 10 cores, with a neural engine capable of 38 TOPS, and a total of 28 billion transistors. Apple also is working with TSMC to develop its own AI processors for running software in data centers, reports The Wall Street Journal.

The U.S. is expected to triple its semiconductor manufacturing capacity by 2032, according to a new report from the Semiconductor Industry Association and Boston Consulting Group. By that year, the U.S. is projected to have 28% of global capacity for advanced logic manufacturing and over a quarter of total global capital expenditures.

Fig. 1: Source: Semiconductor Industry Association and Boston Consulting Group.

Quick links to more news:

Global
Market Reports
Automotive
Security
Product News
Education and Training
Research
In-Depth
Events
Further Reading

Around The Globe

The U.S. Commerce Department plans to solicit bids from organizations interested in creating and managing a new CHIPS Manufacturing USA institute focused on digital twins in the semiconductor sector. The government will award up to $285 million to the selected proposal.

The U.S. National Science Foundation and Department of Energy announced the first 35 projects to be supported with computational time through the National Artificial Intelligence Research Resource (NAIRR) Pilot. The initial selected projects will gain access to several U.S. supercomputing centers and other resources, with the goal of advancing responsible AI research.

Through its new Federal AI Sandbox, MITRE is offering up its computing power to U.S. government agencies. “Our new Federal AI Sandbox will help level the playing field, making the high-quality compute power needed to train and test custom AI solutions available to any agency,” said Charles Clancy, MITRE’s senior vice president and chief technology officer, in the release.

Saudi Arabia’s $100 billion investment fund for semiconductor and AI technology pledged it would divest from China if requested by the U.S., Bloomberg reported.

Japan’s SoftBank is holding talks with UK-based AI chip firm Graphcore about a possible acquisition, reports Bloomberg.

India’s chip industry is heating up. Mindgrove launched the country’s first SoC, named Secure IoT. The chip clocks at 700 MHz, and the company is touting its key security algorithms, secure boot, and on-chip OTP memory. Meanwhile, Lam Research is expanding its global semiconductor fabrication supply chain to include India.

Microsoft will build a $3.3 billion AI data center in Racine, Wisconsin, the same location as the failed Foxconn investment touted six years ago.

Markets And Money

The SIA announced that first-quarter global semiconductor sales grew more than 15% year-over-year. Sales were still 5.7% below Q4 2023 levels, but that is a big improvement over last year. Meanwhile, the semiconductor materials market contracted 8.2% in 2023 to $66.7 billion, down from a record $72.7 billion in 2022, according to a new report from SEMI.

The demand for AI-powered consumer electronics will drive global AI chipset shipments to 1.3 billion by 2030, according to ABI Research.

TrendForce released several new industry reports this week. Among the highlights:

  • HBM prices are expected to increase by up to 10% in 2025, representing more than 30% of total DRAM value.
  • In Q2, DRAM contract prices rose 13% to 18%, while NAND flash prices increased 15% to 20%.
  • The top 10 design firms’ combined revenue increased 12% in 2023, with NVIDIA taking the lead for the first time.

A number of acquisitions were announced recently:

  • High-voltage IC company Power Integrations will purchase the assets of Odyssey Semiconductor Technologies, a developer of gallium nitride (GaN) transistors.
  • Mobix Labs agreed to buy RF design company RaGE Systems for $20 million in cash, stock, and incentives.
  • V-Tek, a packaging services and inspection company, acquired A&J Programming, a manufacturer of automated handling and programming equipment.

The global smartphone market grew 6% year-over-year, shipping 296.9 million units in Q1 2024, according to a Counterpoint report. Samsung toppled Apple for the top spot with a 20% share.

Automotive

The U.S. Justice Department is investigating whether Tesla committed securities or wire fraud by misleading consumers and investors about its EVs’ Autopilot capabilities, according to Reuters.

The automotive ecosystem is undergoing a huge transformation toward software-defined vehicles, spurring new architectures that can be future-proofed and customized with software.

Infineon introduced a microcontroller for the automotive battery management sector, integrating high-precision analog and high-voltage subsystems on a single chip. Infineon also inked a deal with China’s Xiaomi to provide SiC power modules for Xiaomi’s new SU7 smart EV.

Keysight and ETAS are teaming up to embed ETAS fuzz testing software into Keysight’s automotive cybersecurity platform.

Also, Keysight’s device security research lab, Riscure Security Solutions, can now conduct vehicle type approval evaluations under United Nations R155/R156 regulations. Keysight acquired Riscure in March.

Two autonomous driving companies received major funding. British AI company Wayve received a $1.05 billion Series C investment from SoftBank, with contributions from NVIDIA and Microsoft. Hyundai spent an additional $475 million on Motional, according to its recent earnings report.

The automotive imaging market grew to $5.7 billion in 2023, driven by increased production, demand for autonomy, and higher-resolution offerings.

Automotive Grade Linux (AGL), a collaborative cross-industry effort developing an open source platform for all software-defined vehicles (SDVs), released cloud-native functionality, RISC-V architecture support, and Flutter applications.

Security

SRAM security concerns are intensifying as a combination of new and existing techniques allow hackers to tap into data for longer periods of time after a device is powered down. This is particularly alarming as the leading edge of design shifts to heterogeneous systems in package, where chiplets frequently have their own memory hierarchy.

Machine learning is being used by hackers to find weaknesses in chips and systems, but it also is starting to be used to prevent breaches by pinpointing hardware and software design flaws.

TXOne Networks, a provider of cyber-physical systems security, raised $51 million in a Series B extension round of funding.

The U.S. Department of Justice charged a Russian national for his role as the creator, developer, and administrator of LockBit, a prolific ransomware group that allegedly stole $100 million in payments from 2,000 victims.

The Cybersecurity and Infrastructure Security Agency (CISA) launched “We Can Secure Our World,” a new public awareness program promoting “basic cyber hygiene.” The agency also issued a number of alerts and advisories.

Product News

Siemens unveiled its Solido IP Validation Suite software, an automated quality assurance product designed to work across all design IP types and formats. The suite includes Solido Crosscheck and IPdelta software, which both provide in-view, cross-view and version-to-version QA checks.

proteanTecs announced its lifecycle monitoring solution is being integrated into SAPEON’s new AI processors.

SpiNNcloud Systems revealed its SpiNNaker2 system, an event-based AI supercomputer built from chips that each contain a mesh of 152 ARM-based cores. The platform can emulate 10 billion neurons while maintaining power efficiency and reliability.

Ansys partnered with Schrodinger to develop new computational materials. The collaboration will see Schrodinger’s molecular modeling technology used in Ansys’ simulation tools to evaluate performance ahead of the prototype phase.

Keysight added a pulse generator to its handheld radio frequency analyzer software options. The Option 357 pulse generator is downloadable on B- and C-Series FieldFox analyzers.

Education and Training

Semiconductor fever is hitting academia:

  • Penn State discussed its role in leading 15 universities to drive advances in chip integration and packaging.
  • Georgia Tech explained how its research spans all levels of the “semiconductor stack,” touting its 28,500 square feet of academic cleanroom space.
  • And in the past month, Purdue University, Dassault Systèmes, and Lam Research expanded an existing deal to use virtual twins and simulation tools in workforce development.

Arizona State University is beefing up its technology programs with new bachelor’s and doctoral degrees in robotics and autonomous systems.

Microsoft is partnering with Gateway Technical College in Wisconsin to create a Data Center Academy to train Wisconsinites for data center and STEM roles by 2030.

Research

Stanford-led researchers built an augmented reality headset from ordinary-appearing glasses, combining waveguide display techniques, holographic imaging, and AI.

UC Berkeley, LLNL, and MIT engineered miniaturized on-chip energy storage and power delivery, using an atomic-scale approach to modify electrostatic capacitors.

ORNL and other researchers observed a “surprising isotope effect in the optoelectronic properties of a single layer of molybdenum disulfide” when they substituted a heavier isotope of molybdenum in the crystal.

Three U.S. national labs are partnering with NVIDIA to develop advanced memory technologies for high performance computing.

In-Depth

In addition to this week’s Automotive, Security and Pervasive Computing newsletter, here are more top stories and tech talk from the week:

Events

Find upcoming chip industry events here, including:

Event | Date | Location
ASMC: Advanced Semiconductor Manufacturing Conference | May 13 – 16 | Albany, NY
ISES Taiwan 2024: International Semiconductor Executive Summit | May 14 – 15 | New Taipei City
Ansys Simulation World 2024 | May 14 – 16 | Online
Women In Semiconductors | May 16 | Albany, NY
European Test Symposium | May 20 – 24 | The Hague, Netherlands
NI Connect Austin 2024 | May 20 – 22 | Austin, Texas
ITF World 2024 (imec) | May 21 – 22 | Antwerp, Belgium
Embedded Vision Summit | May 21 – 23 | Santa Clara, CA
ASIP Virtual Seminar 2024 | May 22 | Online
Electronic Components and Technology Conference (ECTC) 2024 | May 28 – 31 | Denver, Colorado
Hardwear.io Security Trainings and Conference USA 2024 | May 28 – Jun 1 | Santa Clara, CA
Find All Upcoming Events Here

Upcoming webinars are here.

Further Reading

Read the latest special reports and top stories, or check out the latest newsletters:

Automotive, Security and Pervasive Computing
Systems and Design
Low Power-High Performance
Test, Measurement and Analytics
Manufacturing, Packaging and Materials



SRAM Security Concerns Grow

By: Karen Heyman
May 9, 2024 at 09:08

SRAM security concerns are intensifying as a combination of new and existing techniques allow hackers to tap into data for longer periods of time after a device is powered down.

This is particularly alarming as the leading edge of design shifts from planar SoCs to heterogeneous systems in package, such as those used in AI or edge processing, where chiplets frequently have their own memory hierarchy. Until now, most cybersecurity concerns involving volatile memory have focused on DRAM, because it is often external and easier to attack. SRAM, in contrast, does not contain a component as obviously vulnerable as a heat-sensitive capacitor, and in the past it has been harder to pinpoint. But as SoCs are disaggregated and more features are added into devices, SRAM is becoming a much bigger security concern.

The attack scheme is well understood. Known as cold boot, it was first identified in 2008, and is essentially a variant of a side-channel attack. In a cold boot approach, an attacker dumps data from internal SRAM to an external device, and then restarts the system from the external device with some code modification. “Cold boot is primarily targeted at SRAM, with the two primary defenses being isolation and in-memory encryption,” said Vijay Seshadri, distinguished engineer at Cycuity.

Compared with an attack like DRAM’s rowhammer, cold boot is relatively simple. It relies on nothing more than physical proximity and a can of compressed air.

The vulnerability was first described by Edward Felten, director of Princeton University’s Center for Information Technology Policy, J. Alex Halderman, currently director of the Center for Computer Security & Society at the University of Michigan, and colleagues. The breakthrough in their research was the growing realization in the engineering research community that data does not vanish from memory the moment a device is turned off, overturning what was until then a common assumption. Instead, data in both DRAM and SRAM has a brief “remanence.”[1]

Using a cold boot approach, data can be retrieved, especially if an attacker sprays the chip with compressed air, cooling it enough to slow the degradation of the data. As the researchers described their approach, “We obtained surface temperatures of approximately −50°C with a simple cooling technique — discharging inverted cans of ‘canned air’ duster spray directly onto the chips. At these temperatures, we typically found that fewer than 1% of bits decayed even after 10 minutes without power.”

Unfortunately, despite more than 15 years of security research since the publication of the Halderman paper, the authors’ warning still holds true. “Though we discuss several strategies for mitigating these risks, we know of no simple remedy that would eliminate them.”

However unrealistic, there is one simple and obvious remedy to cold boot — never leave a device unattended. But given human behavior, it’s safer to assume that every device is vulnerable, from smart watches to servers, as well as automotive chips used for increasingly autonomous driving.

While the original research exclusively examined DRAM, within the last six years cold boot has proven to be one of the most serious vulnerabilities for SRAM. In 2018, researchers at Germany’s Technische Universität Darmstadt published a paper describing a cold boot attack method that is highly resistant to memory erasure techniques, and which can be used to manipulate the cryptographic keys produced by the SRAM physical unclonable function (PUF).

As with so many security issues, it’s been a cat-and-mouse game between remedies and counter-attacks. And because cold boot takes advantage of slowed memory degradation, in 2022 Yang-Kyu Choi and colleagues at the Korea Advanced Institute of Science and Technology (KAIST) described a way to undo the slowdown with an ultra-fast data sanitization method that works within 5 ns, using back bias to control the device parameters of CMOS.

Fig. 1: Asymmetric forward back-biasing scheme for permanent erasing. (a) All the data are reset to 1. (b) All the data are reset to 0. Whether all the data are reset to 1 or 0 is determined by the asymmetric forward back-biasing scheme. Source: KAIST/Creative Commons [2]

Their paper, as well as others, has inspired new approaches to combating cold boot attacks.

“To mitigate the risk of unauthorized access from unknown devices, main devices, or servers, check the authenticated code and unique identity of each accessing device,” said Jongsin Yun, memory technologist at Siemens EDA. “SRAM PUF is one of the ways to securely identify each device. SRAM is made of two inverters cross-coupled to each other. Although each inverter is designed to be the same device, normally one part of the inverter has a somewhat stronger NMOS than the other due to inherent random dopant fluctuation. During the initial power-on process, SRAM data will be either data 1 or 0, depending on which side has a stronger device. In other words, the initial data state of the SRAM array at the power on is decided by this unique random process variation and most of the bits maintain this property for life. One can use this unique pattern as a fingerprint of a device. The SRAM PUF data is reconstructed with other coded data to form a cryptographic key. SRAM PUF is a great way to anchor its secure data into hardware. Hackers may use a DFT circuit to access the memory. To avoid insecurely reading the SRAM information through DFT, the security-critical design makes DFT force delete the data as an initial process of TEST mode.”
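
Yun’s description lends itself to a short simulation. Below is a minimal Python sketch of the enrollment-and-reconstruction idea, with the cell count, noise rate, and read count invented for illustration; a production PUF stores error-correction helper data rather than re-reading the array, and feeds the corrected response into a proper key-derivation function.

```python
# Toy model of an SRAM PUF. Each cell powers up to a value set by a fixed
# manufacturing bias, occasionally flipped by thermal noise.
import hashlib
import random

N_CELLS = 256      # size of the PUF region (illustrative)
NOISE_P = 0.05     # per-bit flip probability on any single power-up

random.seed(1)
bias = [random.randint(0, 1) for _ in range(N_CELLS)]  # stable per device

def power_up_read():
    """One power-on dump: the biased value, sometimes flipped by noise."""
    return [b ^ (random.random() < NOISE_P) for b in bias]

def fingerprint(n_reads=15):
    """Majority-vote several dumps to average out the noise.
    A real design stores error-correction helper data instead of re-reading."""
    reads = [power_up_read() for _ in range(n_reads)]
    return bytes(sum(col) * 2 > n_reads for col in zip(*reads))

def derive_key(fp):
    # Stand-in for reconstructing the key from the PUF response plus coded data.
    return hashlib.sha256(fp).hexdigest()

enrolled = derive_key(fingerprint())    # done once, in a trusted setting
recovered = derive_key(fingerprint())   # later, in the field
print("key stable across power cycles:", enrolled == recovered)
```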

However, there can be instances where data may be required to be kept in a non-volatile memory (NVM). “Data is considered insecure if the NVM is located outside of the device,” said Yun. “Therefore, secured data needs to be stored within the device with write protection. One-time programmable (OTP) memory or fuses are good storage options to prevent malicious attackers from tampering with the modified information. OTP memory and fuses are used to store cryptographic keys, authentication information, and other critical settings for operation within the device. It is useful for anti-rollback, which prevents hackers from exploiting old vulnerabilities that have been fixed in newer versions.”

Chiplet vulnerabilities
Chiplets also could present another vector for attack, due to their complexity and interconnections. “A chiplet has memory, so it’s going to be attacked,” said Cycuity’s Seshadri. “Chiplets, in general, are going to exacerbate the problem, rather than keeping it status quo, because you’re going to have one chiplet talking to another. Could an attack on one chiplet have a side effect on another? There need to be standards to address this. In fact, they’re coming into play already. A chiplet provider has to say, ‘Here’s what I’ve done for security. Here’s what needs to be done when interfacing with another chiplet.’”

Yun notes there is a further physical vulnerability for those working with chiplets and SiPs. “When multiple chiplets are connected to form a SiP, we have to trust data coming from an external chip, which creates further complications. Verification of the chiplet’s authenticity becomes very important for SiPs, as there is a risk of malicious counterfeit chiplets being connected to the package for hacking purposes. Detection of such counterfeit chiplets is imperative.”

These precautions also apply when working with DRAM. In all situations, Seshadri said, thinking about security has to go beyond device-level protection. “The onus of protecting DRAM is not just on the DRAM designer or the memory designer,” he said. “It has to be secured by design principles when you are developing. In addition, you have to look at this holistically and do it at a system level. You must consider all the other things that communicate with DRAM or that are placed near DRAM. You must look at a holistic solution, all the way from software down to things like the memory controller and then finally, the DRAM itself.”

Encryption as a backup
Data itself must always be encrypted as a second layer of protection against known and novel attacks, so an organization’s assets will still be protected even if someone breaks in via cold boot or another method.

“The first and primary method of preventing a cold boot attack is limiting physical access to the systems, or physically modifying the system’s case or hardware to prevent an attacker’s access,” said Jim Montgomery, market development director, semiconductor, at TXOne Networks. “The most effective programmatic defense against an attack is to ensure encryption of memory using either a hardware- or software-based approach. Utilizing memory encryption will ensure that regardless of trying to dump the memory, or physically removing the memory, the encryption keys will remain secure.”

Montgomery also points out that TXOne is working with the Semiconductor Manufacturing Cybersecurity Consortium (SMCC) to develop common criteria based upon SEMI E187 and E188 standards to assist DMs and OEMs in implementing secure procedures for systems security and integrity, including controlling the physical environment.

What kind and how much encryption will depend on use cases, said Jun Kawaguchi, global marketing executive for Winbond. “Encryption strength for a traffic signal controller is going to be different from encryption for nuclear plants or medical devices, critical applications where you need much higher levels,” he said. “There are different strengths and costs to it.”

Another problem, in the post-quantum era, is that encryption itself may be vulnerable. To defend against those possibilities, researchers are developing post-quantum encryption schemes. One way to stay a step ahead is homomorphic encryption (HE), which will find a role in data sharing, since computations can be performed on encrypted data without first having to decrypt it.
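
To make “computing on encrypted data” concrete, below is a toy Paillier cryptosystem, an additively homomorphic scheme in which multiplying two ciphertexts yields a ciphertext of the sum of the plaintexts. The primes here are demo-sized and deliberately insecure; production HE uses vetted libraries and, for general computation, lattice-based schemes such as BFV or CKKS.

```python
# Toy Paillier cryptosystem: additively homomorphic. Demo-sized primes only;
# never use parameters like these in practice.
from math import gcd

p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1                                       # standard generator choice
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)    # lcm(p-1, q-1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)             # decryption constant

def encrypt(m, r):
    # r must be random and coprime to n; fixed here for reproducibility.
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(41, 17), encrypt(59, 23)
c_sum = (c1 * c2) % n2       # multiplying ciphertexts...
print(decrypt(c_sum))        # ...adds the plaintexts: prints 100
```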

Homomorphic encryption could be in widespread use as soon as the next few years, according to Ronen Levy, senior manager for IBM’s Cloud Security & Privacy Technologies Department, and Omri Soceanu, AI Security Group manager at IBM. However, there are still challenges to be overcome.

“There are three main inhibitors for widespread adoption of homomorphic encryption — performance, consumability, and standardization,” according to Levy. “The main inhibitor, by far, is performance. Homomorphic encryption comes with some latency and storage overheads. FHE hardware acceleration will be critical to solving these issues, as well as algorithmic and cryptographic solutions, but without the necessary expertise it can be quite challenging.”

An additional issue is that most consumers of HE technology, such as data scientists and application developers, do not possess deep cryptographic skills, so HE solutions designed for cryptographers can be impractical. Solutions that require algorithmic and cryptographic expertise inhibit adoption by those who lack those skills.

Finally, there is a lack of standardization. “Homomorphic encryption is in the process of being standardized,” said Soceanu. “But until it is fully standardized, large organizations may be hesitant to adopt a cryptographic solution that has not been approved by standardization bodies.”

Once these issues are resolved, they predicted widespread use as soon as the next few years. “Performance is already practical for a variety of use cases, and as hardware solutions for homomorphic encryption become a reality, more use cases would become practical,” said Levy. “Consumability is addressed by creating more solutions, making it easier and hopefully as frictionless as possible to move analytics to homomorphic encryption. Additionally, standardization efforts are already in progress.”

A new attack and an old problem
Unfortunately, security never will be as simple as making users more aware of their surroundings. Otherwise, cold boot could be completely eliminated as a threat. Instead, it’s essential to keep up with conference talks and the published literature, as graduate students keep probing SRAM for vulnerabilities, hopefully one step ahead of genuine attackers.

For example, SRAM-related cold boot attacks originally targeted discrete SRAM. The reason is that it’s far more complicated to attack on-chip SRAM, which is isolated from external probing and has minimal intrinsic capacitance. However, in 2022, Jubayer Mahmod, then a graduate student at Virginia Tech, and his advisor, associate professor Matthew Hicks, demonstrated what they dubbed “Volt Boot,” a new method that could penetrate on-chip SRAM. According to their paper, “Volt Boot leverages asymmetrical power states (e.g., on vs. off) to force SRAM state retention across power cycles, eliminating the need for traditional cold boot attack enablers, such as low-temperature or intrinsic data retention time…Unlike other forms of SRAM data retention attacks, Volt Boot retrieves data with 100% accuracy — without any complex post-processing.”

Conclusion
While scientists and engineers continue to identify vulnerabilities and develop security solutions, the decision about how much security to include in a design is an economic one. Cost vs. risk is a complex formula that depends on the end application, the impact of a breach, and the likelihood that an attack will occur.

“It’s like insurance,” said Kawaguchi. “Security engineers and people like us who are trying to promote security solutions get frustrated because, similar to insurance pitches, people respond with skepticism. ‘Why would I need it? That problem has never happened before.’ Engineers have a hard time convincing their managers to spend that extra dollar on the costs because of this ‘it-never-happened-before’ attitude. In the end, there are compromises. Yet ultimately, it’s going to cost manufacturers a lot of money when suddenly there’s a deluge of demands to fix this situation right away.”

References

  1. S. Skorobogatov, “Low temperature data remanence in static RAM”, Technical report UCAM-CL-TR-536, University of Cambridge Computer Laboratory, June 2002.
  2. Han, SJ., Han, JK., Yun, GJ. et al. Ultra-fast data sanitization of SRAM by back-biasing to resist a cold boot attack. Sci Rep 12, 35 (2022). https://doi.org/10.1038/s41598-021-03994-2



Securing AI In The Data Center

By: Bart Stevens
May 9, 2024 at 09:07

AI has permeated virtually every aspect of our digital lives, from personalized recommendations on streaming platforms to advanced medical diagnostics. Behind the scenes of this AI revolution lies the data center, which houses the hardware, software, and networking infrastructure necessary for training and deploying AI models. Securing AI in the data center relies on data confidentiality, integrity, and authenticity throughout the AI lifecycle, from data preprocessing to model training and inference deployment.

High-value datasets containing sensitive information, such as personal health records or financial transactions, must be shielded from unauthorized access. Robust encryption mechanisms, such as the Advanced Encryption Standard (AES), coupled with secure key management practices, form the foundation of data confidentiality in the data center. The encryption key used must be unique and used in a secure environment. Encryption and decryption operations occur constantly, and they must be performed in a way that prevents key leakage. Should a compromise arise, it should be possible to renew the key securely and re-encrypt data with the new key.

The encryption key must also be securely stored in a location that unauthorized processes or individuals cannot access. Keys must be protected from attempts to read them from the device and from attempts to steal them using techniques such as side-channel attacks (SCA) or fault injection attacks (FIA). The multi-tenancy of modern data centers calls for robust SCA protection of key data.

Hardware-level security plays a pivotal role in safeguarding AI within the data center, offering built-in protections against a wide range of threats. Trusted Platform Modules (TPMs), secure enclaves, and Hardware Security Modules (HSMs) provide secure storage and processing environments for sensitive data and cryptographic keys, shielding them from unauthorized access or tampering. By leveraging hardware-based security features, organizations can enhance the resilience of their AI infrastructure and mitigate the risk of attacks targeting software vulnerabilities.

Ideally, secure cryptographic processing is handled by a Root of Trust core. The AI service provider manages the Root of Trust firmware, but the core can also load secure applications that customers write to implement their own cryptographic key management and storage. The Root of Trust can be integrated in the host CPU that orchestrates the AI operations, decrypting the AI model and its specific parameters before those are fed to AI or network accelerators (GPUs or NPUs). It can also be integrated directly with the GPUs and NPUs to perform encryption/decryption at that level. These GPUs and NPUs may also choose to store AI workloads and inference models in encrypted form in their local memory banks and decrypt the data on the fly when access is required. Dedicated on-the-fly, low-latency in-line memory decryption engines based on the AES-XTS algorithm can keep up with the memory bandwidth, ensuring that the process is not slowed down.
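
As a software illustration of address-tweaked in-line memory encryption, the sketch below uses the AES-XTS mode from the Python cryptography package. The 32-byte “memory line” and the tweak derived from a hypothetical block address are illustrative; hardware engines implement the same algorithm at memory bandwidth.

```python
# Address-tweaked memory encryption with AES-XTS (software illustration of
# what a dedicated in-line engine does at memory bandwidth).
import os
import struct
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(64)  # AES-256-XTS: two 256-bit keys concatenated

def xts(block_addr, data, op):
    tweak = struct.pack("<QQ", block_addr, 0)   # 16-byte tweak from the address
    ctx = Cipher(algorithms.AES(key), modes.XTS(tweak))
    stage = ctx.encryptor() if op == "enc" else ctx.decryptor()
    return stage.update(data) + stage.finalize()

line = b"model-weights-0123456789abcdef__"      # one 32-byte "memory line"
ct = xts(0x1000, line, "enc")
assert xts(0x1000, ct, "dec") == line           # round-trips at the same address
assert xts(0x2000, line, "enc") != ct           # same data, new address, new ciphertext
```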

AI training workloads are often distributed among dozens of devices connected via PCIe or high-speed networking technology such as 800G Ethernet. An efficient confidentiality and integrity protocol such as MACsec using the AES-GCM algorithm can protect the data in motion over high-speed Ethernet links. AES-GCM engines integrated with the server SoC and the PCIe acceleration boards ensure that traffic is authenticated and optionally encrypted.
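
The per-frame protection model can be sketched with an AES-GCM AEAD: the header travels in the clear but is authenticated as associated data, while the payload is encrypted and authenticated, mirroring MACsec’s integrity-plus-optional-confidentiality semantics. The field values below are illustrative, and real MACsec derives its nonce from the secure channel identifier and packet number rather than at random.

```python
# MACsec-style frame protection with AES-GCM: the header is sent in the clear
# but authenticated (AAD); the payload is encrypted and authenticated.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # plays the role of a MACsec SAK
aead = AESGCM(key)

header = bytes.fromhex("0180c2000001") + b"\x88\xe5"  # illustrative frame fields
payload = b"gradient shard 42"
nonce = os.urandom(12)  # real MACsec builds this from the SCI + packet number

ciphertext = aead.encrypt(nonce, payload, header)    # encrypt + MAC in one call
recovered = aead.decrypt(nonce, ciphertext, header)  # raises if anything was tampered
assert recovered == payload
```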

Rambus offers a broad portfolio of security IP covering the key security elements needed to protect AI in the data center. Rambus Root of Trust IP cores ensure a secure boot protocol that protects the integrity of its firmware. This can be combined with Rambus inline memory encryption engines, as well as dedicated solutions for MACsec up to 800G.




Using AI/ML To Combat Cyberattacks

By: John Koon
May 9, 2024 at 09:07

Machine learning is being used by hackers to find weaknesses in chips and systems, but it also is starting to be used to prevent breaches by pinpointing hardware and software design flaws.

To make this work, machine learning (ML) must be trained to identify vulnerabilities, both in hardware and software. With proper training, ML can detect cyber threats and prevent them from accessing critical data. As ML encounters additional cyberattack scenarios, it can learn and adapt, helping to build a more sophisticated defense system that includes hardware, software, and how they interface with larger systems. It also can automate many cyber defense tasks with minimum human intervention, which saves time, effort, and money.

ML is capable of sifting through large volumes of data much faster than humans. Potentially, it can reduce or remove human errors, lower costs, and boost cyber defense capability and overall efficiency. It also can perform such tasks as connection authentication, system design, vulnerability detection, and most important, threat detection through pattern and behavioral analysis.

“AI/ML is finding many roles protecting and enhancing security for digital devices and services,” said David Maidment, senior director of market development at Arm. “However, it is also being used as a tool for increasingly sophisticated attacks by threat actors. AI/ML is essentially a tool tuned for very advanced pattern recognition across vast data sets. Examples of how AI/ML can enhance security include network-based monitoring to spot rogue behaviors at scale, code analysis to look for vulnerabilities on new and legacy software, and automating the deployment of software to keep devices up-to-date and secure.”

This means that while AI/ML can be used as a force for good, inevitably bad actors will use it to increase the sophistication and scale of attacks. “Building devices and services based on security best practices, having a hardware-protected root of trust (RoT), and an industry-wide methodology to standardize and measure security are all essential,” Maidment said. “The focus on security, including the rapid growth of AI/ML, is certainly driving industry and government discussions as we work on solutions to maximize AI/ML’s benefits and minimize any potential harmful impact.”

Zero trust is a fundamental requirement when it comes to cybersecurity. Before a user or device is allowed to connect to the network or server, requests have to be authenticated to make sure they are legitimate and authorized. ML will enhance the authentication process, including password management, phishing prevention, and malware detection.

Areas that bad actors look to exploit are software design vulnerabilities and weak points in systems and networks. Once hackers uncover these vulnerabilities, they can be used as a point of entrance to the network or systems. ML can detect these vulnerabilities and alert administrators.

Taking a proactive approach by doing threat detection is essential in cyber defense. ML pattern and behavioral analysis strengths support this strategy. When ML detects unusual behavior in data traffic flow or patterns, it sends an alert about abnormal behavior to the administrator. This is similar to the banking industry’s practice of watching for credit card use that does not follow an established pattern. A large purchase overseas on a credit card with a pattern of U.S. use only for moderate amounts would trigger an alert, for example.
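
Below is a minimal sketch of that kind of behavioral baselining, using an isolation forest over synthetic flow features; the feature choices and numbers are invented for illustration, not drawn from any product discussed here.

```python
# Behavioral anomaly detection over (bytes transferred, duration) flow features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Baseline traffic: modest transfer sizes and durations (synthetic).
normal_flows = rng.normal(loc=[50_000, 2.0], scale=[15_000, 0.5], size=(1_000, 2))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_flows)

# A flow that breaks the learned pattern, like the overseas charge above.
suspect = np.array([[5_000_000, 40.0]])
print(model.predict(suspect))  # -1 means anomalous, 1 means consistent with baseline
```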

As hackers become more sophisticated with new attack vectors, whether it is new ransomware or distributed denial of service (DDoS) attacks, ML will do a much better job than humans in detecting these unknown threats.

Limitations of ML in cybersecurity
While ML provides many benefits, its value depends on the data used to train it. The more data that can be used to train the ML model, the better it is at detecting fraud and cyber threats. But acquiring this data raises overall cybersecurity system design expenses. The model also needs constant maintenance and tuning to sustain peak performance and meet the specific needs of users. And while ML can handle many tasks, it still requires some human involvement, so it’s essential to understand both cybersecurity and how well ML functions.

While ML is effective in fending off many of the cyberattacks, it is not a panacea. “The specific type of artificial intelligence typically referenced in this context is machine learning (ML), which is the development of algorithms that can ingest large volumes of training data, then generalize and make meaningful observations and decisions based on novel data,” said Scott Register, vice president of security solutions at Keysight Technologies. “With the right algorithms and training, AI/ML can be used to pinpoint cyberattacks which might otherwise be difficult to detect.”

However, no one — at least in the commercial space — has delivered a product that can detect very subtle cyberattacks with complete accuracy. “The algorithms are getting better all the time, so it’s highly probable that we’ll soon have commercial products that can detect and respond to attacks,” Register said. “We must keep in mind, however, that attackers don’t sit still, and they’re well-funded and patient. They employ ‘offensive AI,’ which means they use the same types of techniques and algorithms to generate attacks which are unlikely to be detected.”

ML implementation considerations
For any ML implementation, a strong cyber defense system is essential, but there’s no such thing as a completely secure design. Instead, security is a dynamic and ongoing process that requires constant fine-tuning and improvement against ever-changing cyberattacks. Implementing ML requires a clear security roadmap, which should define requirements. It also requires implementing a good cybersecurity process, which secures individual hardware and software components, as well as some type of system testing.

“One of the things we advise is to start with threat modeling to identify a set of critical design assets to protect from an adversary under confidentiality or integrity,” said Jason Oberg, CTO at Cycuity. “From there, you can define a set of very succinct, secure requirements for the assets. All of this work is typically done at the architecture level. We do provide education, training and guidance to our customers, because at that level, if you don’t have succinct security requirements defined, then it’s really hard to verify or check something in the design. What often happens is customers will say, ‘I want to have a secure chip.’ But it’s not as easy as just pressing a button and getting a green check mark that confirms the chip is now secure.”

To be successful, engineering teams must start at the architectural stages and define the security requirements. “Once that is done, they can start actually writing the RTL,” Oberg said. “There are tools available to provide assurances these security requirements are being met, and run within the existing simulation and emulation environments to help validate the security requirements, and help identify any unknown design weaknesses. Generally, this helps hardware and verification engineers increase their productivity and build confidence that the system is indeed meeting the security requirements.”

Fig. 1: A cybersecurity model includes multiple stages, progressing from the very basic to in-depth. It is important for organizations to know what stage their cyber defense systems have reached. Source: Cycuity

Steve Garrison, senior vice president of marketing at Stellar Cyber, noted that the detection process may generate so many data files that humans cannot easily sort through them. Graphical displays can speed up the process and reduce the overall mean time to detection (MTTD) and mean time to response (MTTR).

Fig. 2: Using graphical displays reduces the overall mean time to detection (MTTD) and mean time to response (MTTR). Source: Stellar Cyber

Testing is essential
Another important stage in the design process is testing. Each system design requires rigorous attack simulation to weed out basic oversights and ensure it meets the predefined standard.

“First, if you want to understand how defensive systems will function in the real world, it’s important to test them under conditions that are as realistic as possible,” Keysight’s Register said. “The network environment should have the same amount of traffic, mix of applications, speeds, behavioral characteristics, and timing as the real world. For example, the timing of a sudden uptick in email and social media traffic corresponds to the time when people open up their laptops at work. The attack traffic needs to be as realistic as possible as well – hackers try hard not to be noticed, often preferring ‘low and slow’ attacks, which may take hours or days to complete, making detection much more difficult. The same obfuscation techniques, encryption, and decoy traffic employed by threat actors needs to be simulated as accurately as possible.”

Further, due to mistaken assumptions during testing, defensive systems often perform great in the lab, yet fail spectacularly in production networks. “Afterwards we hear, for example, ‘I didn’t think hackers would encrypt their malware,’ or ‘Internal e-mails weren’t checked for malicious attachments, only those from external senders,’” Register explained. “Also, in security testing, currency is key. Attacks and obfuscation techniques are constantly evolving. If a security system is tested against stale attacks, then the value of that testing is limited. The offensive tools should be kept as up to date as possible to ensure the most effective performance against the tools a system is likely to encounter in the wild.”

Semiconductor security
Almost all system designs depend on semiconductors, so it is important to ensure that any and all chips, firmware, FPGAs, and SoCs are secure – including those that perform ML functionality.

“Semiconductor security is a constantly evolving problem and requires an adaptable solution,” said Jayson Bethurem, vice president of marketing and business development at Flex Logix. “Fixed solutions with current cryptography that are implemented today will inevitably be challenged in the future. Hackers today have more time, resources, training, and motivation to disrupt technology. With technology increasing in every facet of our lives, defending against this presents a real challenge. We also have to consider upcoming threats, namely quantum computing.”

Many predict that quantum computing will be able to crack current cryptography solutions in the next few years. “Fortunately, semiconductor manufacturers have solutions that can enable cryptography agility, which can dynamically adapt to evolving threats,” Bethurem said. “This includes both updating hardware accelerated cryptography algorithms and obfuscating them, an approach that increases root of trust and protects valuable IP secrets. Advanced solutions like these also involve devices randomly creating their own encryption keys, making it harder for algorithms to crack encryption codes.”

Advances in AI/ML algorithms can adapt to new threats and reduce the latency of algorithm updates from manufacturers. This is particularly useful with reconfigurable eFPGA IP, which can be implemented in any semiconductor device to counter current and emerging threats, and which can be optimized to run AI/ML-based cryptography solutions. The result is a combination of high-performance processing, scalability, and low-latency attack response.
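
In software terms, cryptographic agility amounts to treating the algorithm as replaceable data rather than a hard-wired call, so a signed update can register a new suite without touching callers. A minimal Python sketch of the dispatch pattern, with suite names chosen for illustration:

```python
# Crypto agility as a dispatch table: algorithms are registered, not hard-wired,
# so an update can add or retire a suite without changing call sites.
import hashlib

SUITES = {
    "sha2-256": hashlib.sha256,
    "sha3-256": hashlib.sha3_256,  # added later without touching callers
}

def digest(suite_id: str, data: bytes) -> bytes:
    try:
        return SUITES[suite_id](data).digest()
    except KeyError:
        raise ValueError(f"unsupported or retired suite: {suite_id}")

print(digest("sha3-256", b"firmware image").hex())
```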

Chips that support AI/ML algorithms need not only computing power, but also accelerators for those algorithms. In addition, all of this needs to happen without exceeding a tight power budget.

“More AI/ML systems run at tiny edges rather than at the core,” said Detlef Houdeau, senior director of design system architecture at Infineon Technologies. “AI/ML systems don’t need any bigger computer and/or cloud. For instance, a Raspberry Pi for a robot in production can have more than 3 AI/ML algorithms working in parallel. A smartphone has more than 10 AI/ML functions in the phone, and downloading new apps brings new AI/ML algorithms into the device. A pacemaker can have 2 AI/ML algorithms. Security chips, meanwhile, need a security architecture as well as accelerators for encryption. Combining an AI/ML accelerator with an encryption accelerator in the same chip could increase the performance in microcontroller units, and at the same time foster more security at the edge. The next generation of microelectronics could show this combination.”

After developers have gone through design reviews and the systems have run vigorous tests, it helps to have third-party certification and/or credentials to ensure the systems are indeed secure from a third-party independent viewpoint.

“As AI, and recently generative AI, continue to transform all markets, there will be new attack vectors to mitigate against,” said Arm’s Maidment. “We expect to see networks become smarter in the way they monitor traffic and behaviors. The use of AI/ML allows network-based monitoring at scale to allow potential unexpected or rogue behavior to be identified and isolated. Automating network monitoring based on AI/ML will allow an extra layer of defense as networks scale out and establish effectively a ‘zero trust’ approach. With this approach, analysis at scale can be tuned to look at particular threat vectors depending on the use case.”

With an increase in AI/ML adoption at the edge, a lot of this is taking place on the CPU. “Whether it is handling workloads in their entirety, or in combination with a co-processor like a GPU or NPU, how applications are deployed across the compute resources needs to be secure and managed centrally within the edge AI/ML device,” Maidment said. “Building edge AI/ML devices based on a hardware root of trust is essential. It is critical to have privileged access control of what code is allowed to run where using a trusted memory management architecture. Arm continually invests in security, and the Armv9 architecture offers a number of new security features. Alongside architecture improvements, we continue to work in partnership with the industry on our ecosystem security framework and certification scheme, PSA Certified, which is based on a certified hardware RoT. This hardware base helps to improve the security of systems and fulfill the consumer expectation that as devices scale, they remain secure.”

Outlook
It is important to understand that threat actors will continue to evolve attacks using AI/ML. Experts suggest that to counter such attacks, organizations, institutions, and government agencies will have to continually improve defense strategies and capabilities, including AI/ML deployment.

AI/ML can be used as a weapon by attackers for industrial espionage and/or sabotage, and stopping incursions will require a broad range of cyberattack prevention and detection tools, including AI/ML functionality for anomaly detection. But in general, hackers are almost always one step ahead.

According to Register, “the recurring cycle is: 1) hackers come out with a new tool or technology that lets them attack systems or evade detection more effectively; 2) those attacks cause enough economic damage that the industry responds and develops effective countermeasures; 3) the no-longer-new hacker tools are still employed effectively, but against targets that haven’t bothered to update their defenses; 4) hackers develop new offensive tools that are effective against the defensive techniques of high-value targets, and the cycle starts anew.”

Related Reading
Securing Chip Manufacturing Against Growing Cyber Threats
Suppliers are the number one risk, but reducing attacks requires industry-wide collaboration.
Data Center Security Issues Widen
The number and breadth of hardware targets is increasing, but older attack vectors are not going away. Hackers are becoming more sophisticated, and they have a big advantage.



Overcoming Chiplet Integration Challenges With Adaptability

By: Jayson Bethurem
May 9, 2024 at 09:06

Chiplets are exploding in popularity due to key benefits such as lower cost, lower power, higher performance and greater flexibility to meet specific market requirements. More importantly, chiplets can reduce time-to-market, thus decreasing time-to-revenue! Heterogeneous and modular SoC design can accelerate innovation and adaptation for many companies. What’s not to like about chiplets? Well, as chiplets come to fruition, we are starting to realize many of the complications of chiplet designs.

Fig. 1: Expanded chiplet system view.

The interface challenge

The primary concept of chiplets is the integration of ICs (integrated circuits) from multiple companies. However, many of these ICs were not designed with interoperation in mind, partly because of the lack of interconnect standards around chiplets. Moreover, each IC has its own computational and bandwidth requirements. This is further complicated as competing interface standards vie for adoption, as shown in table 1.

Standard | Throughput | Density | Max Delay
Advanced Interface Bus (Intel, AIB) | 2 Gbps | 504 Gbps/mm | 5 ns
Bandwidth Engine | 10.3 Gbps | N/A | 2.4 ns
BoW (Bunch of Wires) | 16 Gbps | 1280 Gbps/mm | 5 ns
HBM3 (JEDEC) | 4.8 Gbps | N/A | N/A
Infinity Fabric (AMD) | 10.6 Gbps | N/A | 9 ns
LIPINCON (TSMC) | 2.8 Gbps | 536 Gbps/mm | 14 ns
Multi-Die I/O (Intel) | 5.4 Gbps | 1600 Gbps/mm | N/A
XSR/USR (Rambus) | 112 Gbps | N/A | N/A
UCIe | 32 Gbps | 1350 Gbps/mm | 2 ns

Table 1: Chiplet interconnect options.

Chiplet interconnects today are dominated by UCIe (Universal Chiplet Interconnect Express) and the unimaginatively named BoW (Bunch of Wires). UCIe introduced its 1.0 spec, and as with any first edition of a specification, updates inevitably follow. UCIe 1.1 fixes several holes and gaps in 1.0, addressing gray areas, missing definitions, ECNs, and more. It is very likely not the last update, as UCIe’s vision is to grow up the stack, adding protocol layers on top of the system layers.

Because of the newness and expected evolution of the UCIe and BoW protocols, designing them in carries risk. Additionally, there will always be a place for multiple die-to-die interfaces beyond UCIe. Specific use cases and designs will inherently be matched to different metrics, leading many designs to fall back on proprietary interfaces.

As you can see, there are many choices, and most involve tradeoffs. Integrating a variety of these protocols into a chiplet benefits greatly from adaptability, via data/protocol adaptation that can easily be enabled with embedded programmable logic, or eFPGA. A lightweight protocol shim implemented in eFPGA IP can not only reformat data but also buffer it to maximize internal processing. Finally, consider that data between ICs in a chiplet can be globally asynchronous, another task easily handled by eFPGA IP with FIFO synchronizers, as the sketch below illustrates.
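
A small behavioral model shows what such a shim does, here repacking 64-bit beats from one die into 32-bit beats for a narrower link, with an elastic buffer in between. The class and widths are illustrative; in eFPGA this would be a FIFO plus repacking logic, with gray-coded pointers handling the clock-domain crossing.

```python
# Behavioral model of a die-to-die shim: buffer 64-bit beats and repack them
# as 32-bit beats for a narrower downstream interface.
from collections import deque

class WidthShim:
    def __init__(self):
        self.fifo = deque()  # elastic buffer between the two clock domains

    def push64(self, word64: int) -> None:
        """Producer side: split one 64-bit beat into two 32-bit beats."""
        self.fifo.append((word64 >> 32) & 0xFFFFFFFF)
        self.fifo.append(word64 & 0xFFFFFFFF)

    def pop32(self):
        """Consumer side: drain a beat when the downstream link is ready."""
        return self.fifo.popleft() if self.fifo else None

shim = WidthShim()
shim.push64(0xDEADBEEF_0000CAFE)
assert shim.pop32() == 0xDEADBEEF
assert shim.pop32() == 0x0000CAFE
```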

The security challenge

Beyond the interfaces, security is another emerging challenge. A few factors of chiplets must be cautiously considered:

  • Varying ICs from unknown, and possibly disreputable, manufacturers
  • ICs that can contain internal IP from additional third-party sources
  • ICs that may receive and introduce external data into the system

Naturally, this calls for attestation and provenance to ensure vendor confidence. As such, root of trust generally starts with the supply chain and auditing all vendors. However, it takes only one failed component, the least secure one, to jeopardize the entire system.

Root of trust thus becomes an issue, and it raises a further question: which IC, or ICs, in the chiplet manage the root of trust? As we’ve seen time and time again, security threats evolve at an alarming rate. But chiplets have an opportunity here. Again, embedded FPGAs have the flexibility to adapt, thwarting these evolving threats. eFPGA IP can also physically disable unused interfaces, minimizing the attack surface.

Adaptable cryptography cores can perform a variety of tasks with high performance in eFPGA IP. These tasks include authentication/digital signing, key generation, encapsulation/decapsulation, random number generation, and much more. Further, post-quantum security cores that run very efficiently on eFPGA are becoming available. Figure 2 shows an ML-KEM (Kyber) encapsulation module from Xiphera that fits into only four Flex Logix EFLX tiles, efficiently packed at 98% utilization, with throughput of over 2 Gbps.

Fig. 2: ML-KEM IP core from Xiphera implemented on Flex Logix EFLX eFPGA IP.
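
For readers unfamiliar with the flow such an encapsulation core implements, a KEM has three steps: key generation, encapsulation, and decapsulation. The sketch below uses a classical X25519 exchange from the Python cryptography package purely as a runnable stand-in for that three-step shape; ML-KEM replaces the underlying math with lattice problems but keeps the same interface.

```python
# The three-step KEM flow (keygen, encapsulate, decapsulate), sketched with a
# classical X25519 exchange as a runnable stand-in for ML-KEM/Kyber.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

receiver_sk = X25519PrivateKey.generate()
receiver_pk = receiver_sk.public_key()           # published by the receiver

# "Encapsulation": the sender creates an ephemeral key and derives a secret.
eph_sk = X25519PrivateKey.generate()
encapsulation = eph_sk.public_key()              # this is what crosses the wire
shared_tx = eph_sk.exchange(receiver_pk)

# "Decapsulation": the receiver recovers the same secret from the encapsulation.
shared_rx = receiver_sk.exchange(encapsulation)
assert shared_tx == shared_rx                    # both sides hold one symmetric key
```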

Managing all data communication within a chiplet seems daunting; however, it is feasible. Designers have the choice of implementing eFPGA on every IC in the chiplet for adaptable data signing, or standalone on the interposer, where system designers can define a secure enclave in which all data is authenticated and encrypted by an independent IC with eFPGA. eFPGA can also process streaming data at a very high rate, and in most cases it can keep up with line rate, as seen with programmable data planes in SmartNICs.

eFPGA can add another critical security benefit. Every instance of eFPGA in the chiplet offers the ability to obfuscate critical algorithms, cryptography and protocols. This enables manufacturers to protect design secrets by not only programming these features in a controlled environment, but also adapting these as threats evolve.

The validation problem

Again, the absence of fully defined industry standards presents integration challenges. Conventional methods of qualification, testing, and validation become increasingly complex. Yet this is another opportunity for eFPGA IP. It can be configured as an in-system diagnostic tool that provides testing, debugging, and observability, not only during IC bring-up but also at run time, eliminating finger-pointing between independent companies.

The reconfigurability solution

While we’ve discussed a few different chiplet issues and solutions with adaptable eFPGA, it is important to realize that a single instance of this IP can perform all of these functions in a chiplet, because eFPGA IP is completely reconfigurable. It can be time-sliced and configured differently during specific operational phases of the chiplet. As mentioned in the examples above, during IC bring-up it can provide insightful debug visibility into the system. During boot, it can enable secure boot and attested firmware updates for all ICs in the chiplet. During run time, it can perform cryptographic functions as well as independently manage a secure enclave environment. eFPGA is also well suited to any other software acceleration your applications need, as its heavily parallel and pipelined nature fits complex signal processing tasks. Lastly, during an RMA process it can help investigate and determine system failures. This is just a short list of the features eFPGA IP can enable in a chiplet.

Customizable for the perfect solution

Flex Logix EFLX IP delivers excellent PPA (Power, Performance and Area) and is available on the most advanced nodes, including Intel 18A and TSMC 7nm and 5nm. Furthermore, Flex Logix eFPGA IP is scalable – enabling you to choose the best balance of programmable logic, embedded memory and signal processing resources.

Fig. 3: Scalable Flex Logix eFPGA IP.

Want to learn more about Flex Logix IP? Contact us at [email protected] or visit our website https://flex-logix.com.

The post Overcoming Chiplet Integration Challenges With Adaptability appeared first on Semiconductor Engineering.

Software-Defined Vehicle Momentum Grows

By Ann Mutschler | May 9, 2024, 09:06

Experts at the Table: The automotive ecosystem is undergoing a transformation toward software-defined vehicles, spurring new architectures with more software. Semiconductor Engineering sat down to discuss the impact of these changes with Suraj Gajendra, vice president of products and solutions in Arm’s automotive line of business; Chuck Alpert, R&D automotive fellow at Cadence; Steve Spadoni, zone controller and power distribution application manager at Infineon; Rebeca Delgado, chief technology officer and principal AI engineer at Intel Automotive; Cyril Clocher, senior director in the automotive product line for high-performance computing at Renesas; David Fritz, vice president, hybrid and virtual systems at Siemens EDA; and Marc Serughetti, senior director, systems design group at Synopsys. What follows are excerpts of that discussion.

L-R: Arm’s Gajendra, Cadence’s Alpert, Infineon’s Spadoni, Intel’s Delgado, Renesas’ Clocher, Siemens’ Fritz, Synopsys’ Serughetti.

SE: The automotive ecosystem is undergoing a technology evolution the likes of which has not been seen, including the move to software-defined vehicles. To set a baseline for this discussion, what is your definition of an SDV?

Gajendra: A software-defined vehicle is a concept, a trend, an idea, where the whole ecosystem can drive new capabilities and new user experiences into the car, even after it rolls out of the showroom or dealership. It’s a pretty loaded concept. There’s a lot of infrastructure that needs to come together, such as software development in the cloud, seamless deployment of that software development onto the car, the whole deployment of over-the-air updates, and the connectivity. In short, the concept of a software-defined vehicle is expecting a world where we can drive new experiences, new capabilities, and new features into the car throughout its lifetime.

Alpert: In thinking about what SDV means, one example is the battery — especially in an EV. I’m not talking about the technology of the battery that’s evolved, but rather the idea that in the past when you wanted to charge your car in your garage and you were worried about starting a fire, you’d think, ‘No, don’t do that because your whole house could burn down.’ In the past, maybe we might put a temperature sensor on the battery, but now we actually have software that can monitor it. It might even have AI to predict if the battery is reaching some state that might cause a fire in the future. You also might have something that connects to the power grid and learns when is a good time to charge, because it’s a low-usage period so it’s cheaper. This is just one part of the car, but you can imagine a whole bunch of software that you want to put on top of it in order to connect to the universe. You need a software-defined vehicle platform in order for this, and all the other parts of your car, to communicate with the world and provide the best user experience.

Spadoni: Infineon’s definition of a software-defined vehicle is a redefining of architecture — specifically, electrical and electronic architecture, feature allocation, and the entire topology of the vehicle, from power generation and storage to power distribution and high compute. It really means new electrical architectures, and it has consequences for the business model of every OEM and Tier 1 involved. It’s a major change to previous methodologies in the last 30 years.

Delgado: Software-defined vehicle is not just over-the-air updates. It’s truly a new methodology and a new philosophy for how to architect every ingredient of the vehicle to continue to deliver value over time, in which the value is very tightly attached to the software that delivers the user experience. Ultimately, this architecture must enable the different practices on how to deliver this new value over time. What’s very interesting is that this practice of moving to a software-defined architecture has been adopted by many other industries already. Intel has a ton of heritage, and actually helped those industries transform. That transformation is truly what we’re observing here. It’s an incredible opportunity, and possibly a crisis if not done right.

Clocher: To apply an analogy here, the car is the new smartphone. But for us, it’s more than that. I’ve heard about the platform, yes, and it’s the major architecture evolution that we’ll see in the next decade. For us at Renesas, it will be a journey that will take time to enhance the user experience and to generate new revenue streams for the industry as it moves from decentralized to centralized compute with zonal architecture. We can apply all those buzzwords to a software-defined vehicle. Those platforms will need big compute and highly complex hardware solutions, and this will generate evolutions and upgrades to the car during its entire lifetime. But underneath we know – at least at Renesas, and certainly at some other players and silicon vendors – that this will need a huge amount of hardware resources to manage what we have in mind to deploy this platform.

Fritz: I see software-defined vehicles a bit differently than what’s been mentioned so far. For many years, you’d have the hardware team doing their design, and the software team doing their design, and it all needs to come together. There’s an English natural language discussion about what needs to happen, and as we all know, that never really goes terribly well. In automotive that becomes an integration storm, and it is a nightmare. With the new compute requirements that have been mentioned already, that just compounds the issue. So the way I see this is that we tend, as people who have an engineering background, to dive into how we’re going to do things. We hear ‘software-defined vehicle,’ we immediately think about how to do that. There’s not a lot of thought about why it needs to be done, and what needs to happen. We jump into the ‘how’ too early, and a lot of the discussion here is exemplary of that kind of approach. When I’m looking at software-defined vehicles, I’m looking at why it’s important that the software needs to run effectively on a piece of hardware. And for that hardware, why is it important for it to actually operate properly on the software? Then you can decide how to put together a new methodology that’s going to bring those things together. In the past, it’s been called hardware/software co-design. There have been attempts many times, and as has been mentioned, other industries have made this transition. What’s unique about automotive is that it’s not just one transition that needs to happen. It’s hundreds or thousands of transitions. The ecosystem needs to be turned upside down, which we’re seeing happen right now, and you need to bring all that together. It really is a methodology where you need the tooling, you need the processes, you need the thinking, you need the organizations to change so that they can make this transition in a realistic way. SDV is a huge transition. It is a way for the automotive industry to morph into something that has longevity and can meet customer expectations, which it really hasn’t met for some time now.

Serughetti: At the end of the day, looking at it from the top down from our perspective, SDV is a means to enhance the car experience for the customer. That’s the end result the OEMs look at, but they also look at how it improves OEM efficiency and how it creates new business opportunities. The way we look at it, what’s important is the impact it has on the industry – the impact on the processes, the methodologies, the people, the ecosystem, the technology. It’s really a transformation of the automotive market that is going to fundamentally change how the industry moves forward, bringing the OEMs into a world in which they are really looking at how they become efficient in delivering cars, how they bring new features, and, at the same time, how they evolve their business as well.

SE: As you’ve all described, SDV requires many inter-dependencies, and the entire ecosystem has to have an understanding of the ‘why,’ which should then lead back to laying out the plan for how to get there. Where does the ecosystem stand today in terms of realizing SDV?

Fritz: OEMs have decided in the last few years that they’ve got to take control of their own destiny. They cannot simply take what the suppliers provide. They need a methodology – like this whole SDV concept, and any tooling necessary to support it – to push down to their suppliers, such that, ‘Here’s what I need. If you can’t do this for me, I will find someone who will.’ This is not the old ecosystem that bubbled up from the IP providers to the Tier 2s, to the Tier 1s, and then to the OEMs, which gave them limited choices. So when I say, “Turn the ecosystem upside down,” that’s what is happening. But every OEM has its own ecosystem, and they’re not all in the same place. Even region-to-region, they can be very different.

Delgado: This is a critical discussion, and effectively where the industry has to eventually settle. The magnitude of the transformation of the ecosystem includes roles in the technology evolution. Silicon content in the vehicle is expected to quadruple over the next few years to define the in-cabin experience of the end user. At the end of the day, the complexity of the transition of roles is of such magnitude that the proprietary, fragmented, and broken approaches that David articulated are really not going to enable the industry to transform at the speed required to deliver and meet the experiences. But more than anything, they are not going to address the actual technology changes necessary to implement and allow for this value delivery mechanism. This is where Intel really believes collaboration is key, and anybody who wants to participate in this ecosystem must provide scalability — also known as top-to-bottom support of the different product lines that our OEMs and Tier 1s have to support, versus a broken-up approach to these ever-evolving, higher-performance compute needs. It has to be future-proof, because you’re going to launch the vehicle eventually. So certain hardware has to be future-proofed to a certain affordability envelope, and there has to be a strategy around that. And then the ecosystem and that collaboration must be able to deliver that aggregation, with certain anchoring technology that will allow us to deliver that performance. Collaboration is key in the sense that these technologies cannot be single-handedly defined, developed, and integrated by OEMs in silos with a proprietary end-to-end architecture definition. There obviously will be differentiation in the actual implementations, but the technologies at large have to have a sense of reuse, particularly from other verticals that have already done software-defined transformations, tuned in the right ways toward automotive requirements.

Spadoni: There are probably a wide variety of implementations. At Infineon, we partner with OEMs and Tier 1s, and we see different approaches. For example, General Motors has more of a modular approach that emulates what happened in the mobile phone space. Ford seems to have a more pragmatic approach, along with Stellantis, but all of them are facing very similar challenges in that affordability has become a big problem. There are multiple generations of implementations that are going to occur, and you’ll see OEMs striving to figure out how to pay for this extra hardware. It leads to tradeoffs in the implementations of other systems, which have to deliver savings in order to make these vehicles affordable. No one ever goes into a dealership and says, ‘Give me a software-defined vehicle.’ Everyone’s looking for value, and you can see it now with volumes going down. There’s a saturation of people buying at the high end. The OEMs want more sales, which means they’ll have to move to lower-cost, higher-value vehicles, and that’s going to affect the electrical and electronic architectures and the software-defined vehicle.

Clocher: What we’re seeing I would summarize as the impact on the ecosystem. We’re moving to an OEM-centric ecosystem. One size does not fit all, meaning OEMs will have their different tastes and their different definitions of the levels of integration they want in their software-defined vehicle – especially given the more complex tasks we all have to tackle and the challenges we have to solve, because we’re not talking about one common umbrella of software-defined vehicle. It really does mean different implementations and different meanings for OEM A versus OEM B. I would fully agree with David and Steve that we are far from having a common understanding of, at least, the market itself. And that’s fine, because this will bring differentiation, and ultimately that’s why a customer will go to Dealership A versus Dealership B. This is what the industry wants to see — continue to differentiate, continue to add value to the ultimate product, which is the car.

Serughetti: The important point in all this is, of course, that you’re breaking the model that exists today. That’s one of the big challenges. We used to have Tier 1s that were building boxes and delivering software. This was a complete black box. When it went to integration, there were all sorts of problems. And now you’re going to break this? The challenge for the OEM is how to do this. They want to control software, but are they equipped to do that today? We see the problems some of the legacy OEMs have in setting up their software organizations – the challenges of CARIAD and all such organizations that are trying to do this. It’s not easy to change those companies. Of course, the new entrants don’t have this problem, because they are starting from a brand new design versus dealing with legacy. So for the OEM, it’s about how to take control of the software. What does that mean in terms of processes, agile development, digital twins, and all of these technologies everybody’s talking about? The other side is, ‘It’s all nice, this software,’ but this software runs on hardware from all the companies delivering it, and that becomes essential. You can have the best software, but if your hardware is not there to support performance, power, and all of those aspects, you’re not going to be successful. So the ecosystem is evolving in how hardware, software, and all of this come together. The OEM wants to be the central point. That’s what we’re talking about in terms of the process and methodology aspects that are making this transition evolve.

Gajendra: Where are we in this journey? How far have we come? And where are we going? Going back to David’s earlier point about the supply chain evolving and being turned upside down: five years ago, if we sat here on this sort of panel and discussed software-defined vehicles, the conversation would have been entirely different. It would have been stuck with the traditional supply chain that we’ve seen for the last 35 or 40 years in the automotive industry. There are fundamentally two aspects here. The supply chain is evolving, and the infrastructure that we, as a community — this team, for example, and many others in the community — are trying to enable is going to be key to making our EDA partners happy. The use of virtual platforms in the cloud to shift left and develop and validate some of these technologies and software wasn’t even there five years ago, so we’ve come a long way. We’ve made a lot of progress together as an industry. Yes, we have a long way to go until we have a truly software-defined vehicle that we can go and ask for in the dealership. But the changes we are seeing – all sorts of technology providers making sure that the technology we eventually will have in hardware is also provided in some virtual form, be it fast models or whatever it is in the cloud – are a big change for the vast majority of the software ecosystem in automotive. I was at Embedded World, and the number of virtual platforms and demos that people were showing – silicon partners like we have here, Intel, Renesas, Infineon, and EDA companies – pointed to a strong movement of, ‘Let’s build the infrastructure that we can build, and then provide that infrastructure to the OEMs to take it from there.’ There is a lot of work going on. Together we will make the infrastructure across the board, be it virtual platforms or others, richer and more capable.

Alpert: For sure, OEMs have to control their own destiny. In the past, they would do it by differentiating maybe because they had better engine performance, or some other feature. But going forward, the differentiation is going to be their software. Whoever can make software that will provide additional value, and brand it, that’s going to be the differentiator and that’s the trend. In terms of how you get there, a shared ecosystem is important. SOAFEE is a potential way that, together with virtual platforms, you can provide a shared ecosystem for development, but still allow everyone to differentiate and plug-and-play. That’s one reason we’re working closely with Arm on trying to have a reference design specifically for this purpose. But again, we’re not saying, ‘This is the design you use. This is how you do it.’ That’s not it. The point is, let’s start somewhere, and then people can start swapping out pieces and doing different things. As long as OEMs can plug-and-play, then they can still differentiate. But they don’t have to invent everything themselves, which would be too costly.

Related Reading
Software-Defined Vehicles Ready To Roll
New approach could have big effects on cost, safety, security, and time to market.

The post Software-Defined Vehicle Momentum Grows appeared first on Semiconductor Engineering.

Enhancing HMI Security: How To Protect ICS Environments From Cyber Threats

By Jim Montgomery | May 9, 2024, 09:05

HMIs (Human Machine Interfaces) can be broadly defined as just about anything that allows humans to interface with their machines, and so are found throughout the technical world. In OT environments, operators use various HMIs to interact with industrial control systems in order to direct and monitor the operational systems. And wherever humans and machines intersect, security problems can ensue.

Protecting HMI in cybersecurity plans, particularly in OT/ICS environments, can be a challenge, as HMIs offer a variety of vulnerabilities that threat actors can exploit to achieve any number of goals, from extortion to sabotage.

Consider the sorts of OT environments where HMIs are found: water and power utilities, manufacturing facilities, chemical production, oil and gas infrastructure, smart buildings, hospitals, and more. The HMIs in these environments offer bad actors a range of attack vectors through which they can enter and begin to wreak havoc, whether financial, physical, or both.

What’s the relationship between HMI and SCADA?

SCADA (supervisory control and data acquisition) systems are used to acquire and analyze data and control industrial systems. Because of the role SCADA plays in these settings — generally overseeing the control of hugely complex, expensive, and dangerous-if-misused industrial equipment, processes, and facilities — they are extremely attractive to threat actors.

Unfortunately, the HMIs that operators use to interface with these systems may contain a number of vulnerabilities that are among the most highly exploitable and frequently breached vectors for attacks against SCADA systems.

Once an attacker gains access, they can seize from operators the ability to control the system. They can cause machinery to malfunction and suffer irreparable damage; they can taint products, steal information, and extort ransom. Even beyond ransom demands, the cost of production stoppages, lost sales, equipment replacement, and reputational damage can swallow some companies and create shortages in the market. Attacks can also cause equipment to perform in ways that threaten human life and safety.

Three types of HMIs in ICS that are vulnerable to attack

HMI security has to account for a range of “vulnerability options” available for exploitation by bad actors, such as keyboards, touch screens, and tablets, as well as more sophisticated interface points. Among the more frequently attacked are the Graphical User Interface and mobile and remote access.

Graphical User Interface

Attackers can use the graphical user interface (GUI) to gain complete access to the system and manipulate it at will. They often gain access by exploiting misconfigured access controls, or the bugs and other vulnerabilities that exist in much software, including GUI software. If the system is web- or network-connected, their work is easier, especially if introducing malware is a goal. Once in, they can also move laterally, exploring or compromising interconnected systems and widening the attack.

Mobile and remote access

Even before COVID-19, mobile and remote access techniques were already being incorporated into managing a growing number of OT networks. When the pandemic hit hard, remote access often became a necessity. As the crisis faded, however, mobile and remote access became even more entrenched.

Remote access points are especially vulnerable. For one, remote access software can contain its own security vulnerabilities, like unpatched flaws and bugs or misconfigurations. Attackers may find openings in VPNs (virtual private networks) or RDP (remote desktop protocol) and use these holes to slip past security measures and carry out their mission.

Access controls

Attackers can compromise access control mechanisms to acquire the same permissions and privileges as authorized users, and once they gain access, they can do pretty much anything they want regarding system operations and data access. Access can be gained in many of the usual ways, such as through an outdated VPN or stolen or purchased credentials. (Stolen credentials are readily available through online markets.)

The initial attack may just be a toe in the network while reconnaissance for holes in the access control system is conducted. Weak passwords, unnecessary access rights, and the usual misconfigurations and software vulnerabilities are all an attacker needs. As further walls are breached, attackers can then escalate their level of privilege to do whatever a legitimate user can do.

Understanding attack techniques in ICS HMI cybersecurity

Code injection

When attackers insert or inject malicious code into a software program or system, that’s code injection, and it can give the attacker access to core system functions. The resulting mayhem can include manipulation of control software, leading to shutdowns, equipment damage, and dangerous, even life-threatening situations if system changes result in hazardous chemical releases, changed formulas, explosions, or the misbehavior of large, heavy machinery. Code injections can corrupt, delete, or steal data and may result in compliance failure and fines in certain situations.

Malware virus infection

Malware can enter a network through various access points in addition to HMIs, even ones no one would ever expect, such as manufacturer-provided software updates or factory-fresh physical assets added to the production environment. A technician connecting a laptop or an employee plugging in a flash drive without knowing it’s infected will work just as well. As the walls between IT and OT thin, that attack surface widens as well. Once in the network, the attacker can escalate privileges, look around a bit, and see what’s worth doing or stealing. When enough has been learned, the attacker executes the malicious code, which can include ransomware or spyware. As in other attacks, operations can be interfered with, sometimes dangerously so.

Data tampering

Data tampering simply means that data is altered without authorization, including data used to operate, control, and monitor industrial systems. Attackers gain access through vulnerabilities in the system software or HMI devices or through passageways between IT and OT. Once in, they can explore the system to give themselves even greater access to more sensitive areas, where they can steal valuable and confidential system data, interrupt operations, compromise equipment, and damage the company’s business interests and competitive advantage.

Memory corruption

Memory corruption can happen in any computer network and may not represent anything nefarious. Yet memory corruption has also been used as an attack technique that can be deployed against OT networks and is thus potentially extremely damaging since data controls machinery, processes, formulas, and other essential functions. Attackers find software vulnerabilities in HMI or other access points through which the memory of an application or system can be reached and corrupted. This can lead to crashes, data leakage, denial of services (DoS), and even attacker takeovers of ICS and SCADA systems.

Spear phishing

Spear phishing attacks are generally launched against IT networks, which can then be used to open a corridor to the OT network. Spear phishing is basically a more targeted version of phishing attacks, in which an attacker will impersonate a legitimate, trusted source via email or web page, for example. In 2014, attackers targeted a German steel mill with an email suspected of carrying malicious code. They then used access to the business network to get to the SCADA/ICS network, where they modified the PLCs (programmable logic controllers) and took over the furnace’s operations. The physical damage they inflicted forced the plant to shut down.

DoS and DDoS attacks

Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks work by overwhelming HMI points with excessive traffic or requests so they are unable to handle authorized control and monitoring functions. In 2016, some particularly vicious malware dubbed Industroyer (also Crashoverride) was deployed in an attack against Ukraine’s power grid and blacked out a substantial section of Kyiv. Industroyer was developed specifically to attack ICS and SCADA systems. The multipronged attack began by exploiting vulnerabilities in digital substation relays. A timer regulating the attack executed a distributed denial-of-service (DDoS) attack on every protection relay on the network that used any of four specific communication protocols. Simultaneously, it deleted all MicroSCADA-related files from the workstations’ hard drives. As the relays stopped functioning, lights went out across the city.

Exploiting remote access

The growing use of remote access to HMI systems during and after COVID-19 has provided threat actors with a wealth of newly available attack vectors. Less-than-airtight remote access security protocols make them very enticing targets for ICS-specific malware. HAVEX malware, for example, uses a remote access trojan (RAT) downloaded from compromised OT vendor websites. The RAT can then scan for devices on the ports commonly used by OT assets, collect information, and send it back to the attacker’s command and control server. A long-term attack used just such a method to gain remote access to energy networks in the U.S. and internationally, during which data thieves collected and “exfiltrated” (stole) enterprise and ICS-related data.

Credential theft

Obtaining unauthorized credentials is not all that difficult these days, with a robust online marketplace making it easier than ever. Phishing and spear phishing, malware, weak passwords, and vulnerabilities or misconfigurations that expose unencrypted credentials are all sources. With credentials in hand, attackers can move past security measures, including MFA (multifactor authentication), conduct reconnaissance, and give themselves whatever level of privilege they need to complete their mission. Or they simply persist and observe, learning all they can before finally acting against the ICS or SCADA system.

Zero-day attacks

Zero-day attacks got their name because they’re generally carried out against a previously existing yet unknown vulnerability; the vendor has zero days to fix it because the attack is already underway. Vulnerabilities that are completely unknown to either the software developer or the cybersecurity community exist throughout the software world, including in OT networks and their HMIs. Unsuspected and thus unpatched, they give fast-moving threat actors the opportunity to carry out a zero-day attack without resistance. The 2010 Stuxnet attack against Iran’s nuclear program used zero-day vulnerabilities in Windows to access the network and spread, eventually destroying the centrifuges. One thousand machines sustained physical damage.

Best practices for enhancing HMI security

Network segmentation for isolation

Network segmentation should be a core defense in securing industrial networks. Segmentation creates an environment that’s naturally resistant to intruders. Many of the attack techniques described above give attackers the ability to move laterally through the network. Segmenting the network prevents this lateral movement, limiting the attack radius and potential for damage. As OT networks become more connected to the world and the line between IT and OT continues to blur, network segmentation can segregate HMI systems from other parts of the network and the outside world. It can also segment defined zones within the OT network from each other so attacks can be contained.
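
The sketch below illustrates the core idea in a few lines of Python: a default-deny, zone-pair policy in which traffic is permitted only when an explicit rule allows it, so lateral movement is blocked by default. Zone names, address prefixes, and rules are hypothetical.

```python
# Toy default-deny segmentation policy keyed on (source zone, destination zone).
ZONES = {"10.1.": "it_business", "10.2.": "dmz",
         "10.3.": "ot_supervisory", "10.4.": "ot_control"}

ALLOWED = {
    ("it_business", "dmz"),
    ("ot_supervisory", "ot_control"),  # HMI/SCADA zone -> controllers only
}

def zone_of(ip: str) -> str:
    return ZONES[ip[:5]]

def permitted(src_ip: str, dst_ip: str) -> bool:
    return (zone_of(src_ip), zone_of(dst_ip)) in ALLOWED

print(permitted("10.3.0.7", "10.4.0.2"))  # True: HMI workstation to controller
print(permitted("10.1.0.9", "10.4.0.2"))  # False: business PC to controller
```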

Software and firmware updates

Software and firmware updates are recommended in all cybersecurity situations, but installing patches and updates in OT networks is easier said than done. OT networks prioritize continuous operations. There are compatibility issues, unpatchable legacy systems, and other roadblocks. The solution is virtual patching. Virtual patching is achieved by identifying all vulnerabilities within an OT network and applying a security mechanism such as a physical IPS (intrusion prevention system) or firewall. Rules are created, traffic is inspected and filtered, and attacks can be blocked and investigated.
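
Conceptually, virtual patching amounts to an inline rule that drops traffic matching a known exploit before it ever reaches the unpatchable asset, as in the toy sketch below; the signature bytes and packet shape are invented for illustration.

```python
# Toy virtual-patching filter: block traffic matching a known exploit signature.
RULES = [
    {"dst_port": 502, "pattern": b"\x00\x5a\xff"},  # hypothetical Modbus exploit
]

def inspect(packet: dict) -> str:
    for rule in RULES:
        if (packet["dst_port"] == rule["dst_port"]
                and rule["pattern"] in packet["payload"]):
            return "BLOCK"  # a real IPS would also log and alert
    return "ALLOW"

print(inspect({"dst_port": 502, "payload": b"\x01\x00\x5a\xff\x10"}))  # BLOCK
print(inspect({"dst_port": 502, "payload": b"\x01\x02\x03"}))          # ALLOW
```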

Employee training on cybersecurity awareness

The more employees know about network operations, vulnerabilities, and cyberattack methods, the more they can do to help protect the network. Since few organizations have the internal staff to provide the necessary training, third-party training partners can be a viable solution. In any event, all employees should be trained in a company’s written policies, the general threat landscape, security best practices, how to handle physical assets like flash drives or laptops, how to recognize an attack, and what the company’s response protocol is. Specific training should be provided for employees who work remotely.

The evolving HMI security threat landscape

Concrete predictions about future threats and responses are hard to make, but the HMI security threat landscape will most likely evolve much the same way the entire security landscape will, with one major addition.

Air-gapped environments are going away

For a long time, many OT networks were air-gapped off from the world, physically and digitally isolated from the risks of contamination. Data and malware transfer alike required physical media, but inconvenience was safety. As OT networks continue to merge with the connected world, that kind of protection is going away. Remote work is becoming more prevalent, and the very connected IoT (Internet of Things) is now all over the automated factory floor. If wireless access points are left hanging from equipment, no one gives it a thought, except threat actors looking for a way in. (This is where basic employee training might help.)

Threat actors are innovators

Threat actors are becoming increasingly sophisticated. They devote much more time and thought to innovative ways to penetrate HMI and other OT network points than the people who operate them. AI and machine learning techniques are further empowering bad actors.

The statistics bear this out, especially as IT and OT networks continue to converge. In a study of 2023 OT/ICS cybersecurity activities, 76% of organizations were moving toward converged networks, and 97% reported IT security incidents that also affected OT environments. Nearly half (47%) of businesses reported OT/ICS ransomware attacks, and 76% had significant concerns about state-sponsored actors.

On the positive side, however, pressure from regulators, insurance companies, and boards of directors is pushing organizations to think and act on cybersecurity for HMI points and throughout the network far more aggressively than many currently do. According to the study, 68% of organizations were increasing their budgets, 38% had dedicated OT security teams, and 77% had achieved a level-3 maturity in OT/ICS security.

Complete OT security

Cybersecurity in industrial environments presents challenges far different than those in IT networks. TXOne specializes in OT cybersecurity, with OT-native solutions designed for the equipment, environment, and day-to-day realities of industrial settings.

The post Enhancing HMI Security: How To Protect ICS Environments From Cyber Threats appeared first on Semiconductor Engineering.

Earning Digital Trust

By Nathalie Bijnens | May 9, 2024, 09:04

The internet of things (IoT) has been growing at a fast pace. In 2023, there were already twice as many internet-connected devices – 16 billion – as people on the planet. However, many of these devices are not properly secured. The high volume of insecure devices being deployed is presenting hackers with more opportunities than ever before. Governments around the world are realizing that additional security standards for IoT devices are needed to address the growing and important role of the billions of connected devices we rely on every day. The EU Cyber Resilience Act and the IoT Cybersecurity Improvement Act in the United States are driving improved security practices as well as an increased sense of urgency.

Digital trust is critical for the continued success of the IoT. This means that security, privacy, and reliability are becoming top concerns. IoT devices are always connected and can be deployed in any environment, which means that they can be attacked via the internet as well as physically in the field. Whether it is a remote attacker getting access to a baby monitor or camera inside your house, or someone physically tampering with sensors that are part of a critical infrastructure, IoT devices need to have proper security in place.

This is even more salient when one considers that each IoT device is part of a multi-party supply chain and is used in systems that contain many other devices. All these devices need to be trusted and communicate in a secure way to maintain the privacy of their data. It is critical to ensure that there are no backdoors left open by any link in the supply chain, or when devices are updated in the field. Any weak link exposes more than just the device in question to security breaches; it exposes its entire system – and the IoT itself – to attacks.

A foundation of trust starts in the hardware

To secure the IoT, each piece of silicon in the supply chain needs to be trusted. The best way to achieve this is by using a hardware-based root of trust (RoT) for every device. An RoT is typically defined as “the set of implicitly trusted functions that the rest of the system or device can use to ensure security.” The core of an RoT consists of an identity and cryptographic keys rooted in the hardware of a device. This establishes a unique, immutable, and unclonable identity to authenticate a device in the IoT network. It establishes the anchor point for the chain of trust, and powers critical system security use cases over the entire lifecycle of a device.
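
As a simplified illustration of how such an anchor powers a chain of trust, the sketch below has each boot stage authenticate the next image against a key rooted in hardware before handing off control. A production flow would typically verify asymmetric signatures; the HMAC and placeholder key here are assumptions that just keep the example self-contained.

```python
# Chain-of-trust sketch: authenticate each boot image with a hardware-rooted key.
import hmac, hashlib

HW_ROOT_KEY = b"\x00" * 32  # placeholder for a device-unique RoT secret

def tag(image: bytes) -> bytes:
    return hmac.new(HW_ROOT_KEY, image, hashlib.sha256).digest()

def verify_and_boot(stages):
    for image, expected in stages:
        if not hmac.compare_digest(tag(image), expected):
            raise RuntimeError("boot halted: image failed authentication")
        # ...execute the authenticated image, which verifies the next stage...

bootloader = b"bootloader-v2"
verify_and_boot([(bootloader, tag(bootloader))])  # proceeds only if tags match
```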

Protecting every device on the IoT with a hardware-based RoT can appear to be an unreachable goal. There are so many types of systems and devices and so many different semiconductor and device manufacturers, each with their own complex supply chain. Many of these chips and devices are high-volume/low-cost and therefore have strict constraints on additional manufacturing or supply chain costs for security. The PSA Certified 2023 Security Report indicates that 72% of tech decision makers are interested in the development of an industry-led set of guidelines to make reaching the goal of a secure IoT more attainable.

Security frameworks and certifications speed up the process and build confidence

One important industry-led effort in standardizing IoT security that has been widely adopted is PSA Certified. PSA stands for Platform Security Architecture, and PSA Certified is a global partnership addressing security challenges and uniting the technology ecosystem under a common security baseline, providing an easy-to-consume and comprehensive methodology for the lab-validated assurance of device security. PSA Certified has been adopted across the full supply chain, from silicon providers, software vendors, original equipment manufacturers (OEMs), and IP providers to governments, content service providers (CSPs), insurance vendors, and other third-party schemes. PSA Certified was the winner of the IoT Global Awards “Ecosystem of the year” in 2021.

PSA Certified lab-based evaluations (PSA Certified Level 2 and above) have a choice of evaluation methodologies, including the rigorous SESIP-based methodology (Security Evaluation Standard for IoT Platforms from GlobalPlatform), an optimized security evaluation methodology, designed for connected devices. PSA Certified recognizes that a myriad of different regulations and certification frameworks create an added layer of complexity for the silicon providers, OEMs, software vendors, developers, and service providers tasked with demonstrating the security capability of their products. The goal of the program is to provide a flexible and efficient security evaluation method needed to address the unique complexities and challenges of the evolving digital ecosystem and to drive consistency across device certification schemes to bring greater trust.

The PSA Certified framework recognizes the importance of a hardware RoT for every connected device. It currently provides incremental levels of certified assurance, ranging from a baseline Level 1 (application of best-practice security principles) to a more advanced Level 3 (validated protection against substantial hardware and software attacks).

PSA Certified RoT component

Among the certifications available, PSA Certified offers a PSA Certified RoT Component certification program, which targets separate RoT IP components, such as physical unclonable functions (PUFs), which use unclonable properties of silicon to create a robust trust (or security) anchor. As shown in figure 1, the PSA-RoT Certification includes three levels of security testing. These component-level certifications from PSA Certified validate specific security functional requirements (SFRs) provided by an RoT component and enable their reuse in a fast-track evaluation of a system integration using this component.

Fig. 1: PSA Certified establishes a chain of trust that begins with a PSA-RoT.

A proven RoT IP solution, now PSA Certified

Synopsys PUF IP is a secure key generation and storage solution that enables device manufacturers and designers to secure their products with internally generated unclonable identities and device-unique cryptographic keys. It uses the inherently random start-up values of SRAM as a physical unclonable function (PUF), which generates the entropy required for a strong hardware root of trust.

This root key created by Synopsys PUF IP is never stored, but rather recreated from the PUF upon each use, so there is never a key to be discovered by attackers. The root key is the basis for key management capabilities that enable each member of the supply chain to create its own secret keys, bound to the specific device, to protect their IP/communications without revealing these keys to any other member of the supply chain.
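
For intuition, here is a toy “fuzzy extractor” in Python showing how a key can be re-derived from a noisy SRAM power-up pattern plus non-sensitive helper data. The repetition code below is far weaker than the error correction used in real products and is only meant to illustrate the mechanism.

```python
# Toy fuzzy extractor: enroll once, then re-derive the key from noisy PUF bits.
import hashlib, secrets

REP = 5  # each secret bit is spread across 5 PUF bits

def enroll(puf_bits):
    secret = [secrets.randbelow(2) for _ in range(len(puf_bits) // REP)]
    helper = [puf_bits[i] ^ secret[i // REP] for i in range(len(secret) * REP)]
    key = hashlib.sha256(bytes(secret)).digest()
    return helper, key        # helper data is public; the key is never stored

def reconstruct(noisy_bits, helper):
    votes = [noisy_bits[i] ^ helper[i] for i in range(len(helper))]
    secret = [int(sum(votes[i * REP:(i + 1) * REP]) > REP // 2)
              for i in range(len(helper) // REP)]
    return hashlib.sha256(bytes(secret)).digest()

puf = [secrets.randbelow(2) for _ in range(40)]   # 40-bit toy power-up pattern
helper, key = enroll(puf)
noisy = puf.copy(); noisy[3] ^= 1                 # one bit flips on this power-up
assert reconstruct(noisy, helper) == key          # the same key is re-derived
```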

Synopsys PUF IP offers robust PUF-based physical security, with the following properties:

  • No secrets/keys at rest (no secrets stored in any memory)
    • prevents any attack on an unpowered device
    • keys are only present when used, limiting the window of opportunity for attacks
  • Hardware entropy source/root of trust
    • no dependence on third parties (no key injection from outside)
    • no dependence on security of external components or other internal modules
    • no dependence on software-based security
  • Technology-independent, fully digital standard-logic CMOS IP
    • all fabs and technology nodes
    • small footprint
    • re-use in new platforms/deployments
  • Built-in error resilience due to advanced error correction

The Synopsys PUF technology has been field-proven over more than a decade of deployment on over 750 million chips. And now, the Synopsys PUF has achieved the milestone of becoming the world’s first IP solution to be awarded “PSA Certified Level 3 RoT Component.” This certifies that the IP includes substantial protection against both software and hardware attacks (including side-channel and fault injection attacks) and is qualified as a trusted component in a system that requires PSA Level 3 certification.

Fault detection and other countermeasures

In addition to its PUF-related protection against physical attacks, all Synopsys PUF IP products have several built-in physical countermeasures. These include systemic security features (such as data format validation, data authentication, key use restrictions, built-in self-tests (BIST), and health checks) as well as more specific countermeasures (such as data masking and dummy cycles) that protect against specific attacks.

The PSA Certified Synopsys PUF IP goes one step further. It validates all inputs through integrity checks and error detection. It continuously asserts that everything runs as intended, flags any observed faults, and ensures security. Additionally, the PSA Certified Synopsys PUF IP provides hardware and software handholds that assist the user in checking that all data is correctly transferred into and out of the PUF IP. The Synopsys PUF IP driver also supports fault detection and reporting.

Advantages of PUFs over traditional key injection and storage methods

For end-product developers, PUF IP has many advantages over traditional approaches for key management. These traditional approaches typically require key injection (provisioning secret keys into a device) and some form of non-volatile memory (NVM), such as embedded Flash memory or one-time programmable storage (OTP), where the programmed key is stored and where it needs to be protected from being extracted, overwritten, or changed. Unlike these traditional key injection solutions, Synopsys PUF IP does not require sensitive key handling by third parties, since PUF-based keys are created within the device itself. In addition, Synopsys PUF IP offers more flexibility than traditional solutions, as a virtually unlimited number of PUF-based keys can be created. And keys protected by the PUF can be added at any time in the lifecycle rather than only during manufacturing.

In terms of key storage, Synopsys PUF IP offers higher protection against physical attacks than storing keys in some form of NVM. PUF-based root keys are not stored on the device, but they are reconstructed upon each use, so there is nothing for attackers to find on the chip. Instead of storing keys in NVM, Synopsys PUF IP stores only (non-sensitive) helper data and encrypted keys in NVM on- or off-chip. The traditional approach of storing keys on the device in NVM is more vulnerable to physical attacks.

Finally, Synopsys PUF IP provides more portability. Since it is based on standard SRAM memory cells, it offers a process- and fab-agnostic solution for key storage that scales to the most advanced technology nodes.

Conclusion

The large and steady increase in devices connected to the IoT also increases the need for digital trust and privacy. This requires flexible and efficient IoT security solutions that are standardized to streamline implementation and certification across the multiple players involved in the creation and deployment of IoT devices. The PSA Certified framework offers an easy-to-consume and comprehensive methodology for the lab-validated assurance of device security.

Synopsys PUF IP, which has been deployed in over 750 million chips, is the first-ever IP solution to be awarded “PSA Certified Level 3 RoT Component.” This certifies that the IP includes substantial protection against hardware and software attacks. Synopsys PUF IP offers IoT device makers a robust PUF-based security anchor with trusted industry-standard certification and offers the perfect balance between strong security, high flexibility, and low cost.

The post Earning Digital Trust appeared first on Semiconductor Engineering.

Optimize Power For RF/μW Hybrid And Digital Phased Arrays

By Kimia Azad | May 9, 2024, 09:03

Field-programmable gate arrays (FPGAs) are a critical component of both digital and hybrid phased array technology. Powering FPGAs for aerospace and defense applications comes with its own set of challenges, especially because these applications require higher reliability than many industrial or consumer technologies.

This blog post provides a brief history of beamforming and beam-steering technologies for defense applications, a guide to powering defense and space FPGAs, and a reflection on the future of A&D communications systems.

Background

Active electronically scanned array (AESA) and phased array systems are more affordable and available than ever, notably in fully digital and hybrid configurations. These systems can cover a wide RF/μW frequency spectrum and are suitable for use in RADAR and other military communications systems. AESA and phased array systems also boast advanced beamforming and beam-steering capabilities.

The emergence of 5G communications and commercial space data communications, coupled with advancements in semiconductor efficiency, has enabled companies to deploy innovative phased-array designs. However, roadblocks to efficiently powering these systems, given the watts consumed and dissipated plus size and geometry constraints, can complicate development for designers.

Recent developments

Despite digital phased array technologies providing improved performance across a large part of the RF/μW frequency spectrum, their deployment is hindered by various factors. Cost barriers, power consumption, thermal constraints, latency concerns, and efficiency losses in the amplification and gain stages are some key hurdles.

Hybrid phased array is an ideal solution for meeting system-level requirements without compromising on cost or power losses. This technology allows for lower power consumption and reduced thermal concerns, paving the way for cost-saving solutions. The “digitizers” of a hybrid phased array, such as FPGAs, are fewer in number and sit farther from the antenna.

Powering FPGAs

FPGAs are valuable for their ability to perform extremely fast calculations in support of signal isolation, fast Fourier transforms (FFTs), I/Q data extraction, and the functions that form and steer radio-frequency beams. However, a key challenge in working with FPGAs lies in the necessity for consistent, sequenced power delivery.
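
As a small illustration of the beam-steering math such FPGA logic implements, the sketch below computes the progressive phase weights for a uniform linear array, assuming half-wavelength element spacing; this is textbook math, not any vendor’s firmware.

```python
# Beam-steering weights for a uniform linear array (d = lambda/2 assumed).
import numpy as np

def steering_weights(n_elems: int, theta_deg: float) -> np.ndarray:
    theta = np.radians(theta_deg)
    n = np.arange(n_elems)
    # progressive phase shift of pi*sin(theta) per element for lambda/2 spacing
    return np.exp(-1j * np.pi * n * np.sin(theta))

w = steering_weights(16, 30.0)
# multiplying per-element samples by w (then summing) forms a beam toward +30 deg
```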

Conclusion and future considerations

The complexities and challenges associated with deploying digital assets should not overshadow industry advancements in powering FPGAs within phased array systems. With a focused approach, thermal management challenges can be mitigated and performance optimized. Existing power solutions provide a strong and mature foundation for the ongoing development and deployment of next-generation phased array systems within military and space applications. For a full guide on FPGA power considerations in aerospace and defense applications, see our feature in Military Embedded Systems magazine.

The post Optimize Power For RF/μW Hybrid And Digital Phased Arrays appeared first on Semiconductor Engineering.

UCIe And Automotive Electronics: Pioneering The Chiplet Revolution

By Vinod Khera | May 9, 2024, 09:02

The automotive industry stands at the brink of a profound transformation fueled by the relentless march of technological innovation. Gone are the days of the traditional, one-size-fits-all system-on-chip (SoC) design framework. Today, we are witnessing a paradigm shift towards a more modular approach that utilizes diverse chiplets, each optimized for specific functionalities. This evolution promises to enhance the automotive system’s flexibility and efficiency and revolutionize how vehicles are designed, built, and operated.

At the heart of this transformation lies the Universal Chiplet Interconnect Express (UCIe), a groundbreaking standard introduced in March 2022. UCIe is designed to drastically simplify the integration process across different chiplets from various manufacturers by standardizing die-to-die connections. This initiative caters to a critical need within the industry for a modular and scalable semiconductor architecture, thus setting the stage for unparalleled innovation in automotive electronics.

Understanding the core benefit of UCIe

UCIe isn’t merely about facilitating smoother communication between chiplets; it’s a visionary standard that ensures interoperability, reduces design complexity and costs, and, crucially, supports the seamless incorporation of chiplets into cohesive packages. Developed through the collaborative efforts of the UCIe consortium, the standard enables the use of chiplets from different vendors, fostering a highly customizable and scalable solution. This particularly benefits the automotive sector, where the demand for high performance, reliability, and efficiency is paramount.

The pivotal role of UCIe in automotive electronics

Implementing UCIe within automotive electronics unlocks a plethora of advantages. Foremost among these is the ability to design more compact, powerful, and energy-efficient electronic systems. UCIe offers savings in energy per bit compared to other serial or parallel interfaces.

Given the increasing complexity and functionality of modern vehicles — which can be likened to data centers on wheels — the modular design principle of UCIe is invaluable. It facilitates the addition of new functionalities and ensures that automotive electronics can seamlessly adapt to and incorporate emerging technologies and standards.

Secondly, the adoption of UCIe marks a significant stride toward sustainable electronic design within the automotive industry. By promoting the reuse and integration of chiplets across different platforms and vehicle models, UCIe significantly mitigates electronic waste and streamlines the lifecycle management of electronic components. This benefits manufacturers regarding cost and efficiency and aligns with broader industry trends focused on sustainability and environmental stewardship.

Offerings and solutions

Cadence offers a range of intellectual property (IP), electronic design automation (EDA) tools, and 3D integrated circuit (3D-IC) solutions tailored for chiplets.

Since the introduction of UCIe, Cadence has engaged with more than 100 customers and enables automotive solutions through UCIe and its working groups. The automotive implementation features protect the mainband, and Cadence adds protection for the sideband as well, which is particularly important for the automotive industry. UCIe is being developed across multiple foundries, process nodes, and both standard and advanced packaging enablement types.

Cadence has a long history of chiplet experience with an interface IP portfolio that includes chiplet die-to-die interconnects such as the UCIe IP and the proprietary 40G UltraLink D2D PHY, enabling SoC providers to deliver more customized solutions that offer higher performance and yields while shortening development cycles and reducing costs through greater IP reuse. Cadence’s UCIe PHY solutions facilitate high-speed communication and interoperability among chiplets. Additionally, the Cadence Simulation VIP for UCIe ensures that systems using UCIe are reliable and performant, addressing one of the industry’s key concerns.

The Cadence UCIe PHY is a high-bandwidth, low-power, and low-latency die-to-die solution that enables multi-die system-in-package integration for high-performance compute, AI/ML, 5G, automotive, and networking applications. The UCIe physical layer includes link initialization, training, power management states, lane mapping, lane reversal, and scrambling. The UCIe controller includes the die-to-die adapter layer and the protocol layer. The adapter layer ensures reliable transfer through link state management and the negotiation of protocol and flit-format parameters. The UCIe architecture supports multiple standard protocols, such as PCIe, CXL, and streaming raw mode.
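
As a back-of-envelope illustration of why these links matter, the snippet below computes raw die-to-die bandwidth from lane count and per-lane transfer rate. The x16 module and 32 GT/s figures are illustrative assumptions, not quotes from the specification.

```python
# Raw bandwidth of a die-to-die link: lanes x transfer rate (1 bit per transfer).
def link_bandwidth_gbps(lanes: int, gt_per_s: float) -> float:
    return lanes * gt_per_s

# Illustrative assumption: a x16 module running at 32 GT/s per lane.
print(link_bandwidth_gbps(16, 32.0), "Gb/s")  # -> 512.0 Gb/s raw throughput
```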

For interface IP, we usually create various test chips using different process technologies and measure the silicon in the lab to prove that our controller and PHY IPs fully comply with the standard. Hence, we designed a test chip with seven chiplets connected via UCIe over different interconnect distances.

The Cadence Verification IP (VIP) for Universal Chiplet Interconnect Express (UCIe) is designed for easy integration into test benches at the IP, system-on-chip (SoC), and system level. The VIP for UCIe runs on all simulators and supports SystemVerilog and the widely adopted Universal Verification Methodology (UVM). This enables verification teams to reduce the time spent on environment development and redirect it to covering a larger verification space, accelerating verification closure, and ensuring end-product quality. With a layered architecture and a powerful callback mechanism, verification engineers can verify UCIe features at each functional layer (PHY, D2D, protocol) and create highly targeted tests while using the latest design methodologies for random testing to cover a larger verification space. The VIP for UCIe can be used as a standalone stack or layered with PCIe VIP.

Challenges and future prospects

The transition to chiplet-based designs and the widespread adoption of the UCIe standard are not without their challenges. Critical concerns include functional safety, reliability, quality, security, thermal management, and mechanical stress. Ensuring dependable high-speed interconnects between chiplets therefore necessitates ongoing research and development. Successful implementation also hinges on industry-wide collaboration and the establishment of a robust chiplet ecosystem that supports the UCIe standard.

UCIe stands as a pioneering innovation poised to redefine automotive electronics. By offering a scalable and flexible framework, UCIe promises to enhance the performance, efficiency, and versatility of in-vehicle electronic systems, marking a significant milestone in the automotive industry’s evolution. As we look to the future, the impact of UCIe in the automotive sector promises to be profound, turning vehicles into dynamic platforms that continuously adapt to technological advancements and user needs. The realization of modular, customizable designs underscored by efficiency, performance, and innovation heralds the dawn of a new era in semiconductor development, one where the possibilities are as boundless as the technological horizon.

The post UCIe And Automotive Electronics: Pioneering The Chiplet Revolution appeared first on Semiconductor Engineering.
