
A Generic Approach For Fuzzing Arbitrary Hypervisors

A technical paper titled “HYPERPILL: Fuzzing for Hypervisor-bugs by Leveraging the Hardware Virtualization Interface” was presented at the August 2024 USENIX Security Symposium by researchers at EPFL, Boston University, and Zhejiang University.

Abstract:

“The security guarantees of cloud computing depend on the isolation guarantees of the underlying hypervisors. Prior works have presented effective methods for automatically identifying vulnerabilities in hypervisors. However, these approaches are limited in scope. For instance, their implementation is typically hypervisor-specific and limited by requirements for detailed grammars, access to source-code, and assumptions about hypervisor behaviors. In practice, complex closed-source and recent open-source hypervisors are often not suitable for off-the-shelf fuzzing techniques.

HYPERPILL introduces a generic approach for fuzzing arbitrary hypervisors. HYPERPILL leverages the insight that although hypervisor implementations are diverse, all hypervisors rely on the identical underlying hardware-virtualization interface to manage virtual-machines. To take advantage of the hardware-virtualization interface, HYPERPILL makes a snapshot of the hypervisor, inspects the snapshotted hardware state to enumerate the hypervisor’s input-spaces, and leverages feedback-guided snapshot-fuzzing within an emulated environment to identify vulnerabilities in arbitrary hypervisors. In our evaluation, we found that beyond being the first hypervisor-fuzzer capable of identifying vulnerabilities in arbitrary hypervisors across all major attack-surfaces (i.e., PIO/MMIO/Hypercalls/DMA), HYPERPILL also outperforms state-of-the-art approaches that rely on access to source-code, due to the granularity of feedback provided by HYPERPILL’s emulation-based approach. In terms of coverage, HYPERPILL outperformed past fuzzers for 10/12 QEMU devices, without the API hooking or source-code instrumentation techniques required by prior works. HYPERPILL identified 26 new bugs in recent versions of QEMU, Hyper-V, and macOS Virtualization Framework across four device-categories.”
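
To make the described loop concrete, here is a minimal Python sketch of feedback-guided snapshot fuzzing over enumerated input spaces. It illustrates the general technique only; every name here (restore_snapshot, inject, result.coverage) is a hypothetical stand-in, not HYPERPILL's actual interface.

```python
import random

# The four attack surfaces the abstract enumerates from the snapshotted
# hardware state.
INPUT_SPACES = ["pio", "mmio", "hypercall", "dma"]

corpus = [b"\x00"]       # seed inputs
global_coverage = set()  # feedback: branch edges observed in the emulator

def mutate(data: bytes) -> bytes:
    # Flip one random byte -- a stand-in for a real mutation engine.
    buf = bytearray(data or b"\x00")
    buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def report_bug(space, data, result):
    print(f"crash via {space} with input {data.hex()}")

def fuzz_iteration(emulator):
    # One iteration: restore the hypervisor snapshot, deliver one mutated
    # input through one interface, and keep inputs that add coverage.
    space = random.choice(INPUT_SPACES)
    candidate = mutate(random.choice(corpus))

    emulator.restore_snapshot()                 # reset to the snapshot
    result = emulator.inject(space, candidate)  # hypothetical injection API

    if result.crashed:
        report_bug(space, candidate, result)
    elif not set(result.coverage) <= global_coverage:
        global_coverage.update(result.coverage)  # new behavior: keep input
        corpus.append(candidate)
```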

Find the technical paper here. Published August 2024. Distinguished Paper Award Winner.

Bulekov, Alexander, Qiang Liu, Manuel Egele, and Mathias Payer. “HYPERPILL: Fuzzing for Hypervisor-bugs by Leveraging the Hardware Virtualization Interface.” In 33rd USENIX Security Symposium (USENIX Security 24). 2024.

Further Reading
SRAM Security Concerns Grow
Volatile memory threat increases as chips are disaggregated into chiplets, making it easier to isolate memory and slow data degradation.


Uncovering A Significant Residual Attack Surface For Cross-Privilege Spectre-V2 Attacks

A technical paper titled “InSpectre Gadget: Inspecting the Residual Attack Surface of Cross-privilege Spectre v2” was presented at the August 2024 USENIX Security Symposium by researchers at Vrije Universiteit Amsterdam.

Abstract:

“Spectre v2 is one of the most severe transient execution vulnerabilities, as it allows an unprivileged attacker to lure a privileged (e.g., kernel) victim into speculatively jumping to a chosen gadget, which then leaks data back to the attacker. Spectre v2 is hard to eradicate. Even on last-generation Intel CPUs, security hinges on the unavailability of exploitable gadgets. Nonetheless, with (i) deployed mitigations—eIBRS, no-eBPF, (Fine)IBT—all aimed at hindering many usable gadgets, (ii) existing exploits relying on now-privileged features (eBPF), and (iii) recent Linux kernel gadget analysis studies reporting no exploitable gadgets, the common belief is that there is no residual attack surface of practical concern.

In this paper, we challenge this belief and uncover a significant residual attack surface for cross-privilege Spectre-v2 attacks. To this end, we present InSpectre Gadget, a new gadget analysis tool for in-depth inspection of Spectre gadgets. Unlike existing tools, ours performs generic constraint analysis and models knowledge of advanced exploitation techniques to accurately reason over gadget exploitability in an automated fashion. We show that our tool can not only uncover new (unconventionally) exploitable gadgets in the Linux kernel, but that those gadgets are sufficient to bypass all deployed Intel mitigations. As a demonstration, we present the first native Spectre-v2 exploit against the Linux kernel on last-generation Intel CPUs, based on the recent BHI variant and able to leak arbitrary kernel memory at 3.5 kB/sec. We also present a number of gadgets and exploitation techniques to bypass the recent FineIBT mitigation, along with a case study on a 13th Gen Intel CPU that can leak kernel memory at 18 bytes/sec.”
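
To give a flavor of what automated gadget inspection must reason about, here is a toy taint walk in Python that flags the canonical dependent double load, where a value read through an attacker-controlled address feeds a second load's address and thereby encodes a secret into the cache. It is only a conceptual miniature; InSpectre Gadget itself performs generic constraint analysis over real kernel binaries, which this sketch does not attempt. The gadget and register roles below are invented.

```python
# Toy taint walk over a straight-line gadget. The classic Spectre transmission
# pattern is a dependent double load: the first load's address is attacker-
# controlled, and its result feeds a second load's address.

# Each instruction is (op, dst, src); ops: "mov", "add", "load" (dst = mem[src]).
gadget = [
    ("mov",  "rbx", "rdi"),   # rdi: assumed attacker-controlled at kernel entry
    ("add",  "rbx", "rax"),   # offset computation keeps rbx attacker-tainted
    ("load", "rcx", "rbx"),   # speculative read through attacker-chosen address
    ("load", "rdx", "rcx"),   # dependent load transmits the value via the cache
]

def find_transmission(instrs, attacker_regs):
    tainted = set(attacker_regs)  # registers the attacker can influence
    loaded = set()                # registers read through a tainted address
    for i, (op, dst, src) in enumerate(instrs):
        if op == "load":
            if src in loaded:
                return i          # dependent double load: leak candidate
            if src in tainted:
                loaded.add(dst)
                tainted.add(dst)
            else:
                tainted.discard(dst)
                loaded.discard(dst)
        else:  # "mov"/"add": propagate taint through data flow
            if src in tainted or (op == "add" and dst in tainted):
                tainted.add(dst)
            elif op == "mov":
                tainted.discard(dst)
    return None

print(find_transmission(gadget, {"rdi"}))  # -> 3, the transmitting load
```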

Find the technical paper here. Published August 2024. Distinguished Paper Award Winner. Find additional information here on VU Amsterdam’s site.

Wiebing, Sander, Alvise de Faveri Tron, Herbert Bos, and Cristiano Giuffrida. “InSpectre Gadget: Inspecting the residual attack surface of cross-privilege Spectre v2.” In USENIX Security. 2024.

Further Reading
Defining Chip Threat Models To Identify Security Risks
Not every device has the same requirements, and even the best security needs to adapt.


Data Memory-Dependent Prefetchers Pose SW Security Threat By Breaking Cryptographic Implementations

A technical paper titled “GoFetch: Breaking Constant-Time Cryptographic Implementations Using Data Memory-Dependent Prefetchers” was presented at the August 2024 USENIX Security Symposium by researchers at University of Illinois Urbana-Champaign, University of Texas at Austin, Georgia Institute of Technology, University of California Berkeley, University of Washington, and Carnegie Mellon University.

Abstract:

“Microarchitectural side-channel attacks have shaken the foundations of modern processor design. The cornerstone defense against these attacks has been to ensure that security-critical programs do not use secret-dependent data as addresses. Put simply: do not pass secrets as addresses to, e.g., data memory instructions. Yet, the discovery of data memory-dependent prefetchers (DMPs)—which turn program data into addresses directly from within the memory system—calls into question whether this approach will continue to remain secure.

This paper shows that the security threat from DMPs is significantly worse than previously thought and demonstrates the first end-to-end attacks on security-critical software using the Apple m-series DMP. Undergirding our attacks is a new understanding of how DMPs behave which shows, among other things, that the Apple DMP will activate on behalf of any victim program and attempt to “leak” any cached data that resembles a pointer. From this understanding, we design a new type of chosen-input attack that uses the DMP to perform end-to-end key extraction on popular constant-time implementations of classical (OpenSSL Diffie-Hellman Key Exchange, Go RSA decryption) and post-quantum cryptography (CRYSTALS-Kyber and CRYSTALS-Dilithium).”
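
The chosen-input idea can be conveyed with a toy model: craft inputs so that an intermediate value in the victim's constant-time computation crosses the prefetcher's "looks like a pointer" test exactly when a guess about the secret is right, turning DMP activation into a comparison oracle. Everything below (the activation heuristic, the victim, the heap range) is invented for illustration and does not describe the Apple DMP's real activation rules.

```python
# Toy model: the "DMP" fires when a value in program data falls inside a
# plausible heap range -- an invented heuristic standing in for the
# pointer-detection behavior reverse-engineered in the paper.
HEAP_LO, HEAP_HI = 1 << 32, 1 << 33

SECRET = 11          # small stand-in for key material

def dmp_activates(value: int) -> bool:
    return HEAP_LO <= value < HEAP_HI

def victim(chosen: int) -> int:
    # Invented constant-time victim: no secret-dependent branches or
    # addresses, just data flow mixing the secret with attacker input.
    return chosen + SECRET

def recover_secret(max_secret: int = 4096) -> int:
    # Binary search: with chosen input HEAP_LO - guess, the mixed value
    # crosses into the heap range iff SECRET >= guess, so each DMP
    # observation (cache timing, in a real attack) answers one comparison.
    lo, hi = 0, max_secret
    while lo < hi:
        guess = (lo + hi + 1) // 2
        if dmp_activates(victim(HEAP_LO - guess)):
            lo = guess
        else:
            hi = guess - 1
    return lo

assert recover_secret() == SECRET
```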

Find the technical paper here. Published August 2024.

Chen, Boru, Yingchen Wang, Pradyumna Shome, Christopher W. Fletcher, David Kohlbrenner, Riccardo Paccagnella, and Daniel Genkin. “GoFetch: Breaking constant-time cryptographic implementations using data memory-dependent prefetchers.” In Proc. USENIX Secur. Symp, pp. 1-21. 2024.

Further Reading
Chip Security Now Depends On Widening Supply Chain
How tighter HW-SW integration and increasing government involvement are changing the security landscape for chips and systems.

 



A New Low-Cost HW-Counter-Based RowHammer Mitigation Technique

A technical paper titled “ABACuS: All-Bank Activation Counters for Scalable and Low Overhead RowHammer Mitigation” was presented at the August 2024 USENIX Security Symposium by researchers at ETH Zurich.

Abstract:

“We introduce ABACuS, a new low-cost hardware-counter-based RowHammer mitigation technique that performance-, energy-, and area-efficiently scales with worsening RowHammer vulnerability. We observe that both benign workloads and RowHammer attacks tend to access DRAM rows with the same row address in multiple DRAM banks at around the same time. Based on this observation, ABACuS’s key idea is to use a single shared row activation counter to track activations to the rows with the same row address in all DRAM banks. Unlike state-of-the-art RowHammer mitigation mechanisms that implement a separate row activation counter for each DRAM bank, ABACuS implements fewer counters (e.g., only one) to track an equal number of aggressor rows.

Our comprehensive evaluations show that ABACuS securely prevents RowHammer bitflips at low performance/energy overhead and low area cost. We compare ABACuS to four state-of-the-art mitigation mechanisms. At a near-future RowHammer threshold of 1000, ABACuS incurs only 0.58% (0.77%) performance and 1.66% (2.12%) DRAM energy overheads, averaged across 62 single-core (8-core) workloads, requiring only 9.47 KiB of storage per DRAM rank. At the RowHammer threshold of 1000, the best prior low-area-cost mitigation mechanism incurs 1.80% higher average performance overhead than ABACuS, while ABACuS requires 2.50× smaller chip area to implement. At a future RowHammer threshold of 125, ABACuS performs very similarly to (within 0.38% of the performance of) the best prior performance- and energy-efficient RowHammer mitigation mechanism while requiring 22.72× smaller chip area. We show that ABACuS’s performance scales well with the number of DRAM banks. At the RowHammer threshold of 125, ABACuS incurs 1.58%, 1.50%, and 2.60% performance overheads for 16-, 32-, and 64-bank systems across all single-core workloads, respectively. ABACuS is freely and openly available at https://github.com/CMU-SAFARI/ABACuS.”
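
The single-shared-counter idea can be sketched in a few lines. This is a loose approximation based on the abstract alone: advancing the shared count only when some bank re-activates the row approximates the per-bank maximum, while the real design's per-bank sibling-activation state, counter eviction, and reset rules are simplified away.

```python
ROWHAMMER_THRESHOLD = 1000   # the "near-future" threshold from the evaluation

class AbacusLikeCounter:
    """One shared counter per row address, covering that address across all
    banks -- an illustrative approximation of ABACuS's key idea, not the
    paper's exact mechanism."""

    def __init__(self, threshold=ROWHAMMER_THRESHOLD):
        self.threshold = threshold
        self.state = {}   # row address -> (shared count, banks seen this round)

    def on_activate(self, bank: int, row: int):
        count, seen = self.state.get(row, (0, set()))
        if bank in seen:
            # A repeat activation from the same bank means every bank could
            # have activated this row once more; bump the shared count.
            count, seen = count + 1, set()
        seen.add(bank)
        if count >= self.threshold:
            self.mitigate(row)
            count, seen = 0, set()
        self.state[row] = (count, seen)

    def mitigate(self, row: int):
        # Stand-in for refreshing the row's neighbors in every bank.
        print(f"mitigate neighbors of row {row:#x} across all banks")
```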

Find the technical paper here.

Olgun, Ataberk, Yahya Can Tugrul, Nisa Bostanci, Ismail Emir Yuksel, Haocong Luo, Steve Rhyner, Abdullah Giray Yaglikci, Geraldo F. Oliveira, and Onur Mutlu. “ABACuS: All-Bank Activation Counters for Scalable and Low Overhead RowHammer Mitigation.” In USENIX Security. 2024.

Further Reading
Securing DRAM Against Evolving Rowhammer Threats
A multi-layered, system-level approach is crucial to DRAM protection.



Heterogeneity Of 3DICs As A Security Vulnerability

A new technical paper titled “Harnessing Heterogeneity for Targeted Attacks on 3-D ICs” was published by Drexel University.

Abstract
“As 3-D integrated circuits (ICs) increasingly pervade the microelectronics industry, the integration of heterogeneous components presents a unique challenge from a security perspective. To this end, an attack on a victim die of a multi-tiered heterogeneous 3-D IC is proposed and evaluated. By utilizing on-chip inductive circuits and transistors with low voltage threshold (LVT), a die based on CMOS technology is proposed that includes a sensor to monitor the electromagnetic (EM) emissions from the normal function of a victim die, without requiring physical probing. The adversarial circuit is self-powered through the use of thermocouples that supply the generated current to circuits that sense EM emissions. Therefore, the integration of disparate technologies in a single 3-D circuit allows for a stealthy, wireless, and non-invasive side-channel attack. A thin-film thermo-electric generator (TEG) is developed that produces a 115 mV voltage source, which is amplified 5× through a voltage booster to provide power to the adversarial circuit. An on-chip inductor is also developed as a component of a sensing array, which detects changes to the magnetic field induced by the computational activity of the victim die. In addition, the challenges associated with detecting and mitigating such attacks are discussed, highlighting the limitations of existing security mechanisms in addressing the multifaceted nature of vulnerabilities due to the heterogeneity of 3-D ICs.”

Find the technical paper here. Published June 2024.

Alec Aversa and Ioannis Savidis. 2024. Harnessing Heterogeneity for Targeted Attacks on 3-D ICs. In Proceedings of the Great Lakes Symposium on VLSI 2024 (GLSVLSI ’24). Association for Computing Machinery, New York, NY, USA, 246–251. https://doi.org/10.1145/3649476.3660385.


Secure Low-Cost In-DRAM Trackers For Mitigating Rowhammer (Georgia Tech, Google, Nvidia)

A new technical paper titled “MINT: Securely Mitigating Rowhammer with a Minimalist In-DRAM Tracker” was published by researchers at Georgia Tech, Google, and Nvidia.

Abstract
“This paper investigates secure low-cost in-DRAM trackers for mitigating Rowhammer (RH). In-DRAM solutions have the advantage that they can solve the RH problem within the DRAM chip, without relying on other parts of the system. However, in-DRAM mitigation suffers from two key challenges: First, the mitigations are synchronized with refresh, which means we cannot mitigate at arbitrary times. Second, the SRAM area available for aggressor tracking is severely limited, to only a few bytes. Existing low-cost in-DRAM trackers (such as TRR) have been broken by well-crafted access patterns, whereas prior counter-based schemes require impractical overheads of hundreds or thousands of entries per bank. The goal of our paper is to develop an ultra low-cost secure in-DRAM tracker.

Our solution is based on a simple observation: if only one row can be mitigated at refresh, then we should ideally need to track only one row. We propose a Minimalist In-DRAM Tracker (MINT), which provides secure mitigation with just a single entry. At each refresh, MINT probabilistically decides which activation in the upcoming interval will be selected for mitigation at the next refresh. MINT provides guaranteed protection against classic single and double-sided attacks. We also derive the minimum RH threshold (MinTRH) tolerated by MINT across all patterns. MINT has a MinTRH of 1482 which can be lowered to 356 with RFM. The MinTRH of MINT is lower than a prior counter-based design with 677 entries per bank, and is within 2x of the MinTRH of an idealized design that stores one-counter-per-row. We also analyze the impact of refresh postponement on the MinTRH of low-cost in-DRAM trackers, and propose an efficient solution to make such trackers compatible with refresh postponement.”
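
The single-entry selection maps naturally onto size-one reservoir sampling: keep the k-th activation of the interval with probability 1/k, so that after N activations each one was selected with probability exactly 1/N. The sketch below is an illustrative reading of the abstract, not the paper's exact selection rule.

```python
import random

class MintLikeTracker:
    """Single-entry probabilistic tracker in the spirit of MINT: per refresh
    interval, select one activation uniformly at random and mitigate that
    row at the next refresh. Illustrative only."""

    def __init__(self):
        self.selected_row = None  # the single tracker entry
        self.acts_seen = 0        # activations in the current interval

    def on_activate(self, row: int):
        self.acts_seen += 1
        # Size-one reservoir sampling: replace the entry with probability
        # 1/k on the k-th activation of the interval.
        if random.randrange(self.acts_seen) == 0:
            self.selected_row = row

    def on_refresh(self):
        victim = self.selected_row  # mitigate this row's neighbors now
        self.selected_row, self.acts_seen = None, 0
        return victim
```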

Find the technical paper here. Preprint published July 2024.

Qureshi, Moinuddin, Salman Qazi, and Aamer Jaleel. “MINT: Securely Mitigating Rowhammer with a Minimalist In-DRAM Tracker.” arXiv preprint arXiv:2407.16038 (2024).



NeuroHammer Attacks on ReRAM-Based Memories

June 21, 2024, 18:32

A new technical paper titled “NVM-Flip: Non-Volatile-Memory BitFlips on the System Level” was published by researchers at Ruhr-University Bochum, University of Duisburg-Essen, and Robert Bosch.

Abstract
“Emerging non-volatile memories (NVMs) are promising candidates to substitute conventional memories due to their low access latency, high integration density, and non-volatility. These superior properties stem from the memristor representing the centerpiece of each memory cell and is branded as the fourth fundamental circuit element. Memristors encode information in the form of its resistance by altering the physical characteristics of their filament. Hence, each memristor can store multiple bits increasing the memory density and positioning it as a potential candidate to replace DRAM and SRAM-based memories, such as caches.

However, new security risks arise with the benefits of these emerging technologies, like the recent NeuroHammer attack, which allows adversaries to deliberately flip bits in ReRAMs. While NeuroHammer has been shown to flip single bits within memristive crossbar arrays, the system-level impact remains unclear. Considering the significance of the Rowhammer attack on conventional DRAMs, NeuroHammer can potentially cause crucial damage to applications taking advantage of emerging memory technologies.

To answer this question, we introduce NVgem5, a versatile system-level simulator based on gem5. NVgem5 is capable of injecting bit-flips in eNVMs originating from NeuroHammer. Our experiments evaluate the impact of the NeuroHammer attack on main and cache memories. In particular, we demonstrate a single-bit fault attack on cache memories leaking the secret key used during the computation of RSA signatures. Our findings highlight the need for improved hardware security measures to mitigate the risk of hardware-level attacks in computing systems based on eNVMs.”
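
The RSA case study echoes the classic Bellcore fault attack on RSA-CRT: if a single bit flip corrupts one half of a CRT signature computation, one faulty signature factors the modulus outright. The toy numbers below demonstrate that arithmetic; the Bellcore connection is a reading of the abstract, and the paper's actual fault path through NVgem5's cache model is more involved.

```python
from math import gcd

# Toy RSA-CRT parameters (absurdly small; illustration only).
p, q = 61, 53
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

def sign_crt(m, fault=False):
    # RSA-CRT: compute the signature mod p and mod q, then recombine.
    sp = pow(m, d % (p - 1), p)
    sq = pow(m, d % (q - 1), q)
    if fault:
        sq ^= 1                      # one bit flip corrupts the mod-q half
    h = (pow(q, -1, p) * (sp - sq)) % p   # CRT recombination
    return (sq + h * q) % n

m = 1234
assert pow(sign_crt(m), e, n) == m       # fault-free signature verifies

s_bad = sign_crt(m, fault=True)
# Bellcore attack: s_bad is still correct mod p but wrong mod q, so
# s_bad^e - m is divisible by p and (almost surely) not by q.
print(gcd((pow(s_bad, e, n) - m) % n, n))   # -> 61, a factor of n
```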

Find the technical paper here. Published June 2024.

Felix Staudigl, Jan Philipp Thoma, Christian Niesler, Karl Sturm, Rebecca Pelke, Dominik Germek, Jan Moritz Joseph, Tim Güneysu, Lucas Davi, and Rainer Leupers. 2024. NVM-Flip: Non-Volatile-Memory BitFlips on the System Level. In Proceedings of the 2024 ACM Workshop on Secure and Trustworthy Cyber-Physical Systems (SaT-CPS ’24). Association for Computing Machinery, New York, NY, USA, 11–20. https://doi.org/10.1145/3643650.3658606


DRAM Microarchitectures And Their Impacts On Activate-Induced Bitflips Such As RowHammer 

May 19, 2024, 22:57

A technical paper titled “DRAMScope: Uncovering DRAM Microarchitecture and Characteristics by Issuing Memory Commands” was published by researchers at Seoul National University and University of Illinois at Urbana-Champaign.

Abstract:

“The demand for precise information on DRAM microarchitectures and error characteristics has surged, driven by the need to explore processing in memory, enhance reliability, and mitigate security vulnerability. Nonetheless, DRAM manufacturers have disclosed only a limited amount of information, making it difficult to find specific information on their DRAM microarchitectures. This paper addresses this gap by presenting more rigorous findings on the microarchitectures of commodity DRAM chips and their impacts on the characteristics of activate-induced bitflips (AIBs), such as RowHammer and RowPress. The previous studies have also attempted to understand the DRAM microarchitectures and associated behaviors, but we have found some of their results to be misled by inaccurate address mapping and internal data swizzling, or lack of a deeper understanding of the modern DRAM cell structure. For accurate and efficient reverse-engineering, we use three tools: AIBs, retention time test, and RowCopy, which can be cross-validated. With these three tools, we first take a macroscopic view of modern DRAM chips to uncover the size, structure, and operation of their subarrays, memory array tiles (MATs), and rows. Then, we analyze AIB characteristics based on the microscopic view of the DRAM microarchitecture, such as 6F^2 cell layout, through which we rectify misunderstandings regarding AIBs and discover a new data pattern that accelerates AIBs. Lastly, based on our findings at both macroscopic and microscopic levels, we identify previously unknown AIB vulnerabilities and propose a simple yet effective protection solution.”

Find the technical paper here. Published May 2024.

Nam, Hwayong, Seungmin Baek, Minbok Wi, Michael Jaemin Kim, Jaehyun Park, Chihun Song, Nam Sung Kim, and Jung Ho Ahn. “DRAMScope: Uncovering DRAM Microarchitecture and Characteristics by Issuing Memory Commands.” arXiv preprint arXiv:2405.02499 (2024).

Related Reading
Securing DRAM Against Evolving Rowhammer Threats
A multi-layered, system-level approach is crucial to DRAM protection.

 



Using AI/ML To Combat Cyberattacks

By John Koon
May 9, 2024, 09:07

Machine learning is being used by hackers to find weaknesses in chips and systems, but it also is starting to be used to prevent breaches by pinpointing hardware and software design flaws.

To make this work, machine learning (ML) must be trained to identify vulnerabilities, both in hardware and software. With proper training, ML can detect cyber threats and prevent them from accessing critical data. As ML encounters additional cyberattack scenarios, it can learn and adapt, helping to build a more sophisticated defense system that includes hardware, software, and how they interface with larger systems. It also can automate many cyber defense tasks with minimum human intervention, which saves time, effort, and money.

ML is capable of sifting through large volumes of data much faster than humans. Potentially, it can reduce or remove human errors, lower costs, and boost cyber defense capability and overall efficiency. It also can perform such tasks as connection authentication, system design, vulnerability detection, and most important, threat detection through pattern and behavioral analysis.

“AI/ML is finding many roles protecting and enhancing security for digital devices and services,” said David Maidment, senior director of market development at Arm. “However, it is also being used as a tool for increasingly sophisticated attacks by threat actors. AI/ML is essentially a tool tuned for very advanced pattern recognition across vast data sets. Examples of how AI/ML can enhance security include network-based monitoring to spot rogue behaviors at scale, code analysis to look for vulnerabilities on new and legacy software, and automating the deployment of software to keep devices up-to-date and secure.”

This means that while AI/ML can be used as a force for good, inevitably bad actors will use it to increase the sophistication and scale of attacks. “Building devices and services based on security best practices, having a hardware-protected root of trust (RoT), and an industry-wide methodology to standardize and measure security are all essential,” Maidment said. “The focus on security, including the rapid growth of AI/ML, is certainly driving industry and government discussions as we work on solutions to maximize AI/ML’s benefits and minimize any potential harmful impact.”

Zero trust is a fundamental requirement when it comes to cybersecurity. Before a user or device is allowed to connect to the network or server, requests have to be authenticated to make sure they are legitimate and authorized. ML will enhance the authentication process, including password management, phishing prevention, and malware detection.

Areas that bad actors look to exploit are software design vulnerabilities and weak points in systems and networks. Once hackers uncover these vulnerabilities, they can be used as a point of entrance to the network or systems. ML can detect these vulnerabilities and alert administrators.

Taking a proactive approach by doing threat detection is essential in cyber defense. ML pattern and behavioral analysis strengths support this strategy. When ML detects unusual behavior in data traffic flow or patterns, it sends an alert about abnormal behavior to the administrator. This is similar to the banking industry’s practice of watching for credit card use that does not follow an established pattern. A large purchase overseas on a credit card with a pattern of U.S. use only for moderate amounts would trigger an alert, for example.
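
As a concrete miniature of the pattern-based alerting described above, the sketch below flags transactions that deviate sharply from an account's established spending profile. The z-score rule, threshold, and numbers are invented for illustration; production systems use far richer features and learned models.

```python
import statistics

def flag_anomalies(history, candidates, z_threshold=3.0):
    """Flag candidate transaction amounts that deviate sharply from the
    account's established spending pattern (illustrative z-score rule)."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history) or 1.0   # avoid divide-by-zero
    return [amt for amt in candidates if abs(amt - mean) / stdev > z_threshold]

# Moderate, U.S.-style purchase history; the single large overseas-sized
# charge falls far outside the pattern and would trigger an alert.
history = [42.0, 18.5, 63.0, 29.9, 51.2, 37.8, 44.1]
print(flag_anomalies(history, [39.0, 2750.0]))   # -> [2750.0]
```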

As hackers become more sophisticated with new attack vectors, whether it is new ransomware or distributed denial of service (DDoS) attacks, ML will do a much better job than humans in detecting these unknown threats.

Limitations of ML in cybersecurity
While ML provides many benefits, its value depends on the data used to train it. The more that can be used to train the ML model, the better it is at detecting fraud and cyber threats. But acquiring this data raises overall cybersecurity system design expenses. The model also needs constant maintenance and tuning to sustain peak performance and meet the specific needs of users. And while ML can do many of the tasks, it still requires some human involvement, so it’s essential to understand both cybersecurity and how well ML functions.

While ML is effective in fending off many of the cyberattacks, it is not a panacea. “The specific type of artificial intelligence typically referenced in this context is machine learning (ML), which is the development of algorithms that can ingest large volumes of training data, then generalize and make meaningful observations and decisions based on novel data,” said Scott Register, vice president of security solutions at Keysight Technologies. “With the right algorithms and training, AI/ML can be used to pinpoint cyberattacks which might otherwise be difficult to detect.”

However, no one — at least in the commercial space — has delivered a product that can detect very subtle cyberattacks with complete accuracy. “The algorithms are getting better all the time, so it’s highly probable that we’ll soon have commercial products that can detect and respond to attacks,” Register said. “We must keep in mind, however, that attackers don’t sit still, and they’re well-funded and patient. They employ ‘offensive AI,’ which means they use the same types of techniques and algorithms to generate attacks which are unlikely to be detected.”

ML implementation considerations
For any ML implementation, a strong cyber defense system is essential, but there’s no such thing as a completely secure design. Instead, security is a dynamic and ongoing process that requires constant fine-tuning and improvement against ever-changing cyberattacks. Implementing ML requires a clear security roadmap, which should define requirements. It also requires implementing a good cybersecurity process, which secures individual hardware and software components, as well as some type of system testing.

“One of the things we advise is to start with threat modeling to identify a set of critical design assets to protect from an adversary under confidentiality or integrity,” said Jason Oberg, CTO at Cycuity. “From there, you can define a set of very succinct, secure requirements for the assets. All of this work is typically done at the architecture level. We do provide education, training and guidance to our customers, because at that level, if you don’t have succinct security requirements defined, then it’s really hard to verify or check something in the design. What often happens is customers will say, ‘I want to have a secure chip.’ But it’s not as easy as just pressing a button and getting a green check mark that confirms the chip is now secure.”

To be successful, engineering teams must start at the architectural stages and define the security requirements. “Once that is done, they can start actually writing the RTL,” Oberg said. “There are tools available to provide assurances these security requirements are being met, and run within the existing simulation and emulation environments to help validate the security requirements, and help identify any unknown design weaknesses. Generally, this helps hardware and verification engineers increase their productivity and build confidence that the system is indeed meeting the security requirements.”

Fig. 1: A cybersecurity model includes multiple stages, progressing from the very basic to in-depth. It is important for organizations to know what stage their cyber defense systems have reached. Source: Cycuity

Steve Garrison, senior vice president of marketing at Stellar Cyber, noted that the detection process can generate so many data files that they become difficult for humans to sort through. Graphical displays can speed up the process and reduce the overall mean time to detection (MTTD) and mean time to response (MTTR).

Fig. 2: Using graphical displays can reduce the overall mean time to detection (MTTD) and mean time to response (MTTR). Source: Stellar Cyber

Testing is essential
Another important stage in the design process is testing. Each system design requires a rigorous attack-simulation tool to weed out basic oversights and ensure it meets the predefined standard.

“First, if you want to understand how defensive systems will function in the real world, it’s important to test them under conditions, which are as realistic as possible,” Keysight’s Register said. “The network environment should have the same amount of traffic, mix of applications, speeds, behavioral characteristics, and timing as the real world. For example, the timing of a sudden uptick in email and social media traffic corresponds to the time when people open up their laptops at work. The attack traffic needs to be as realistic as possible as well – hackers try hard not to be noticed, often preferring ‘low and slow’ attacks, which may take hours or days to complete, making detection much more difficult. The same obfuscation techniques, encryption, and decoy traffic employed by threat actors needs to be simulated as accurately as possible.”

Further, due to mistaken assumptions during testing, defensive systems often perform well in the lab, yet fail spectacularly in production networks. “Afterwards we hear, for example, ‘I didn’t think hackers would encrypt their malware,’ or ‘Internal e-mails weren’t checked for malicious attachments, only those from external senders,’” Register explained. “Also, in security testing, currency is key. Attacks and obfuscation techniques are constantly evolving. If a security system is tested against stale attacks, then the value of that testing is limited. The offensive tools should be kept as up to date as possible to ensure the most effective performance against the tools a system is likely to encounter in the wild.”

Semiconductor security
Almost all system designs depend on semiconductors, so it is important to ensure that any and all chips, firmware, FPGAs, and SoCs are secure – including those that perform ML functionality.

“Semiconductor security is a constantly evolving problem and requires an adaptable solution,” said Jayson Bethurem, vice president of marketing and business development at Flex Logix. “Fixed solutions with current cryptography that are implemented today will inevitably be challenged in the future. Hackers today have more time, resources, training, and motivation to disrupt technology. With technology increasing in every facet of our lives, defending against this presents a real challenge. We also have to consider upcoming threats, namely quantum computing.”

Many predict that quantum computing will be able to crack current cryptography solutions in the next few years. “Fortunately, semiconductor manufacturers have solutions that can enable cryptography agility, which can dynamically adapt to evolving threats,” Bethurem said. “This includes both updating hardware accelerated cryptography algorithms and obfuscating them, an approach that increases root of trust and protects valuable IP secrets. Advanced solutions like these also involve devices randomly creating their own encryption keys, making it harder for algorithms to crack encryption codes.”

Advances in AI/ML algorithms can adapt to new threats and reduce the latency of algorithm updates from manufacturers. This is particularly useful with reconfigurable eFPGA IP, which can be implemented in any semiconductor device to counter current and emerging threats, and can be optimized to run AI/ML-based cryptography solutions. The result is a combination of high-performance processing, scalability, and low-latency attack response.

Chips that support AI/ML algorithms need not only computing power, but also accelerators for those algorithms. In addition, all of this needs to happen without exceeding a tight power budget.

“More AI/ML systems run at tiny edges rather than at the core,” said Detlef Houdeau, senior director of design system architecture at Infineon Technologies. “AI/ML systems don’t need any bigger computer and/or cloud. For instance, a Raspberry Pi for a robot in production can have more than 3 AI/ML algorithms working in parallel. A smartphone has more than 10 AI/ML functions in the phone, and downloading new apps brings new AI/ML algorithms into the device. A pacemaker can have 2 AI/ML algorithms. Security chips, meanwhile, need a security architecture as well as accelerators for encryption. Combining an AI/ML accelerator with an encryption accelerator in the same chip could increase the performance in microcontroller units, and at the same time foster more security at the edge. The next generation of microelectronics could show this combination.”

After developers have gone through design reviews and the systems have passed rigorous tests, it helps to obtain third-party certification and/or credentials to confirm, from an independent viewpoint, that the systems are indeed secure.

“As AI, and recently generative AI, continue to transform all markets, there will be new attack vectors to mitigate against,” said Arm’s Maidment. “We expect to see networks become smarter in the way they monitor traffic and behaviors. The use of AI/ML allows network-based monitoring at scale to allow potential unexpected or rogue behavior to be identified and isolated. Automating network monitoring based on AI/ML will allow an extra layer of defense as networks scale out and establish effectively a ‘zero trust’ approach. With this approach, analysis at scale can be tuned to look at particular threat vectors depending on the use case.”

With an increase in AI/ML adoption at the edge, a lot of this is taking place on the CPU. “Whether it is handling workloads in their entirety, or in combination with a co-processor like a GPU or NPU, how applications are deployed across the compute resources needs to be secure and managed centrally within the edge AI/ML device,” Maidment said. “Building edge AI/ML devices based on a hardware root of trust is essential. It is critical to have privileged access control of what code is allowed to run where using a trusted memory management architecture. Arm continually invests in security, and the Armv9 architecture offers a number of new security features. Alongside architecture improvements, we continue to work in partnership with the industry on our ecosystem security framework and certification scheme, PSA Certified, which is based on a certified hardware RoT. This hardware base helps to improve the security of systems and fulfill the consumer expectation that as devices scale, they remain secure.”

Outlook
It is important to understand that threat actors will continue to evolve attacks using AI/ML. Experts suggest that to counter such attacks, organizations, institutions, and government agencies will have to continually improve defense strategies and capabilities, including AI/ML deployment.

AI/ML can be used as a weapon by attackers for industrial espionage and/or sabotage, and stopping incursions will require a broad range of cyberattack prevention and detection tools, including AI/ML functionality for anomaly detection. But in general, hackers are almost always one step ahead.

According to Register, “the recurring cycle is: 1) hackers come out with a new tool or technology that lets them attack systems or evade detection more effectively; 2) those attacks cause enough economic damage that the industry responds and develops effective countermeasures; 3) the no-longer-new hacker tools are still employed effectively, but against targets that haven’t bothered to update their defenses; 4) hackers develop new offensive tools that are effective against the defensive techniques of high-value targets, and the cycle starts anew.”

Related Reading
Securing Chip Manufacturing Against Growing Cyber Threats
Suppliers are the number one risk, but reducing attacks requires industry-wide collaboration.
Data Center Security Issues Widen
The number and breadth of hardware targets is increasing, but older attack vectors are not going away. Hackers are becoming more sophisticated, and they have a big advantage.

The post Using AI/ML To Combat Cyberattacks appeared first on Semiconductor Engineering.

  • ✇Semiconductor Engineering
  • Using AI/ML To Combat CyberattacksJohn Koon
    Machine learning is being used by hackers to find weaknesses in chips and systems, but it also is starting to be used to prevent breaches by pinpointing hardware and software design flaws. To make this work, machine learning (ML) must be trained to identify vulnerabilities, both in hardware and software. With proper training, ML can detect cyber threats and prevent them from accessing critical data. As ML encounters additional cyberattack scenarios, it can learn and adapt, helping to build a mor
     

Using AI/ML To Combat Cyberattacks

Od: John Koon
9. Květen 2024 v 09:07

Machine learning is being used by hackers to find weaknesses in chips and systems, but it also is starting to be used to prevent breaches by pinpointing hardware and software design flaws.

To make this work, machine learning (ML) must be trained to identify vulnerabilities, both in hardware and software. With proper training, ML can detect cyber threats and prevent them from accessing critical data. As ML encounters additional cyberattack scenarios, it can learn and adapt, helping to build a more sophisticated defense system that includes hardware, software, and how they interface with larger systems. It also can automate many cyber defense tasks with minimum human intervention, which saves time, effort, and money.

ML is capable of sifting through large volumes of data much faster than humans. Potentially, it can reduce or remove human errors, lower costs, and boost cyber defense capability and overall efficiency. It also can perform such tasks as connection authentication, system design, vulnerability detection, and most important, threat detection through pattern and behavioral analysis.

“AI/ML is finding many roles protecting and enhancing security for digital devices and services,” said David Maidment, senior director of market development at Arm. “However, it is also being used as a tool for increasingly sophisticated attacks by threat actors. AI/ML is essentially a tool tuned for very advanced pattern recognition across vast data sets. Examples of how AI/ML can enhance security include network-based monitoring to spot rogue behaviors at scale, code analysis to look for vulnerabilities on new and legacy software, and automating the deployment of software to keep devices up-to-date and secure.”

This means that while AI/ML can be used as a force for good, inevitably bad actors will use it to increase the sophistication and scale of attacks. “Building devices and services based on security best practices, having a hardware-protected root of trust (RoT), and an industry-wide methodology to standardize and measure security are all essential,” Maidment said. “The focus on security, including the rapid growth of AI/ML, is certainly driving industry and government discussions as we work on solutions to maximize AI/ML’s benefits and minimize any potential harmful impact.”

Zero trust is a fundamental requirement when it comes to cybersecurity. Before a user or device is allowed to connect to the network or server, requests have to be authenticated to make sure they are legitimate and authorized. ML will enhance the authentication process, including password management, phishing prevention, and malware detection.

Areas that bad actors look to exploit are software design vulnerabilities and weak points in systems and networks. Once hackers uncover these vulnerabilities, they can be used as a point of entrance to the network or systems. ML can detect these vulnerabilities and alert administrators.

Taking a proactive approach by doing threat detection is essential in cyber defense. ML pattern and behavioral analysis strengths support this strategy. When ML detects unusual behavior in data traffic flow or patterns, it sends an alert about abnormal behavior to the administrator. This is similar to the banking industry’s practice of watching for credit card use that does not follow an established pattern. A large purchase overseas on a credit card with a pattern of U.S. use only for moderate amounts would trigger an alert, for example.

As hackers become more sophisticated with new attack vectors, whether it is new ransomware or distributed denial of service (DDoS) attacks, ML will do a much better job than humans in detecting these unknown threats.

Limitations of ML in cybersecurity
While ML provides many benefits, its value depends on the data used to train it. The more that can be used to train the ML model, the better it is at detecting fraud and cyber threats. But acquiring this data raises overall cybersecurity system design expenses. The model also needs constant maintenance and tuning to sustain peak performance and meet the specific needs of users. And while ML can do many of the tasks, it still requires some human involvement, so it’s essential to understand both cybersecurity and how well ML functions.

While ML is effective in fending off many of the cyberattacks, it is not a panacea. “The specific type of artificial intelligence typically referenced in this context is machine learning (ML), which is the development of algorithms that can ingest large volumes of training data, then generalize and make meaningful observations and decisions based on novel data,” said Scott Register, vice president of security solutions at Keysight Technologies. “With the right algorithms and training, AI/ML can be used to pinpoint cyberattacks which might otherwise be difficult to detect.”

However, no one — at least in the commercial space — has delivered a product that can detect very subtle cyberattacks with complete accuracy. “The algorithms are getting better all the time, so it’s highly probable that we’ll soon have commercial products that can detect and respond to attacks,” Register said. “We must keep in mind, however, that attackers don’t sit still, and they’re well-funded and patient. They employ ‘offensive AI,’ which means they use the same types of techniques and algorithms to generate attacks which are unlikely to be detected.”

ML implementation considerations
For any ML implementation, a strong cyber defense system is essential, but there’s no such thing as a completely secure design. Instead, security is a dynamic and ongoing process that requires constant fine-tuning and improvement against ever-changing cyberattacks. Implementing ML requires a clear security roadmap, which should define requirements. It also requires implementing a good cybersecurity process, which secures individual hardware and software components, as well as some type of system testing.

“One of the things we advise is to start with threat modeling to identify a set of critical design assets to protect from an adversary under confidentiality or integrity,” said Jason Oberg, CTO at Cycuity. “From there, you can define a set of very succinct, secure requirements for the assets. All of this work is typically done at the architecture level. We do provide education, training and guidance to our customers, because at that level, if you don’t have succinct security requirements defined, then it’s really hard to verify or check something in the design. What often happens is customers will say, ‘I want to have a secure chip.’ But it’s not as easy as just pressing a button and getting a green check mark that confirms the chip is now secure.”

To be successful, engineering teams must start at the architectural stages and define the security requirements. “Once that is done, they can start actually writing the RTL,” Oberg said. “There are tools available to provide assurances these security requirements are being met, and run within the existing simulation and emulation environments to help validate the security requirements, and help identify any unknown design weaknesses. Generally, this helps hardware and verification engineers increase their productivity and build confidence that the system is indeed meeting the security requirements.”

Figure 1: A cybersecurity model includes multiple stages, progressing from the very basic to in-depth. It is important for organizations to know what stages their cyber defense system are. Source: Cycuity

Fig. 1: A cybersecurity model includes multiple stages, progressing from the very basic to in-depth. It is important for organizations to know what stages their cyber defense system are. Source: Cycuity

Steve Garrison, senior vice president, marketing of Stellar Cyber, noted that if cyber threats were uncovered during the detection process, so many data files may be generated that they will be difficult for humans to sort through. Graphical displays can speed up the process and reduce the overall mean time to detection (MTTD) and mean time to response (MTTR).

Figure 2: Using graphical displays  would reduce the overall meantime to detection (MTTD) and meantime to response (MTTR). Source: Stellar Cyber

Fig. 2: Using graphical displays  would reduce the overall meantime to detection (MTTD) and meantime to response (MTTR). Source: Stellar Cyber

Testing is essential
Another important stage in the design process is testing, whereby each system design requires a vigorous attack simulation tool to weed out the basic oversights to ensure it meets the predefined standard.

“First, if you want to understand how defensive systems will function in the real world, it’s important to test them under conditions, which are as realistic as possible,” Keysight’s Register said. “The network environment should have the same amount of traffic, mix of applications, speeds, behavioral characteristics, and timing as the real world. For example, the timing of a sudden uptick in email and social media traffic corresponds to the time when people open up their laptops at work. The attack traffic needs to be as realistic as possible as well – hackers try hard not to be noticed, often preferring ‘low and slow’ attacks, which may take hours or days to complete, making detection much more difficult. The same obfuscation techniques, encryption, and decoy traffic employed by threat actors needs to be simulated as accurately as possible.”

Further, due to mistaken assumptions during testing, defensive systems often perform great in the lab, yet fail spectacularly in production networks.  “Afterwards we hear, for example, ‘I didn’t think hackers would encrypt their malware,’ or ‘Internal e-mails weren’t checked for malicious attachments, only those from external senders,’” Register explained. “Also, in security testing, currency is key. Attacks and obfuscation techniques are constantly evolving. If a security system is tested against stale attacks, then the value of that testing is limited. The offensive tools should be kept as up to date as possible to ensure the most effective performance against the tools a system is likely to encounter in the wild.”

Semiconductor security
Almost all system designs depend on semiconductors, so it is important to ensure that any and all chips, firmware, FPGAs, and SoCs are secure – including those that perform ML functionality.

“Semiconductor security is a constantly evolving problem and requires an adaptable solution, said Jayson Bethurem, vice president marketing and business development at Flex Logix. “Fixed solutions with current cryptography that are implemented today will inevitably be challenged in the future. Hackers today have more time, resources, training, and motivation to disrupt technology. With technology increasing in every facet of our lives, defending against this presents a real challenge. We also have to consider upcoming threats, namely quantum computing.”

Many predict that quantum computing will be able to crack current cryptography solutions in the next few years. “Fortunately, semiconductor manufacturers have solutions that can enable cryptography agility, which can dynamically adapt to evolving threats,” Bethurem said. “This includes both updating hardware accelerated cryptography algorithms and obfuscating them, an approach that increases root of trust and protects valuable IP secrets. Advanced solutions like these also involve devices randomly creating their own encryption keys, making it harder for algorithms to crack encryption codes.”

Advances in AI/ML algorithms can adapt to new threats and reduce latency of algorithm updates from manufacturers. This is particularly useful with reconfigurable eFPGA IP, which can be implemented into any semiconductor device to thwart all current and future threats and optimized to run AI/ML-based cryptography solutions. The result is a combination of high-performance processing, scalability, and low-latency attack response.

Chips that support AI/ML algorithms need not only computing power, but also accelerators for those algorithms. In addition, all of this needs to happen without exceeding a tight power budget.

“More AI/ML systems run at tiny edges rather than at the core,” said Detlef Houdeau, senior director of design system architecture at Infineon Technologies. “AI/ML systems don’t need any bigger computer and/or cloud. For instance, a Raspberry Pi for a robot in production can have more than 3 AI/ML algorithms working in parallel. A smartphone has more than 10 AI/ML functions in the phone, and downloading new apps brings new AI/ML algorithms into the device. A pacemaker can have 2 AI/ML algorithms. Security chips, meanwhile, need a security architecture as well as accelerators for encryption. Combining an AI/ML accelerator with an encryption accelerator in the same chip could increase the performance in microcontroller units, and at the same time foster more security at the edge. The next generation of microelectronics could show this combination.”

After developers have gone through design reviews and the systems have run vigorous tests, it helps to have third-party certification and/or credentials to ensure the systems are indeed secure from a third-party independent viewpoint.

“As AI, and recently generative AI, continue to transform all markets, there will be new attack vectors to mitigate,” said Arm’s Maidment. “We expect to see networks become smarter in the way they monitor traffic and behaviors. The use of AI/ML allows network-based monitoring at scale, so potential unexpected or rogue behavior can be identified and isolated. Automating network monitoring based on AI/ML will allow an extra layer of defense as networks scale out and effectively establish a ‘zero trust’ approach. With this approach, analysis at scale can be tuned to look at particular threat vectors depending on the use case.”
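As a rough illustration of the kind of AI/ML-based network monitoring Maidment describes, the sketch below trains an unsupervised anomaly detector on “normal” flow statistics and flags outliers for isolation. The feature set, values, and contamination rate are invented for the example; a production system would use far richer telemetry and tuning.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Toy flow features: [bytes_sent, packets, distinct_ports, session_seconds]
normal_flows = rng.normal(loc=[5e4, 400, 3, 60],
                          scale=[1e4, 80, 1, 15],
                          size=(5000, 4))

# Learn a model of "normal" behavior; roughly 1% of traffic assumed anomalous.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_flows)

# A flow pushing far more data to far more ports should score as -1 (outlier).
suspect = np.array([[9e5, 12000, 40, 600]])
print(model.predict(suspect))  # -1 => isolate the endpoint and escalate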

With an increase in AI/ML adoption at the edge, a lot of this is taking place on the CPU. “Whether it is handling workloads in their entirety, or in combination with a co-processor like a GPU or NPU, how applications are deployed across the compute resources needs to be secure and managed centrally within the edge AI/ML device,” Maidment said. “Building edge AI/ML devices based on a hardware root of trust is essential. It is critical to have privileged access control over what code is allowed to run where, using a trusted memory management architecture. Arm continually invests in security, and the Armv9 architecture offers a number of new security features. Alongside architecture improvements, we continue to work in partnership with the industry on our ecosystem security framework and certification scheme, PSA Certified, which is based on a certified hardware RoT. This hardware base helps to improve the security of systems and fulfill the consumer expectation that as devices scale, they remain secure.”

Outlook
It is important to understand that threat actors will continue to evolve attacks using AI/ML. Experts suggest that to counter such attacks, organizations, institutions, and government agencies will have to continually improve defense strategies and capabilities, including AI/ML deployment.

AI/ML can be used as a weapon by attackers for industrial espionage and/or sabotage, and stopping incursions will require a broad range of cyberattack prevention and detection tools, including AI/ML functionality for anomaly detection. But in general, hackers are almost always one step ahead.

According to Register, “the recurring cycle is: 1) hackers come out with a new tool or technology that lets them attack systems or evade detection more effectively; 2) those attacks cause enough economic damage that the industry responds and develops effective countermeasures; 3) the no-longer-new hacker tools are still employed effectively, but against targets that haven’t bothered to update their defenses; 4) hackers develop new offensive tools that are effective against the defensive techniques of high-value targets, and the cycle starts anew.”

Related Reading
Securing Chip Manufacturing Against Growing Cyber Threats
Suppliers are the number one risk, but reducing attacks requires industry-wide collaboration.
Data Center Security Issues Widen
The number and breadth of hardware targets is increasing, but older attack vectors are not going away. Hackers are becoming more sophisticated, and they have a big advantage.

The post Using AI/ML To Combat Cyberattacks appeared first on Semiconductor Engineering.

K-Fault Resistant Partitioning To Assess Redundancy-Based HW Countermeasures To Fault Injections

A technical paper titled “Fault-Resistant Partitioning of Secure CPUs for System Co-Verification against Faults” was published by researchers at Université Paris-Saclay, Graz University of Technology, lowRISC, University Grenoble Alpes, Thales, and Sorbonne University.

Abstract:

“To assess the robustness of CPU-based systems against fault injection attacks, it is necessary to analyze the consequences of the fault propagation resulting from the intricate interaction between the software and the processor. However, current formal methodologies that combine both hardware and software aspects experience scalability issues, primarily due to the use of bounded verification techniques. This work formalizes the notion of k-fault resistant partitioning as an inductive solution to this fault propagation problem when assessing redundancy-based hardware countermeasures to fault injections. Proven security guarantees can then reduce the remaining hardware attack surface to consider in a combined analysis with the software, enabling a full co-verification methodology. As a result, we formally verify the robustness of the hardware lockstep countermeasure of the OpenTitan secure element to single bit-flip injections. Besides that, we demonstrate that previously intractable problems, such as analyzing the robustness of OpenTitan running a secure boot process, can now be solved by a co-verification methodology that leverages a k-fault resistant partitioning. We also report a potential exploitation of the register file vulnerability in two other software use cases. Finally, we provide a security fix for the register file, verify its robustness, and integrate it into the OpenTitan project.”
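The lockstep countermeasure the paper verifies can be illustrated with a toy model: two redundant copies execute every step, and a checker compares their outputs, so a single injected bit-flip produces a detectable mismatch. The Python below sketches only this general redundancy principle under the single-fault assumption; it is not the OpenTitan design or the paper’s formal k-fault resistant partitioning.

# Hypothetical stand-in for one step of a CPU's state-update function.
def step(state: int, instr: int) -> int:
    return (state * 33 + instr) & 0xFFFFFFFF

def lockstep_step(state_a: int, state_b: int, instr: int, fault_mask: int = 0):
    out_a = step(state_a, instr) ^ fault_mask  # fault injected into core A only
    out_b = step(state_b, instr)               # redundant core B is fault-free
    if out_a != out_b:
        raise RuntimeError("lockstep mismatch: fault detected, raise an alert")
    return out_a, out_b

lockstep_step(0, 0, 42)  # fault-free execution passes the checker
try:
    lockstep_step(0, 0, 42, fault_mask=1 << 7)  # single bit-flip in core A
except RuntimeError as err:
    print(err)  # the comparison catches the flip before it propagates

Note that the comparison only detects faults that break the two copies’ agreement; an attacker who can flip the same bit in both copies defeats it, which is one reason analyses of such countermeasures are parameterized by the number of faults k.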

Find the technical paper here. Published 2024 (preprint).

Tollec, Simon, Vedad Hadžić, Pascal Nasahl, Mihail Asavoae, Roderick Bloem, Damien Couroussé, Karine Heydemann, Mathieu Jan, and Stefan Mangard. “Fault-Resistant Partitioning of Secure CPUs for System Co-Verification against Faults.” Cryptology ePrint Archive (2024).

Related Reading
RISC-V Micro-Architectural Verification
Verifying a processor is much more than making sure the instructions work, but the industry is building from a limited knowledge base and few dedicated tools.
New Concepts Required For Security Verification
Why it’s so difficult to ensure that hardware works correctly and is capable of detecting vulnerabilities that may show up in the field.

The post K-Fault Resistant Partitioning To Assess Redundancy-Based HW Countermeasures To Fault Injections appeared first on Semiconductor Engineering.

  • ✇Semiconductor Engineering
  • Microarchitecture Vulnerabilities: Uncovering The Root Cause Weaknesses, by Jason Oberg
    In early 2018, the tech industry was shocked by the discovery of hardware microarchitecture vulnerabilities that bypassed decades of work put into software and application security. Meltdown and Spectre exploited performance features in modern application processors to leak sensitive information about victim programs to an adversary. This leakage occurs through the hardware itself, meaning that malicious software can extract secret information from users even if software protections are in place
     

Microarchitecture Vulnerabilities: Uncovering The Root Cause Weaknesses

7 March 2024 at 09:05

In early 2018, the tech industry was shocked by the discovery of hardware microarchitecture vulnerabilities that bypassed decades of work put into software and application security. Meltdown and Spectre exploited performance features in modern application processors to leak sensitive information about victim programs to an adversary. This leakage occurs through the hardware itself, meaning that malicious software can extract secret information from users even if software protections are in place, because the leakage happens in hardware, below the view of software. Since these so-called transient execution vulnerabilities were first publicly disclosed, dozens of variants have been identified that all share a set of common root cause weaknesses, but the specifics of that commonality were not broadly understood by the security community.

In early 2020, Intel Corporation, MITRE, Cycuity, and others set out to establish a set of common weaknesses for hardware, enabling a more proactive approach to hardware security and reducing the risk of future hardware vulnerabilities. The initial set of weaknesses, in the form of Common Weakness Enumerations (CWEs), was broad and covered weaknesses beyond just transient execution vulnerabilities like Meltdown and Spectre. While this initial set of CWEs was extremely effective at covering the root causes across the entire hardware vulnerability landscape, precise and specific coverage of transient execution vulnerabilities was still lacking, primarily because of the sheer complexity, volume, and cleverness of each of these vulnerabilities.

In the fall of 2022, technical leads from AMD, Arm, Intel (special kudos to Intel for initiating and leading the effort), Cycuity, and Riscure came together to dig into the details of publicly disclosed transient execution vulnerabilities, understand their root causes, and distill them into a set of precise yet comprehensive root-cause weaknesses expressed as CWEs. The goal was to help the industry not only understand the root causes of these microarchitecture vulnerabilities, but also prevent future, as-yet-unknown vulnerabilities. The recent announcement of the four transient execution weaknesses is the result of this collaborative effort over the past year.

CWEs for microarchitecture vulnerabilities

To come up with these root-cause weaknesses, we researched every known publicly disclosed microarchitecture vulnerability (Common Vulnerabilities and Exposures [CVEs]) to understand their exact characteristics and root causes. The following common weaknesses emerged, each with a brief summary in layman’s terms from my perspective:

CWE-1421: Exposure of Sensitive Information in Shared Microarchitectural Structures during Transient Execution

  • Potentially leaky microarchitectural resources are shared with an adversary. For example, sharing a CPU cache between victim and attacker programs has been shown to result in timing side channels that can leak the victim’s secrets.
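A toy model makes the leak concrete: the victim’s secret-dependent access leaves a footprint in a shared cache, and the attacker reads that footprint back. Real attacks such as flush+reload infer the footprint from access latency; here cache residency is modeled directly as a set, which is purely illustrative.

SECRET = 7
shared_cache = set()   # which of 256 cache lines are currently "hot"

def victim():
    shared_cache.add(SECRET)   # access whose index depends on the secret

def attacker_probe() -> int:
    # In a real attack, a low-latency (cached) access reveals the hot line;
    # here, residency in the set plays the role of the timing difference.
    for line in range(256):
        if line in shared_cache:
            return line

victim()
print("recovered secret:", attacker_probe())  # prints 7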

CWE-1422: Exposure of Sensitive Information caused by Incorrect Data Forwarding during Transient Execution

  • The forwarding, or “flow,” of information within the microarchitecture can result in security violations. Often, events such as speculation or page faults will cause data to be incorrectly forwarded from one location of the processor to another, frequently to a leaky microarchitectural resource like the one described in CWE-1421.

CWE-1423: Exposure of Sensitive Information caused by Shared Microarchitectural Predictor State that Influences Transient Execution

  • An attacker is able to affect or “poison” a microarchitectural predictor used within the processor. For example, branch prediction is commonly used to speculatively fetch instructions based on the expected outcome of a branch in a program, increasing performance. If adversaries can influence the branch prediction itself, they can cause the victim to execute code in branches of their choosing.
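The poisoning mechanism can be sketched in a few lines: attacker and victim share predictor state, the attacker trains an entry, and the victim’s speculative path follows the poisoned hint. This is a deliberately simplified model of the shared-state problem, not of any particular processor’s predictor.

# Shared predictor state: branch address -> predicted "taken" hint.
predictor: dict[int, bool] = {}

def attacker_train(addr: int, taken: bool):
    predictor[addr] = taken   # attacker and victim share this table

def victim_speculate(addr: int) -> str:
    # The victim consults the shared predictor before the real branch
    # condition resolves, so a poisoned entry steers transient execution.
    if predictor.get(addr, False):
        return "transiently executed attacker-chosen gadget"
    return "executed the architecturally correct path"

attacker_train(0x400, True)     # poison the entry for the victim's branch
print(victim_speculate(0x400))  # victim transiently runs the gadget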

CWE-1420: Exposure of Sensitive Information during Transient Execution

  • A general transient execution weakness to use when none of the other weaknesses above quite fits.

Within each of the CWEs listed above, you can find details about observed examples, or vulnerabilities, that result from these weaknesses. Some vulnerabilities, Spectre-V1 for example, require the presence of CWE-1421, CWE-1422, and CWE-1423, while others, like Meltdown, require only CWE-1422 and CWE-1423.

Since detecting these weaknesses can be a daunting task, each of the CWEs outlines a set of detection methods. One detection method highlighted in every entry is information-flow analysis, which tracks how data moves through the microarchitecture to ensure it is handled securely (a minimal sketch follows the list below). Information flow can be used for each of the CWEs as follows:

  • CWE-1421: information flow analysis can be used to ensure that secrets never end up in a shared microarchitectural resource.
  • CWE-1422: ensure that secret information is never improperly forwarded within the microarchitecture.
  • CWE-1423: ensure that an attacker can never affect or modify the predictor state in a way that is observable by the victim. In other words, information from the attacker should not flow to the predictor if that information can affect the integrity of the predictor for the victim.
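Here is the minimal information-flow sketch promised above: values carry a taint label, taint propagates through every operation, and a policy check fails the moment secret data reaches a shared structure. Hardware information-flow tools apply this idea at the register-transfer level; the Python below demonstrates only the propagation rule.

from dataclasses import dataclass

@dataclass
class Tainted:
    value: int
    secret: bool = False   # taint label carried alongside the data

    def __add__(self, other: "Tainted") -> "Tainted":
        # Taint is sticky: if either operand is secret, the result is secret.
        return Tainted(self.value + other.value, self.secret or other.secret)

def write_to_shared_structure(v: Tainted):
    # Policy for CWE-1421: secret data must never reach a shared resource.
    assert not v.secret, "violation: secret flows to a shared structure"

key = Tainted(0x41, secret=True)
counter = Tainted(3)
write_to_shared_structure(counter)        # allowed: untainted data
write_to_shared_structure(key + counter)  # AssertionError: flow detected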

Our Radix products use information-flow analysis at their core, and we have already demonstrated Radix’s ability to detect Meltdown and Spectre. We look forward to continuing to work with the industry and our customers and partners to further advance the state of hardware security and reduce the risk of vulnerabilities being discovered in the future.

The post Microarchitecture Vulnerabilities: Uncovering The Root Cause Weaknesses appeared first on Semiconductor Engineering.

  • ✇Semiconductor Engineering
  • Broad Impact From Accelerating Tech CyclesEd Sperling
    Experts at the Table: Semiconductor Engineering sat down to discuss the impact of leading edge technologies such as generative AI in data centers, AR/VR, and security architectures for connected devices, with Michael Kurniawan, business strategy manager at Accenture; Kaushal Vora, senior director and head of business acceleration and ecosystem at Renesas Electronics; Paul Karazuba, vice president of marketing at Expedera; and Chowdary Yanamadala, technology strategist at Arm. What follows are ex
     

Broad Impact From Accelerating Tech Cycles

21 February 2024 at 09:01

Experts at the Table: Semiconductor Engineering sat down to discuss the impact of leading edge technologies such as generative AI in data centers, AR/VR, and security architectures for connected devices, with Michael Kurniawan, business strategy manager at Accenture; Kaushal Vora, senior director and head of business acceleration and ecosystem at Renesas Electronics; Paul Karazuba, vice president of marketing at Expedera; and Chowdary Yanamadala, technology strategist at Arm. What follows are excerpts of that conversation. Panelists were chosen by GSA’s EMTECH Interest Group. To view part one of this discussion, click here.


L-R: Accenture’s Kurniawan; Renesas’ Vora; Expedera’s Karazuba; Arm’s Yanamadala.

SE: In the past, a lot of data center applications were for things like enterprise resource planning (ERP), and those were 10- or 15-year cycles. Cycles now are 1 or 2 years at most. With ChatGPT, that’s about six months. How do companies plan for this today?

Kurniawan: In the past, businesses were very focused on just the technology. But technology is everywhere today. ERP is there to support the business initiatives, and there is a very intimate relationship between technology and business at this point. So virtually all businesses are technology businesses. We advise clients before implementing their technologies to think first about, ‘What are your business initiatives? What’s the business strategy? What’s the business imperative for where you want to go? What’s your vision?’ And then, once you understand that and get alignment from the leaders, you can think about the technology. You kind of jump back and forth, because those are really two sides of the same coin. You cannot separate them anymore. And your vision encompasses everything you want to achieve in the future while providing room for flexibility and testing out the technology plan you want to put in place to see how that supports your business vision. With every challenge comes opportunity. Our job as a consultant is really to be able to see what’s happening out there, continuously scanning the market, and trying to get ahead of the curve to advise clients.

Yanamadala: The rapid evolution of advanced technologies like generative AI can present challenges to data centers due to the short technology cycles and demanding workloads. Some of the key challenges with advanced workloads include fluctuating resource needs, because they can demand bursts of high compute. That means static resource allocation will be inefficient in handling these demands. Additionally, the growing demand for heterogeneous computing can present additional challenges in deploying a flexible compute infrastructure. Data centers are adding flexibility through adoption of containerization and virtualization. Adopting hardware-agnostic software frameworks like TensorFlow and PyTorch also can help facilitate switching between different computing architectures, as can the development of efficient hardware and specialized AI accelerators.
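A small PyTorch fragment illustrates the hardware-agnostic point: the same model code targets whichever backend the data center currently provides, so moving to a new accelerator is a one-line device change rather than a rewrite. This is a generic sketch, not tied to any specific deployment stack.

import torch

# Select whatever accelerator is present; the model code below is unchanged.
device = "cuda" if torch.cuda.is_available() else "cpu"

model = torch.nn.Linear(512, 128).to(device)   # same graph on GPU or CPU
x = torch.randn(32, 512, device=device)
y = model(x)                                   # dispatched to the active backend
print(y.shape, y.device)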

SE: A lot of technology advancements are incremental, but if you get enough of these incremental improvements, they can be combined in ways most people never imagined. We’ve seen systems shrink from mainframes to PCs to smartphones, and now computing is happening just about everywhere. Are we on the cusp of moving beyond the box we’ve been tethered to since the start of computing, particularly with AR/VR?

Vora: I find it fascinating that somebody could wear a pair of glasses, get immersed in that world, and get used to it. From a user experience perspective, it seems like an extreme shift. Although I do see some play in certain verticals, it’s not clear there will be mass consumerization or adoption of this technology.

Kurniawan: Right now, generative AI is getting a lot of attention. ChatGPT captured the attention of hundreds of millions of people in 60 days. That says something. You input a prompt and you get a response back. ChatGPT is super-intuitive. It’s a technology with potential for many killer use cases. AR/VR is promising technology with upside potential, but there’s still work that needs to be done to tie that technology to the use case. Virtual reality gaming is number one, for sure. But the path to leveraging that technology to enhance how we operate other stuff still needs more clarity. That said, we recently published a white paper talking about the build-outs around the globe, driven by the combination of public incentives and private investments. Everywhere around the world, everybody wants to build up their manufacturing facilities. We conducted interviews with semiconductor experts, and touched on AR/VR when we asked what they did during COVID when the whole world shut down. Is AR/VR like a hammer looking for nails? The overall response we got was pretty positive. They said that AR/VR probably will be tremendously useful at some future date. But they like where the technologies are going. For example, there are constraints like heat dissipation and the size of the headset, but the belief is the technology will evolve. As it matures to become more user-centric, you might think about using an AR/VR device to control the operations of the equipment in a fab. But there is work needed from a value perspective — connectivity and processing, for example.

Karazuba: AR/VR in the past has largely been a victim of its own hype cycle. There’s a lot of promises people have made. We’ve spent a little bit of time with AR/VR folks. There’s certainly an acknowledgement that whatever success the Apple AR/VR headset has will largely set the tone for the next half decade for what the AR/VR market is. These folks are not undeterred by that. Are we at a point today where you can walk around all day with mixed reality? No. With a home gaming system, being tied to the wall is probably a small price to pay for the constant AC power and the performance advantages that will provide. This is going to take some time. The value proposition is there, but the timing may not be right today. We saw this with the watch and wearables. Now, everybody has one of these. But it took five to seven years before it really took off.

Vora: We’ve worn watches for decades, so it’s not something new. It’s just that what we wear now is different. But with AR/VR, we’ve never done that before. How do you suddenly expect massive change like that?

Karazuba: But most of us are wearing eyeglasses. If you have a form factor that is a version of what we have now, where information is just simply overlaid on what we’re seeing, it’s not that far of a jump for mixed reality or augmented reality. However, with virtual reality, I find it hard to believe that people are going to walk into a conference room with a bunch of other people and put a headset on.

Yanamadala: We’ve seen devices and sensors deployed practically everywhere. Platforms that offer high-performance computing, along with secure, power-efficient hardware and connectivity are available today, and they will make this trend possible. But untethered or ambient consumer experiences in the mass market will have their challenges. We will need to invest in substantial infrastructure to enable technology to operate invisibly in the background. So while consumer-facing technology deployments increasingly become untethered, the compute and connectivity infrastructure will still require connections for power and bandwidth.

SE: People have been sounding the alarm for hardware security for years, but with limited success. What’s changed today is that we have many more connected devices and more valuable data. Is the chip industry starting to take this seriously? Or is the problem now so immense and pervasive that anything we do is just going to be a drop in the bucket?

Yanamadala: Security is fundamental from the chip level, and five years ago we saw an opportunity to proactively improve the quality of chip security. IoT was in its early stages, and each chip vendor had varied and fragmented approaches to security. They also rarely approached an independent evaluation lab to check the robustness of their security implementation. But with increasing connectivity and data becoming more valuable, hackers were paying close attention, and governments were considering what action to take to protect consumers. That’s why in 2019, we launched PSA Certified – to rally the ecosystem to be proactive with security best practices. It’s critically important that chip vendors, software platforms, OEMs, and CSPs can deploy and access standardized Root of Trust services. Security is complicated. You need the whole value chain to work together.

Vora: Security architectures, at least on the hardware side, have come a long way. We pretty much now have a semiconductor TPM-like [Trusted Platform Module] capability, with security capabilities built into even small microcontrollers. They have cryptographic engines, randomizers, and all sorts of security elements built in. The fundamental challenge with security is that just putting some security features on a chip and providing all the technology pieces won’t solve the security challenge. Security is more of a system challenge and a policy challenge. In many cases, people have to think about it within the context of the entire network. And then, it’s only as strong as the weakest link in the network. That piece of security is going to grow in complexity as we start seeing more complex use cases with AI coming into play with IoT. On the other side, though, as data handling of AI moves closer to the edge, we will start seeing more local inferencing and local data being worked on without the need to mindlessly transport data across layers of networks and across the cloud. We’re going to see some lower risk and improvements from a data-in-flight perspective, because of a lot more localization of intelligence and compute happening at different layers of the edge. As we start moving more to the edge, AI starts getting more of a hold there. But as a whole, security will remain a challenge. The fundamental challenges with security have not changed. It’s just the context and the systems in which we will have to apply them are different.

Karazuba: The semiconductor industry is finally starting to understand the true nature of what security breaches could mean with the type of data we’re handling. Security is a day zero responsibility of anyone building a product, whether that product is a chip or a device, and security responsibilities proliferate across the entire lifecycle of any device, from the person who is architecting the chip, to the person designing the smartphone, to the carrier. I would argue that carrier responsibilities for security go as far as stopping those robocalls that we all get, and the spam calls and phishing calls. The internet service providers have a responsibility to stop the phishing e-mails. That’s all part of security. Obviously, with banks and financial institutions, their security is generally pretty good. But it stretches the entire way, and in the security world, the weakest link is always the security profile of your device. We’re getting better. We always could be better. But I am more encouraged now than I’ve been at any point since I really started looking at security of devices. I’m more encouraged by the way chips are being designed, deployed, manufactured, and delivered to customers.

Kurniawan: There is some certification for IoT devices before they are sent to market, to make sure they adhere to some security standard. But two key words I mentioned before, collaboration and flexibility, are applicable to security as well. Collaboration involves where you see the rest of the system, including other components in the technology set, evolving in the future. And flexibility is required because security is a moving target. It needs to evolve, because as you upgrade your system and your software, vulnerabilities will move as well. You need flexibility and security-minded thinking infused into your chip design.

Related Reading
Preparing For An AI-Driven Future In Chips (part 1 of above roundtable)
Designs need to be flexible enough to handle an onslaught of continuous and rapid changes, but secure enough to protect data.

The post Broad Impact From Accelerating Tech Cycles appeared first on Semiconductor Engineering.
