A Practical Approach To Inline Memory Encryption And Confidential Computing For Enhanced Data Security

August 1, 2024 at 09:12

In today’s technology-driven landscape, in which reducing TCO is top of mind, robust data protection is not merely an option but a necessity. As data, both personal and business-specific, is continuously exchanged, stored, and moved across various platforms and devices, the demand for secure ways to aggregate data and establish trust is escalating. Traditional strategies of protecting data at rest and data in motion need to be complemented with protection of data in use. This is where inline memory encryption (IME) becomes critical: it shields data in use, underpinning confidential computing by keeping data encrypted even while it is being processed. This blog post will guide you through a practical approach to inline memory encryption and confidential computing for enhanced data security.

Understanding inline memory encryption and confidential computing

Inline memory encryption offers a smart answer to data security concerns, encrypting data before it is stored and decrypting it only for computation. This is the core concept of confidential computing. As modern applications on personal devices increasingly leverage cloud systems and services, data privacy and security become critical. Confidential computing and zero trust have been proposed as solutions, protecting data in use through hardware-based trusted execution environments. This strategy reduces the trust that must be placed in any compute environment and shrinks the attack surface available to hackers.

The security model of confidential computing demands protection of data in use, in addition to traditional protection of data at rest and data in motion. This usually pertains to data stored in off-chip memory such as DDR, whether volatile or non-volatile. Memory encryption requires data to be encrypted before it is written to memory, implemented in either inline or look-aside form. The performance demands of modern memories, which require the encryption to keep pace with the memory path, have given rise to the term inline memory encryption.

There are multiple off-chip memory technologies, and the performance requirements of modern NVM and DDR memories make inline encryption the most sensible solution. The XTS algorithm, using the AES or SM4 cipher, and the GCM algorithm are the most commonly used cryptographic algorithms for memory encryption. While the XTS algorithm encrypts data solely for confidentiality, the GCM algorithm provides both data encryption and data authentication, but requires extra memory space for metadata storage.
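
To make that trade-off concrete, here is a minimal sketch using the pyca/cryptography package. The key sizes and the 16-byte GCM tag are real properties of the algorithms; the cache-line size, tweak value, and data are assumptions for illustration.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

line = os.urandom(64)  # one 64-byte cache line (assumed granularity)

# AES-XTS: ciphertext is exactly the size of the plaintext -- confidentiality
# only, no extra memory needed for metadata.
xts_key = os.urandom(64)                      # AES-256-XTS uses a 512-bit key
tweak = (0x8000_0040).to_bytes(16, "little")  # e.g., derived from the address
enc = Cipher(algorithms.AES(xts_key), modes.XTS(tweak)).encryptor()
xts_ct = enc.update(line) + enc.finalize()
assert len(xts_ct) == len(line)

# AES-GCM: ciphertext plus a 16-byte authentication tag, so authenticated
# memory needs extra storage for the metadata.
gcm = AESGCM(os.urandom(32))
gcm_ct = gcm.encrypt(os.urandom(12), line, None)
assert len(gcm_ct) == len(line) + 16
```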

Inline memory encryption is utilized in many systems. When evaluating inline cipher performance for DDR, the bandwidth requirements of the specific memory technology must be considered. For instance, LPDDR5 typically necessitates a data path bandwidth of 25 gigabytes per second. An AES operation involves 10 to 14 encryption rounds, all of which must operate at the memory’s required bandwidth. This is achievable with correct pipelining in the crypto engine. Other considerations include minimizing read path latency, support for narrow bursts, memory-specific features such as the number of outstanding transactions, data-hazard protection between READ and WRITE paths, and so on. Furthermore, side-channel attack protections and data path integrity, a critical factor for robustness in advanced technology nodes, are additional concerns to be addressed without prohibitive PPA overhead.
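
A quick back-of-the-envelope check of that pipelining argument, using the 25 GB/s figure from the text and an assumed 1 GHz crypto-engine clock:

```python
bandwidth = 25e9  # LPDDR5 data-path bandwidth in bytes/s (figure from the text)
block = 16        # AES block size in bytes
clock = 1.0e9     # assumed crypto-engine clock in Hz

blocks_per_second = bandwidth / block  # ~1.56e9 blocks/s to keep up
# A fully pipelined engine with one AES round per stage retires one block per
# cycle once the pipeline is full; the 10-14 rounds add latency, not a
# throughput penalty.
pipelines_needed = blocks_per_second / clock
print(f"{blocks_per_second:.2e} blocks/s -> {pipelines_needed:.2f} "
      f"fully pipelined engines at 1 GHz (i.e., two in practice)")
```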

Ensuring data security in AI and computational storage

The importance of data security is not limited to traditional computing areas but also extends to AI inference and training, which heavily rely on user data. Given the privacy issues and regulatory demands tied to user data, it’s essential to guarantee the encryption of this data, preventing unauthorized access. This necessitates the application of a trusted execution environment and data encryption whenever it’s transported outside this environment. With the advent of new algorithms that call for data sharing and model refinement among multiple parties, it’s crucial to maintain data privacy by implementing appropriate encryption algorithms.

Equally important is the rapidly evolving field of computational storage. The advent of new applications and increasing demands are pushing the boundaries of conventional storage architecture. Solutions such as flexible and composable storage, software-defined storage, and memory duplication and compression algorithms are being introduced to tackle these challenges. Yet, these solutions introduce security vulnerabilities as storage devices operate on raw disk data. To counter this, storage accelerators must be equipped with encryption and decryption capabilities and should manage operations at the storage nodes.

As our computing landscape continues to evolve, we need to address the escalating demand for robust data protection. Inline memory encryption emerges as a key solution, offering data in use protection for confidential computing, securing both personal and business data.

Rambus Inline Memory Encryption IP provides scalable, versatile, and high-performance inline memory encryption solutions that cater to a wide range of application requirements. The ICE-IP-338 is a FIPS-certified inline cipher engine supporting the AES and SM4 ciphers in XTS and GCM modes of operation. Building on the ICE-IP-338, the ICE-IP-339 adds essential AXI4 operation support, simplifying system integration for XTS operation and delivering confidentiality protection. The IME-IP-340 extends the basic AXI4 support with narrow data access granularity as well as AES-GCM operation, delivering both confidentiality and authentication. Finally, the most recent offering, the Rambus IME-IP-341, provides memory encryption with AES-XTS while supporting the Arm v9 architecture specifications.

For more information, check out my recent IME webinar now available to watch on-demand.

The post A Practical Approach To Inline Memory Encryption And Confidential Computing For Enhanced Data Security appeared first on Semiconductor Engineering.

The Importance Of Memory Encryption For Protecting Data In Use

By: Emma-Jane Crozier
June 6, 2024 at 09:06

As systems-on-chips (SoCs) become increasingly complex, security functions must grow accordingly to protect the semiconductor devices themselves and the sensitive information residing on or passing through them. While a Root of Trust security solution built into the SoCs can protect the chip and data resident therein (data at rest), many other threats exist which target interception, theft or tampering with the valuable information in off-chip memory (data in use).

Many isolation technologies exist for memory protection. However, with the discovery of the Meltdown and Spectre vulnerabilities in 2018, and attacks like rowhammer targeting DRAM, security architects have realized there are practical threats that can bypass these isolation technologies.

One of the techniques to prevent data being accessed across different guests/domains/zones/realms is memory encryption. With memory encryption in place, even if any of the isolation techniques have been compromised, the data being accessed is still protected by cryptography. To ensure the confidentiality of data, each user has their own protected key. Memory encryption can also prevent physical attacks like hardware bus probing on the DRAM bus interface. It can also prevent tampering with control plane information like the MPU/MMU control bits in DRAM and prevent the unauthorized movement of protected data within the DRAM.

Memory encryption technology must ensure confidentiality of the data. If a “lightweight” algorithm is used, there is no guarantee the data will be protected from mathematical cryptanalysis, given that the amount of data involved in memory encryption is typically huge. Well-known, proven algorithms are the NIST-approved AES and the OSCCA-approved SM4.

The key length is also an important aspect defining the security strength. AES offers 128-, 192-, or 256-bit keys, and SM4 offers 128-bit keys. Advanced memory encryption technologies also involve integrity and protocol-level anti-replay techniques for high-end use cases. Proven hash algorithms like SHA-2, SHA-3, SM3, or (AES-)GHASH can be used for integrity protection purposes.
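
As a minimal illustration of hash-based integrity protection, the sketch below computes a SHA-256 digest over a memory page on write and detects a flipped bit on read. The page size and layout are assumptions; real designs store such digests as protected metadata alongside the data.

```python
import hashlib, os

page = bytearray(os.urandom(4096))      # assumed 4 KiB memory page
digest = hashlib.sha256(page).digest()  # stored as metadata at write time

page[100] ^= 0x01                       # an attacker flips a single bit

# On read, recomputing the digest exposes the tampering.
assert hashlib.sha256(page).digest() != digest
```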

Once one or more cipher algorithms are selected, a choice of secure modes of operation must be made. Block cipher algorithms must be used in specific modes to encrypt bulk data larger than a single 128-bit block.

XTS mode, which stands for “XEX (Xor-Encrypt-Xor) with tweak and CTS (Cipher Text Stealing)” mode, has been widely adopted for disk encryption. CTS is a clever technique that ensures the number of bytes in the encrypted payload is the same as the number of bytes in the plaintext payload. This is particularly important in storage, ensuring the encrypted payload fits in the same location as the unencrypted version would.

XTS/XEX uses two keys, one key for block encryption, and another key to process a “tweak.” The tweak ensures every block of memory is encrypted differently. Any changes in the plaintext result in a complete change of the ciphertext, preventing an attacker from obtaining any information about the plaintext.
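
A small demonstration of that tweak behavior, again with pyca/cryptography: the same 16-byte plaintext stored at two different (made-up) addresses yields different ciphertexts, so an attacker can neither spot duplicate contents nor relocate an encrypted block to another address.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(64)         # AES-256-XTS: two 256-bit keys in one buffer
block = b"sixteen byte blk"  # identical plaintext, two locations

def encrypt_at(address: int) -> bytes:
    tweak = address.to_bytes(16, "little")  # tweak derived from the address
    enc = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
    return enc.update(block) + enc.finalize()

# Same data, different locations, different ciphertext.
assert encrypt_at(0x1000) != encrypt_at(0x2000)
```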

While memory encryption is a critical aspect of security, there are many challenges to designing and implementing a secure memory encryption solution. Rambus is a leading provider of both memory and security technologies and understands the challenges from both the memory and security viewpoints. Rambus provides state-of-the-art Inline Memory Encryption (IME) IP that enables chip designers to build high-performance, secure, and scalable memory encryption solutions.

The post The Importance Of Memory Encryption For Protecting Data In Use appeared first on Semiconductor Engineering.

Apple Introduces Standalone 'Passwords' App

By: BeauHD
June 11, 2024 at 00:40
An anonymous reader quotes a report from MacRumors: iOS 18, iPadOS 18, and macOS Sequoia feature a new, dedicated Passwords app for faster access to important credentials. The Passwords app replaces iCloud Keychain, which is currently only accessible via a menu in Settings. Now, passwords are available directly via a standalone app for markedly quicker access, bringing it more in line with rival services. The Passwords app consolidates various credentials, including passwords, passkeys, and Wi-Fi passwords, into a single, easily accessible location. Users can filter and sort their accounts based on various criteria, such as recently created accounts, credential type, or membership in shared groups. Passwords is also compatible with Windows via the iCloud for Windows app, extending its utility to users who operate across different platforms. The developer beta versions of iOS 18, iPadOS 18, and macOS Sequoia are available today with official release to the public scheduled for the fall, providing an early look at the Passwords app.

Read more of this story at Slashdot.

SRAM Security Concerns Grow

By: Karen Heyman
May 9, 2024 at 09:08

SRAM security concerns are intensifying as a combination of new and existing techniques allow hackers to tap into data for longer periods of time after a device is powered down.

This is particularly alarming as the leading edge of design shifts from planar SoCs to heterogeneous systems in package, such as those used in AI or edge processing, where chiplets frequently have their own memory hierarchy. Until now, most cybersecurity concerns involving volatile memory have focused on DRAM, because it is often external and easier to attack. SRAM, in contrast, does not contain a component as obviously vulnerable as a heat-sensitive capacitor, and in the past it has been harder to pinpoint. But as SoCs are disaggregated and more features are added into devices, SRAM is becoming a much bigger security concern.

The attack scheme is well understood. Known as cold boot, it was first identified in 2008, and is essentially a variant of a side-channel attack. In a cold boot approach, an attacker dumps data from internal SRAM to an external device, and then restarts the system from the external device with some code modification. “Cold boot is primarily targeted at SRAM, with the two primary defenses being isolation and in-memory encryption,” said Vijay Seshadri, distinguished engineer at Cycuity.

Compared with network-based attacks, such as DRAM’s rowhammer, cold boot is relatively simple. It relies on physical proximity and a can of compressed air.

The vulnerability was first described by Edward Felten, director of Princeton University’s Center for Information Technology Policy, J. Alex Halderman, currently director of the Center for Computer Security & Society at the University of Michigan, and colleagues. The breakthrough in their research was based on the growing realization in the engineering research community that data does not vanish from memory the moment a device is turned off, which until then was a common assumption. Instead, data in both DRAM and SRAM has a brief “remanence.”[1]

Using a cold boot approach, data can be retrieved, especially if an attacker sprays the chip with compressed air, cooling it enough to slow the degradation of the data. As the researchers described their approach, “We obtained surface temperatures of approximately −50°C with a simple cooling technique — discharging inverted cans of ‘canned air’ duster spray directly onto the chips. At these temperatures, we typically found that fewer than 1% of bits decayed even after 10 minutes without power.”

Unfortunately, despite nearly 20 years of security research since the publication of the Halderman paper, the authors’ warning still holds true. “Though we discuss several strategies for mitigating these risks, we know of no simple remedy that would eliminate them.”

However unrealistic, there is one simple and obvious remedy to cold boot — never leave a device unattended. But given human behavior, it’s safer to assume that every device is vulnerable, from smart watches to servers, as well as automotive chips used for increasingly autonomous driving.

While the original research exclusively examined DRAM, within the last six years cold boot has proven to be one of the most serious vulnerabilities for SRAM. In 2018, researchers at Germany’s Technische Universität Darmstadt published a paper describing a cold boot attack method that is highly resistant to memory erasure techniques, and which can be used to manipulate the cryptographic keys produced by the SRAM physical unclonable function (PUF).

As with so many security issues, it’s been a cat-and-mouse game between remedies and counter-attacks. And because cold boot takes advantage of slowing down memory degradation, in 2022 Yang-Kyu Choi and colleagues at the Korea Advanced Institute of Science and Technology (KAIST) described a way to undo the slowdown with an ultra-fast data sanitization method that worked within 5 ns, using back bias to control the device parameters of CMOS.

Fig. 1: Asymmetric forward back-biasing scheme for permanent erasing. (a) All the data are reset to 1. (b) All the data are reset to 0. Whether all the data were reset to 1 or 0 is determined by the asymmetric forward back-biasing scheme. Source: KAIST/Creative Commons [2]

Their paper, as well as others, has inspired new approaches to combating cold boot attacks.

“To mitigate the risk of unauthorized access from unknown devices, main devices, or servers, check the authenticated code and unique identity of each accessing device,” said Jongsin Yun, memory technologist at Siemens EDA. “SRAM PUF is one of the ways to securely identify each device. SRAM is made of two inverters cross-coupled to each other. Although each inverter is designed to be the same device, normally one part of the inverter has a somewhat stronger NMOS than the other due to inherent random dopant fluctuation. During the initial power-on process, SRAM data will be either data 1 or 0, depending on which side has a stronger device. In other words, the initial data state of the SRAM array at the power on is decided by this unique random process variation and most of the bits maintain this property for life. One can use this unique pattern as a fingerprint of a device. The SRAM PUF data is reconstructed with other coded data to form a cryptographic key. SRAM PUF is a great way to anchor its secure data into hardware. Hackers may use a DFT circuit to access the memory. To avoid insecurely reading the SRAM information through DFT, the security-critical design makes DFT force delete the data as an initial process of TEST mode.”
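
A toy simulation of the SRAM PUF behavior Yun describes, with invented cell-bias and noise figures: each cell's power-up value follows its manufacturing bias most of the time, and majority-voting several power-ups yields a stable fingerprint (real designs add error-correcting helper data on top).

```python
import random

random.seed(1)
N = 256
# Fixed manufacturing bias per cell: which inverter is stronger.
bias = [random.random() < 0.5 for _ in range(N)]

def power_up(noise=0.05):
    # Each read follows the cell's bias, except for occasional noisy bits.
    return [b ^ (random.random() < noise) for b in bias]

# Enrollment: majority-vote nine power-ups into a stable fingerprint.
reads = [power_up() for _ in range(9)]
fingerprint = [sum(r[i] for r in reads) > 4 for i in range(N)]

# A later power-up matches the fingerprint on nearly all bits; the few
# mismatches are what the helper-data error correction must absorb.
mismatches = sum(f != b for f, b in zip(fingerprint, power_up()))
print(f"{mismatches} of {N} bits differ before error correction")
```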

However, there can be instances where data may be required to be kept in a non-volatile memory (NVM). “Data is considered insecure if the NVM is located outside of the device,” said Yun. “Therefore, secured data needs to be stored within the device with write protection. One-time programmable (OTP) memory or fuses are good storage options to prevent malicious attackers from tampering with the modified information. OTP memory and fuses are used to store cryptographic keys, authentication information, and other critical settings for operation within the device. It is useful for anti-rollback, which prevents hackers from exploiting old vulnerabilities that have been fixed in newer versions.”

Chiplet vulnerabilities
Chiplets also could present another vector for attack, due to their complexity and interconnections. “A chiplet has memory, so it’s going to be attacked,” said Cycuity’s Seshadri. “Chiplets, in general, are going to exacerbate the problem, rather than keeping it status quo, because you’re going to have one chiplet talking to another. Could an attack on one chiplet have a side effect on another? There need to be standards to address this. In fact, they’re coming into play already. A chiplet provider has to say, ‘Here’s what I’ve done for security. Here’s what needs to be done when interfacing with another chiplet.”

Yun notes there is a further physical vulnerability for those working with chiplets and SiPs. “When multiple chiplets are connected to form a SiP, we have to trust data coming from an external chip, which creates further complications. Verification of the chiplet’s authenticity becomes very important for SiPs, as there is a risk of malicious counterfeit chiplets being connected to the package for hacking purposes. Detection of such counterfeit chiplets is imperative.”

These precautions also apply when working with DRAM. In all situations, Seshadri said, thinking about security has to go beyond device-level protection. “The onus of protecting DRAM is not just on the DRAM designer or the memory designer,” he said. “It has to be secured by design principles when you are developing. In addition, you have to look at this holistically and do it at a system level. You must consider all the other things that communicate with DRAM or that are placed near DRAM. You must look at a holistic solution, all the way from software down to things like the memory controller and then finally, the DRAM itself.”

Encryption as a backup
Data itself must always be encrypted as a second layer of protection against known and novel attacks, so an organization’s assets will still be protected even if someone breaks in via cold boot or another method.

“The first and primary method of preventing a cold boot attack is limiting physical access to the systems, or physically modifying the systems case or hardware preventing an attacker’s access,” said Jim Montgomery, market development director, semiconductor at TXOne Networks. “The most effective programmatic defense against an attack is to ensure encryption of memory using either a hardware- or software-based approach. Utilizing memory encryption will ensure that regardless of trying to dump the memory, or physically removing the memory, the encryption keys will remain secure.”

Montgomery also points out that TXOne is working with the Semiconductor Manufacturing Cybersecurity Consortium (SMCC) to develop common criteria, based upon the SEMI E187 and E188 standards, to assist DMs and OEMs in implementing secure procedures for systems security and integrity, including controlling the physical environment.

What kind and how much encryption will depend on use cases, said Jun Kawaguchi, global marketing executive for Winbond. “Encryption strength for a traffic signal controller is going to be different from encryption for nuclear plants or medical devices, critical applications where you need much higher levels,” he said. “There are different strengths and costs to it.”

Another problem, in the post-quantum era, is that encryption itself may be vulnerable. To defend against those possibilities, researchers are developing post-quantum encryption schemes. One way to stay a step ahead is homomorphic encryption [HE], which will find a role in data sharing, since computations can be performed on encrypted data without first having to decrypt it.
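
To show what computing on encrypted data means in practice, here is a toy additively homomorphic scheme (textbook Paillier) with deliberately tiny primes. This is a sketch of the principle only; production systems use keys of 2048 bits or more and standardized FHE schemes, not this code.

```python
import math, random

p, q = 499, 547  # toy primes; real keys use ~2048-bit primes
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(x):  # Paillier's L function
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = 123, 456
# Multiplying ciphertexts adds the underlying plaintexts -- the party doing
# the multiplication never sees a, b, or the secret key.
assert decrypt((encrypt(a) * encrypt(b)) % n2) == (a + b) % n
```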

Homomorphic encryption could be in widespread use as soon as the next few years, according to Ronen Levy, senior manager for IBM’s Cloud Security & Privacy Technologies Department, and Omri Soceanu, AI Security Group manager at IBM. However, there are still challenges to be overcome.

“There are three main inhibitors for widespread adoption of homomorphic encryption — performance, consumability, and standardization,” according to Levy. “The main inhibitor, by far, is performance. Homomorphic encryption comes with some latency and storage overheads. FHE hardware acceleration will be critical to solving these issues, as well as algorithmic and cryptographic solutions, but without the necessary expertise it can be quite challenging.”

An additional issue is that most consumers of HE technology, such as data scientists and application developers, do not possess deep cryptographic skills, so HE solutions designed for cryptographers can be impractical. Some HE solutions require algorithmic and cryptographic expertise that inhibits adoption by those who lack these skills.

Finally, there is a lack of standardization. “Homomorphic encryption is in the process of being standardized,” said Soceanu. “But until it is fully standardized, large organizations may be hesitant to adopt a cryptographic solution that has not been approved by standardization bodies.”

Once these issues are resolved, they predicted widespread use as soon as the next few years. “Performance is already practical for a variety of use cases, and as hardware solutions for homomorphic encryption become a reality, more use cases would become practical,” said Levy. “Consumability is addressed by creating more solutions, making it easier and hopefully as frictionless as possible to move analytics to homomorphic encryption. Additionally, standardization efforts are already in progress.”

A new attack and an old problem
Unfortunately, security never will be as simple as making users more aware of their surroundings. Otherwise, cold boot could be completely eliminated as a threat. Instead, it’s essential to keep up with conference talks and the published literature, as graduate students keep probing SRAM for vulnerabilities, hopefully one step ahead of genuine attackers.

For example, SRAM-related cold boot attacks originally targeted discrete SRAM. The reason is that it’s far more complicated to attack on-chip SRAM, which is isolated from external probing and has minimal intrinsic capacitance. However, in 2022, Jubayer Mahmod, then a graduate student at Virginia Tech, and his advisor, associate professor Matthew Hicks, demonstrated what they dubbed “Volt Boot,” a new method that could penetrate on-chip SRAM. According to their paper, “Volt Boot leverages asymmetrical power states (e.g., on vs. off) to force SRAM state retention across power cycles, eliminating the need for traditional cold boot attack enablers, such as low-temperature or intrinsic data retention time…Unlike other forms of SRAM data retention attacks, Volt Boot retrieves data with 100% accuracy — without any complex post-processing.”

Conclusion
While scientists and engineers continue to identify vulnerabilities and develop security solutions, the decision about how much security to include in a design is an economic one. Cost vs. risk is a complex formula that depends on the end application, the impact of a breach, and the likelihood that an attack will occur.

“It’s like insurance,” said Kawaguchi. “Security engineers and people like us who are trying to promote security solutions get frustrated because, similar to insurance pitches, people respond with skepticism. ‘Why would I need it? That problem has never happened before.’ Engineers have a hard time convincing their managers to spend that extra dollar on the costs because of this ‘it-never-happened-before’ attitude. In the end, there are compromises. Yet ultimately, it’s going to cost manufacturers a lot of money when suddenly there’s a deluge of demands to fix this situation right away.”

References

  1. S. Skorobogatov, “Low temperature data remanence in static RAM”, Technical report UCAM-CL-TR-536, University of Cambridge Computer Laboratory, June 2002.
  2. Han, SJ., Han, JK., Yun, GJ. et al. Ultra-fast data sanitization of SRAM by back-biasing to resist a cold boot attack. Sci Rep 12, 35 (2022). https://doi.org/10.1038/s41598-021-03994-2

The post SRAM Security Concerns Grow appeared first on Semiconductor Engineering.

Securing AI In The Data Center

By: Bart Stevens
May 9, 2024 at 09:07

AI has permeated virtually every aspect of our digital lives, from personalized recommendations on streaming platforms to advanced medical diagnostics. Behind the scenes of this AI revolution lies the data center, which houses the hardware, software, and networking infrastructure necessary for training and deploying AI models. Securing AI in the data center relies on data confidentiality, integrity, and authenticity throughout the AI lifecycle, from data preprocessing to model training and inference deployment.

High-value datasets containing sensitive information, such as personal health records or financial transactions, must be shielded from unauthorized access. Robust encryption mechanisms, such as the Advanced Encryption Standard (AES), coupled with secure key management practices, form the foundation of data confidentiality in the data center. The encryption key used must be unique and used only within a secure environment. Encryption and decryption operations occur constantly and must be performed in a way that prevents key leakage. Should a compromise arise, it should be possible to renew the key securely and re-encrypt the data with the new key.
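
A minimal sketch of that renew-and-re-encrypt step, using AES-GCM from pyca/cryptography. The record and key handling are simplified placeholders: production systems generate and hold keys inside an HSM or Root of Trust, never in host memory.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

old_key, new_key = os.urandom(32), os.urandom(32)
record = b"patient-id:1234 result:negative"  # invented sensitive record

nonce = os.urandom(12)
stored = AESGCM(old_key).encrypt(nonce, record, None)

# Compromise suspected: recover the plaintext under the old key, re-encrypt
# under a fresh key with a fresh nonce, then destroy the old key material.
plaintext = AESGCM(old_key).decrypt(nonce, stored, None)
nonce = os.urandom(12)
stored = AESGCM(new_key).encrypt(nonce, plaintext, None)
del old_key
```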

The encryption key must also be securely stored in a location that unauthorized processes or individuals cannot access. Keys must be protected from attempts to read them from the device and from attempts to steal them using techniques such as side-channel attacks (SCA) or fault injection attacks (FIA). The multi-tenancy of modern data centers calls for robust SCA protection of key data.

Hardware-level security plays a pivotal role in safeguarding AI within the data center, offering built-in protections against a wide range of threats. Trusted Platform Modules (TPMs), secure enclaves, and Hardware Security Modules (HSMs) provide secure storage and processing environments for sensitive data and cryptographic keys, shielding them from unauthorized access or tampering. By leveraging hardware-based security features, organizations can enhance the resilience of their AI infrastructure and mitigate the risk of attacks targeting software vulnerabilities.

Ideally, secure cryptographic processing is handled by a Root of Trust core. The AI service provider manages the Root of Trust firmware, but it can also load secure applications that customers write to implement their own cryptographic key management and storage. The Root of Trust can be integrated in the host CPU that orchestrates the AI operations, decrypting the AI model and its parameters before they are fed to AI or network accelerators (GPUs or NPUs). It can also be integrated directly with the GPUs and NPUs to perform encryption and decryption at that level. These GPUs and NPUs may also choose to store AI workloads and inference models in encrypted form in their local memory banks and decrypt the data on the fly when access is required. Dedicated on-the-fly, low-latency inline memory decryption engines based on the AES-XTS algorithm can keep up with the memory bandwidth, ensuring that the process is not slowed down.

AI training workloads are often distributed among dozens of devices connected via PCIe or high-speed networking technology such as 800G Ethernet. An efficient confidentiality and integrity protocol such as MACsec using the AES-GCM algorithm can protect the data in motion over high-speed Ethernet links. AES-GCM engines integrated with the server SoC and the PCIe acceleration boards ensure that traffic is authenticated and optionally encrypted.
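
The sketch below shows the AEAD pattern MACsec builds on: AES-GCM with the frame header as associated data. The header and payload bytes are invented, and this illustrates only the authenticated-and-optionally-encrypted idea, not the actual MACsec frame format.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM(os.urandom(32))
header = b"assumed-sectag-bytes"  # authenticated but sent in the clear
payload = b"gradient shard 17"    # invented AI training traffic

# Confidential + authenticated: payload encrypted, header covered by the tag.
frame = key.encrypt(os.urandom(12), payload, header)

# Integrity-only: nothing encrypted, the whole frame carried as associated
# data, and the receiver still detects tampering via the 16-byte tag.
tag = key.encrypt(os.urandom(12), b"", header + payload)
assert len(tag) == 16
```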

Rambus offers a broad portfolio of security IP covering the key security elements needed to protect AI in the data center. Rambus Root of Trust IP cores ensure a secure boot protocol that protects the integrity of its firmware. This can be combined with Rambus inline memory encryption engines, as well as dedicated solutions for MACsec up to 800G.

The post Securing AI In The Data Center appeared first on Semiconductor Engineering.

The Government Fears This Privacy Tool

By: Zach Weissmueller
May 9, 2024 at 16:45
Samourai Wallet logo in crosshairs | Illustration: Lex Villena

The Department of Justice indicted the creators of an application that helps people spend their bitcoins anonymously. They're accused of "conspiracy to commit money laundering." Why "conspiracy to commit" as opposed to just "money laundering"?

Because they didn't hold anyone else's money or do anything illegal with it. They provided a privacy tool that may have enabled other people to do illegal things with their bitcoin. But that's not a crime, just as selling someone a kitchen knife isn't a crime. The case against the creators of Samourai Wallet is an assault on our civil liberties and First Amendment rights.

What this tool does is offer what's known as a "coinjoin," a method for anonymizing bitcoin transactions by mixing them with other transactions, as the project's founder, Keonne Rodriguez, explained to Reason in 2022: 

"I think the best analogy for it is like smelting gold," he said. "You take your Bitcoin, you add it into [the conjoin protocol] Whirlpool, and Whirlpool smelts it into new pieces that are not associated to the original piece."

Smelting bars of gold would make it harder for the government to track. But if someone eventually uses a piece of that gold for an illegal purchase, should the creator of the smelting furnace go to prison? This is what the government is arguing. 

Cash is the payment technology used most by criminals, but it also happens to be essential for preserving the financial privacy of law-abiding citizens, as Human Rights Foundation chief strategy officer Alex Gladstein told Reason:

"The ATM model, it gives people the option to have freedom money," says Gladstein. "Yes, the government will know all the ins and outs of what flows are coming in and out, but they won't know what you do with it when you leave. And that allows us to preserve the privacy of cash, which I think is essential for a democratic society." 

The government's decision to indict Rodriguez and his partner William Lonergan Hill is also an attack on free speech because all they did was write open-source code and make it widely available. 

"It is an issue of a chilling effect on free speech," attorney Jerry Brito, who heads up the cryptocurrency nonprofit Coin Center, told Reason after the U.S. Treasury went after the creators of another piece of anonymizing software. "So, basically, anybody who is in any way associated with this tool…a neutral tool that can be used for good or for ill, these people are now being basically deplatformed."

Are we willing to trade away our constitutional rights for the promise of security? For many in power, there seems to be no limit to what they want us to trade away.

In the '90s, the FBI tried to ban online encryption because criminals and terrorists might use it to have secret conversations. Had they succeeded, there would be no internet privacy. E-commerce, which relies on securely sending credit card information, might never have existed.

Today, Elizabeth Warren mobilizes her "anti-crypto army" to take down bitcoin by exaggerating its utility to Hamas. The Biden administration tried to permanently record all transactions over $600, and Warren hopes to implement a Central Bank Digital Currency, which would allow the government near-total surveillance of our financial lives.  

Remember when the Canadian government ordered banks to freeze money headed to the trucker protests? Central Bank Digital Currencies would make such efforts far easier.

"We come from first principles here in the global struggle for human rights," says Gladstein. "The most important thing is that it's confiscation resistant and censorship resistant and parallel, and can be done outside of the government's control." 

The most important thing about bitcoin, and money like it, isn't its price. It's the check it places on the government's ability to devalue, censor, and surveil our money. Creators of open-source tools like Samourai Wallet should be celebrated, not threatened with a quarter-century in federal prison.

 

Music Credits: "Intercept," by BXBRDVJA via Artlist; "You Need It,' by Moon via Artlist. Photo Credits: Graeme Sloan/Sipa USA/Newscom; Omar Ashtawy/APAImages / Polaris/Newscom; Paul Weaver/Sipa USA/Newscom; Envato Elements; Pexels; Emin Dzhafarov/Kommersant Photo / Polaris/Newscom; Anonymous / Universal Images Group/Newscom.

The post The Government Fears This Privacy Tool appeared first on Reason.com.


Appeals Court Rules That Cops Can Physically Make You Unlock Your Phone

19 April 2024 at 18:50
By Joe Lancaster
Woman holds a smartphone open to a screen that asks for her fingerprint authentication. | Prostockstudio | Dreamstime.com

As we keep more and more personal data on our phones, iPhone and Android devices now have some of the most advanced encryption technology in existence to keep that information safe from prying eyes. The easiest way around that, of course, is for someone to gain access to your phone.

This week, a federal court decided that police officers can make you unlock your phone, even by physically forcing you to press your thumb against it.

In November 2021, Jeremy Payne was pulled over by two California Highway Patrol (CHP) officers over his car's window tinting. When asked, Payne admitted that he was on parole, which the officers confirmed. After finding Payne's cellphone in the car, officers unlocked it by forcibly pressing his thumb against it as he sat handcuffed. (The officers claimed in their arrest report that Payne "reluctantly unlocked the cell phone" when asked, which Payne disputed; the government later accepted in court "that defendant's thumbprint was compelled.")

The officers searched through Payne's camera roll and found a video taken the same day, which appeared to show "several bags of blue pills (suspected to be fentanyl)." After checking the phone's map and finding what they suspected to be a home address, the officers drove there and used Payne's keys to enter and search the residence. Inside, they found and seized more than 800 pills.

Payne was indicted for possession with intent to distribute fentanyl and cocaine.

In a motion to suppress, Payne's attorneys argued that by forcing him to unlock his phone, the officers "compelled a testimonial communication," violating both the Fourth Amendment's protection against unreasonable search and seizure and the Fifth Amendment's guarantee against self-incrimination. Even though the provisions of his parole required him to surrender any electronic devices and passcodes, "failure to comply could result in 'arrest pending further investigation' or confiscation of the device pending investigation," not the use of force to make him open the phone.

The district court denied the motion to suppress, and Payne pleaded guilty. In November 2022, he was sentenced to 12 years in prison. Notably, Payne had only served three years for the crime for which he was on parole—assault with a deadly weapon on a peace officer.

Payne appealed the denial of the motion to suppress. This week, in an opinion authored by Judge Richard Tallman, the U.S. Court of Appeals for the 9th Circuit ruled against Payne.

Searches "incident to arrest" are an accepted part of Fourth Amendment precedent. Further, Tallman wrote that as a parolee, Payne has "a significantly diminished expectation of privacy," and even though the conditions of his parole did not require him to "provide a biometric identifier," the distinction was insufficient to support throwing out the search altogether.

But Tallman went a step further in the Fifth Amendment analysis: "We hold that the compelled use of Payne's thumb to unlock his phone (which he had already identified for the officers) required no cognitive exertion, placing it firmly in the same category as a blood draw or fingerprint taken at booking," he wrote. "The act itself merely provided CHP with access to a source of potential information."

From a practical standpoint, this is chilling. First of all, the Supreme Court ruled in 2016 that police needed a warrant before drawing a suspect's blood.

And one can argue that fingerprinting a suspect as they're arrested is part and parcel with establishing their identity. Nearly half of U.S. states require people to identify themselves to police if asked.

But forcibly gaining access to someone's phone provides more than just their identity—it's a window into their entire lives. Even cursory access to someone's phone can turn up travel history, banking information, and call and text logs—a treasure trove of potentially incriminating information, all of which would otherwise require a warrant.

When they drafted the Fourth Amendment, the Founders drew on the history of "writs of assistance," general warrants used by British authorities in the American colonies that allowed government agents to enter homes at will and look for anything disallowed. As a result, the Fourth Amendment requires search warrants based on probable cause and signed by a judge.

Tallman does note the peculiar circumstances of the case: "Our opinion should not be read to extend to all instances where a biometric is used to unlock an electronic device." But, he adds, "the outcome…may have been different had [the officer] required Payne to independently select the finger that he placed on the phone" instead of forcibly mashing Payne's thumb into it himself.

The post Appeals Court Rules That Cops Can Physically Make You Unlock Your Phone appeared first on Reason.com.


Once Again, Ron Wyden Had To Stop Bad “Protect The Children” Internet Bills From Moving Forward

7 March 2024 at 22:36
By Mike Masnick

Senator Ron Wyden is a one-man defense against horrible bills moving forward in the Senate. Last month, he stopped Josh Hawley from moving a very problematic STOP CSAM bill forward, and now he's had to do it again.

A (bipartisan) group of senators traipsed to the Senate floor Wednesday evening. They tried to skip the line and quickly move some bad bills forward by asking for unanimous consent. Unless someone’s there to object, it effectively moves the bill forward, ending committee debate about it. Traditionally, this process is used for moving non-controversial bills, but lately it’s been used to grandstand about stupid bills.

Senator Lindsey Graham announced his intention to pull this kind of stunt on bills that he pretends are about "protecting the children" but which do no such thing in reality. Instead of it being just him, he rounded up a bunch of senators, and they all pulled out the usual moral panic lines about two terrible bills: EARN IT and STOP CSAM. Both bills are framed to sound like good ideas about protecting children, but the devil is very much in the details, as both bills undermine end-to-end encryption while assuming that if you just put liability on websites, they'll magically make child predators disappear.

And while both bills pretend not to attack encryption — and include some language about how they’re not intended to do so — both of them leave open the possibility that the use of end-to-end encryption will be used as evidence against websites for bad things done on those websites.

But, of course, as is the standard for the group of grandstanding senators, they present these bills as (1) perfect and (2) necessary to “protect the children.” The problem is that the bills are actually (1) ridiculously problematic and (2) will actually help bad people online in making end-to-end encryption a liability.

The bit of political theater kicked off with Graham having Senators Grassley, Cornyn, Durbin, Klobuchar, and Hawley talk on and on about the poor kids online. Notably, none of them really talked about how their bills worked (because that would reveal how the bills don’t really do what they pretend they do). Durbin whined about Section 230, misleadingly and mistakenly blaming it for the fact that bad people exist. Hawley did the thing that he loves doing, in which he does his mock “I’m a big bad Senator taking on those evil tech companies” schtick, while flat out lying about reality.

But Graham closed it out with the most misleading bit of all:

In 2024, here’s the state of play: the largest companies in America — social media outlets that make hundreds of billions of dollars a year — you can’t sue if they do damage to your family by using their product because of Section 230

This is a lie. It's a flat out lie, and Senator Graham and his staffers know this. All Section 230 says is that if there is content on these sites that violates the law, the liability falls on whoever created the content. If the features of the site itself "do damage," then you can absolutely sue the company. But no one is actually complaining about the features. They're complaining about content. And the liability for the content has to go to whoever created it.

The problem here is that Graham and all the other senators want to hold companies liable for the speech of users. And that is a very, very bad idea.

Now these platforms enrich our lives, but they destroy our lives.

These platforms are being used to bully children to death.

They’re being used to take sexual images and voluntarily and voluntarily obtain and sending them to the entire world. And there’s not a damn thing you can do about it. We had a lady come before the committee, a mother saying that her daughter was on a social media site that had an anti-bullying provisions. They complained three times about what was happening to her daughter. She killed herself. They went to court. They got kicked out by section 230.

I don’t know the details of this particular case, but first off, the platforms didn’t bully anyone. Other people did. Put the blame on the people actually causing the harm. Separately, and importantly, you can’t blame someone’s suicide on someone else when no one knows the real reasons. Otherwise, you actually encourage increased suicides, as it gives people an ultimate way to “get back” at someone.

Senator Wyden got up and, as he did last month, made it quite clear that we need to stop child sexual abuse and predators. He talked about his bill, which would actually help on these issues by giving law enforcement the resources it needs to go after the criminals, rather than the idea of the bills being pushed that simply blame social media companies for not magically making bad people disappear.

We’re talking about criminal issues, and Senator Wyden is looking to handle it by empowering law enforcement to deal with the criminals. Senators Graham, Durbin, Grassley, Cornyn, Klobuchar, and Hawley are looking to sue tech companies for not magically stopping criminals. One of those approaches makes sense for dealing with criminal activity. And yet it’s the other one that a bunch of senators have lined up behind.

And, of course, beyond the dangerous approach of EARN IT, it inherently undermines encryption, which makes kids (and everyone) less safe, as Wyden also pointed out.

Now, the specific reason I oppose EARN IT is it will weaken the single strongest technology that protects children and families online. Something known as strong encryption.

It's going to make it easier to punish sites that use encryption to secure private conversations and personal devices. This bill is designed to pressure communications and technology companies to scan users' messages.

I, for one, don’t find that a particularly comforting idea.

Now, the sponsors of the bill have argued — and Senator Graham’s right, we’ve been talking about this a while — that their bills don’t harm encryption. And yet the bills allow courts to punish companies that offer strong encryption.

In fact, while it includes some hand-wavey language about protecting encryption, it explicitly allows encryption to be used as evidence for various forms of liability. Prosecutors are going to be quick to argue that deploying encryption was evidence of a company's negligence in preventing the distribution of CSAM, for example.

The bill is also designed to encourage scanning of content on users' phones or computers before information is sent over the Internet, which has the same consequences as breaking encryption. That's why a hundred civil society groups including the American Library Association — people that I think all of us have worked for — Human Rights Campaign, the list goes… Restore the Fourth. All of them oppose this bill because of its impact on essential security.

Weakening encryption is the single biggest gift you can give to these predators and these god-awful people who want to stalk and spy on kids. Sexual predators are gonna have a far easier time stealing photographs of kids, tracking their phones, and spying on their private messages once encryption is breached. It is very ironic that a bill that’s supposed to make kids safer would have the effect of threatening the privacy and security of all law-abiding Americans.

My alternative — and I want to be clear about this because I think Senator Graham has been sincere about saying that this is a horrible problem involving kids. We have a disagreement on the remedy. That’s what is at issue.

And what I want us to do is to focus our energy on giving law enforcement officials the tools they need to find and prosecute these monstrous criminals responsible for exploiting kids and spreading vile abuse materials online.

That can help prevent kids from becoming victims in the first place. So I have introduced to do this: the Invest in Child Safety Act to direct five billion dollars to do three specific things to deal with this very urgent problem.

Graham then gets up to respond and lies through his teeth:

There’s nothing in this bill about encryption. We say that this is not an encryption bill. The bill as written explicitly prohibits courts from treating encryption as an independent basis for liability.

We’re agnostic about that.

That’s not true. As Wyden said, the bill has some hand-wavey language about not treating encryption as an independent basis for liability, but it does explicitly allow for encryption to be one of the factors that can be used to show negligence by a platform, as long as you combine it with other factors.

Section (7)(A) is the hand-wavey bit saying you can't use encryption as "an independent basis" to determine liability, but (7)(B) effectively wipes that out by saying nothing in that section about encryption "shall be construed to prohibit a court from considering evidence of actions or circumstances described in that subparagraph." In other words, you just have to add a bit more, and then you can say "and also, look, they use encryption!"

And another author of the bill, Senator Blumenthal, has flat out said that EARN IT is deliberately written to target encryption. He falsely claims that companies would “use encryption… as a ‘get out of jail free’ card.” So, Graham is lying when he says encryption isn’t a target of the bill. One of his co-authors on the bill admits otherwise.

Graham went on:

What we’re trying to do is hold these companies accountable by making sure they engage in best business practices. The EARN IT acts simply says for you to have liability protections, you have to prove that you’ve tried to protect children. You have to earn it. You’re just not given to you. You have to have the best business practices in place that voluntary commissions that lay out what would be the best way to harden these sites against sexually exploitation. If you do those things you get liability, it’s just not given to you forever. So this is not about encryption.

As to your idea. I’d love to talk to you about it. Let’s vote on both, but the bottom line here is there’s always a reason not to do anything that holds these people liable. That’s the bottom line. They’ll never agree to any bill that allows you to get them in court ever. If you’re waiting on these companies to give this body permission for the average person to sue you. It ain’t never going to happen.

So… all of that is wrong. First of all, the very original version of the EARN IT Act did have provisions to make companies "earn" 230 protections by following best practices, but that's been out of the bill for ages. The current version has no such thing.

The bill does set up a commission to create best practices, but (unlike the earlier versions of the bill) those best practice recommendations have no legal force or requirements. And there’s nothing in the bill that says if you follow them you get 230 protections, and if you don’t, you don’t.

Does Senator Graham even know which version of the bill he’s talking about?

Instead, the bill outright modifies Section 230 (before the Commission even researches best practices) and says that people can sue tech companies for the distribution of CSAM. This includes using the offering of encryption as evidence to support the claims that CSAM distribution was done because of “reckless” behavior by a platform.

Either Senator Graham doesn’t know what bill he’s talking about (even though it’s his own bill) or he doesn’t remember that he changed the bill to do something different than it used to try to do.

It’s ridiculous that Senator Wyden remains the only senator who sees this issue clearly and is willing to stand up and say so. He’s the only one who seems willing to block the bad bills while at the same time offering a bill that actually targets the criminals.


Self-Destructing Circuits and More Security Schemes

28 February 2024 at 14:16
By Samuel K. Moore


Last week at the IEEE International Solid-State Circuits Conference (ISSCC), researchers introduced several technologies to fight even the sneakiest hack attacks. Engineers invented a way to detect a hacker placing a probe on the circuit board to attempt to read digital traffic in a computer. Other researchers invented new ways to obfuscate electromagnetic emissions radiating from an active processor that might reveal its secrets. Still other groups created new ways for chips to generate their own unique digital fingerprints, ensuring their authenticity. And if even those are compromised, one team came up with a chip-fingerprint self-destruct scheme.

A Probe-Attack Alarm

Some of the most difficult attacks to defend against occur when a hacker has physical access to a system's circuit board and can put a probe at various points. A probe attack in the right place can not only steal critical information and monitor traffic; it can also take over the whole system.

“It can be a starting point of some dangerous attacks,” Mao Li, a student in Mingoo Seok’s lab at Columbia University, told engineers at ISSCC.

The Columbia team, which included Intel director of circuit technology research Vivek De, invented a circuit that's attached to the printed-circuit-board traces that link a processor to its memory. Called PACTOR, the circuit periodically scans for the telltale sign of a probe being touched to the interconnect—a change in capacitance that can be as small as 0.5 picofarads. If it picks up that signal, it engages what Li called a protection engine, logic that can guard against the attack by, for example, instructing the processor to encrypt its data traffic.

Triggering defenses rather than having those defenses constantly engaged could have benefits for a computer’s performance, Li contended. “In comparison to…always-on protection, the detection-driven protection incurs less delay and less energy overhead,” he said.

The initial circuit was sensitive to temperature, something a skilled attacker could exploit. At high temperatures, the circuit would put up false alarms, and below room temperature, it would miss real attacks. The team solved this by adding a temperature-sensing circuit that sets a different threshold for the probe-sensing circuit depending on which side of room temperature the system is on.
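As a rough mental model only (not the published PACTOR design), the detection-driven approach might be sketched like this in Python; the capacitance values, thresholds, and temperature compensation below are invented for illustration.

```python
PF = 1e-12  # one picofarad, in farads

def detection_threshold(temp_c):
    """Hypothetical temperature compensation: a higher threshold when
    warm (fewer false alarms), a lower one when cold (fewer misses)."""
    return 0.6 * PF if temp_c > 21 else 0.4 * PF

def probe_detected(baseline_capacitance, measured_capacitance, temp_c):
    """A touched probe can add as little as ~0.5 pF to a trace."""
    delta = measured_capacitance - baseline_capacitance
    return delta > detection_threshold(temp_c)

def periodic_scan(baseline, sample, temp_c, protection_engine):
    # Detection-driven protection: defenses stay off until a probe is
    # suspected, saving the delay and energy of always-on protection.
    if probe_detected(baseline, sample, temp_c):
        protection_engine()

periodic_scan(3.0 * PF, 3.7 * PF, 25, lambda: print("encrypting traffic"))
```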

Electromagnetic Assault

“Security-critical circuit modules may leak sensitive information through side channels such as power and [electromagnetic] emission. And attackers may exploit these side channels to gain access to sensitive information,” said Sirish Oruganti, a doctoral student at the University of Texas at Austin.

For example, hackers aware of the timing of a key computation in the AES encryption process, called SMA, can glean secrets from a chip. Oruganti and colleagues at UT Austin and at Intel came up with a new way to counter that theft by obscuring those signals.

One innovation was to take SMA and break it into four parallel steps. Then the timing of each substep was shifted slightly, blurring the side-channel signals. Another was to insert what Oruganti called tunable replica circuits. These are designed to mimic the observable side-channel signal of the SMAs. The tunable replica circuits operate for a realistic but random amount of time, obscuring the real signal from any eavesdropping attackers.
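A software caricature of those two tricks might look like the following, where every duration is made up: the sensitive computation is split into substeps whose start times are jittered, and dummy "replica" work runs for realistic but random intervals, so any timing or emission trace an attacker records looks different on every run.

```python
import random
import time

def sensitive_substep(step):
    time.sleep(0.001)  # stand-in for one slice of the real computation

def tunable_replica():
    # Dummy activity that mimics the real operation's side-channel
    # signature for a realistic but random duration.
    time.sleep(random.uniform(0.0005, 0.002))

def obfuscated_operation():
    for step in range(4):                      # split into four substeps
        time.sleep(random.uniform(0, 0.0005))  # jitter each start time
        sensitive_substep(step)
        if random.random() < 0.5:
            tunable_replica()                  # interleave decoy work

obfuscated_operation()  # observable timing differs on every call
```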

Using an electromagnetic scanner fine enough to discern signals from different parts of an IC, the Texas and Intel team was unable to crack the key in their test chip, even after 40 million attempts. It generally took only about 500 tries to grab the key from an unprotected version of the chip.

This Circuit Will Self-Destruct in…

Physically unclonable functions, or PUFs, exploit tiny differences in the electronic characteristics of individual transistors to create a unique code that can act like a digital fingerprint for each chip. A University of Vermont team led by Eric Hunt-Schroeder and involving Marvell Technology took their PUF a step further. If it's somehow compromised, this PUF can actually destroy itself. It's extra-thorough at it, too; the system uses not one but two methods of circuit suicide.

Both stem from pumping up the voltage in the lines connecting to the encryption key's bit-generating circuits. One effect is to boost the current in the circuit's longest interconnects. That leads to electromigration, a phenomenon where current in very narrow interconnects literally blows metal atoms out of place, leading to voids and open circuits.

The second method relies on the increased voltage’s effect on a transistor’s gate dielectric, a tiny piece of insulation crucial to the ability to turn transistors on and off. In the advanced chipmaking technology that Hunt-Schroeder’s team uses, transistors are built to operate at less than 1 volt, but the self-destruct method subjects them to 2.5 V. Essentially, this accelerates an aging effect called time-dependent dielectric breakdown, which results in short circuits across the gate dielectric that kill the device.
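In software terms, a PUF with a kill switch behaves something like this toy Python class, where hypothetical per-cell "biases" stand in for the transistor-level variations fixed at manufacturing time, and destruction simply makes the fingerprint permanently unrecoverable (in silicon, that is what the electromigration and gate-oxide breakdown accomplish physically):

```python
import hashlib

class ToyPUF:
    """Software stand-in for a PUF; not a hardware design."""
    def __init__(self, cell_biases):
        self.cell_biases = cell_biases  # stable, device-unique variations
        self.destroyed = False

    def fingerprint(self):
        if self.destroyed:
            raise RuntimeError("PUF destroyed; key cannot be rederived")
        bits = "".join("1" if b > 0 else "0" for b in self.cell_biases)
        return hashlib.sha256(bits.encode()).hexdigest()

    def self_destruct(self):
        # Hardware does this physically; here we just drop the secret.
        self.destroyed = True

chip = ToyPUF([0.3, -0.1, 0.7, -0.4, 0.2])
print(chip.fingerprint())   # unique, repeatable device fingerprint
chip.self_destruct()        # after this, fingerprint() raises
```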

Hunt-Schroeder was motivated to make these key-murdering circuits by reports that researchers had been able to clone SRAM-based PUFs using a scanning electron microscope, he said. Such a self-destruct system could also prevent counterfeit chips from entering the market, Hunt-Schroeder said. "When you're done with a part, it's destroyed in a way that renders it useless."


European Human Rights Court Rules That Encryption Backdoors Are Illegal Under European Law

21 February 2024 at 19:42
By Tim Cushing

Well… this is an unexpected (and fun!) turn of events. The EU Commission has spent most of the last couple of years trying to talk EU members into voting in favor of weakened encryption, if not actual encryption backdoors. You know, for the children.

On the table are things ranging from mandated client-side content scanning to the compelled breaking of encryption whenever law enforcement wants access to communications. These plans — including parallel efforts by the UK government (which is no longer an EU member) — have attracted more opposition than support, but that hasn’t stopped the commission from moving forward with these efforts, even when its own legal counsel has stated these mandates would violate EU laws.

While it’s possible (but extremely unwise) to blow off your own internal legal guidance to get with the encryption breaking, it’s much more difficult to ignore overriding external legal guidance that says what you’re trying to do is blatantly illegal. You can always hire more subservient lawyers if you don’t like what’s being said by the ones you have. But you can’t blow off the European Court of Human Rights quite as easily.

As Thomas Claburn reports for The Register, a long-running case involving (of all things) the Russian government’s attempt to force Telegram to decrypt communications has resulted in a loss that will be felt by all of the EU’s anti-encryption lawmakers.

The European Court of Human Rights (ECHR) has ruled that laws requiring crippled encryption and extensive data retention violate the European Convention on Human Rights – a decision that may derail European data surveillance legislation known as Chat Control.

The court issued a decision on Tuesday stating that “the contested legislation providing for the retention of all internet communications of all users, the security services’ direct access to the data stored without adequate safeguards against abuse and the requirement to decrypt encrypted communications, as applied to end-to-end encrypted communications, cannot be regarded as necessary in a democratic society.”

Ouch. Good luck pushing anti-encryption mandates when the court has declared them unnecessary in a democratic society. And, somehow, we have the Russian government to thank for this turn of events.

The case dates back to 2017, which is when Russia's Federal Security Service (FSB) tried to force Telegram to engage in compelled decryption of Anton Podchasov's communications. Podchasov challenged the order in Russia, but the Russian court dismissed it. So, Podchasov brought the matter to the ECHR because — prior to its 2022 invasion of Ukraine — Russia was still part of the Council of Europe and (at least theoretically) subject to ECHR rulings.

Well, Russia may have exited the Council with its illegal invasion, but the courtroom challenge was still active. The final ruling — which will have zero effect on how Russia handles compelled decryption — is throwing a considerably sized wrench into the machinations of anti-encryption legislators in the EU government.

The court concluded that the Russian law requiring Telegram “to decrypt end-to-end encrypted communications risks amounting to a requirement that providers of such services weaken the encryption mechanism for all users.” As such, the court considers that requirement disproportionate to legitimate law enforcement goals.

The EU Commission dropped its anti-encryption demands last summer following considerable pushback from EU member governments. But that doesn’t mean those desires aren’t still there, even if they’re dormant at the moment.

But this ruling will make it almost impossible to resurrect most of the EU Commission's anti-encryption efforts. The court's ruling makes it clear there's no legally justifiable reason for breaking end-to-end encryption. And the ancillary stuff — like client-side scanning and extensive logging demands — is far less likely to receive a warm welcome from member states, not to mention EU courts, following this ruling (although the European Court of Human Rights is not part of the EU, its judgments cover EU members as well as the other members of the Council of Europe).

Most of the stuff the EU Commission has been trying to enact over the past few years has been declared illegal. If it wants to do these things, it will have to change several other laws first. And that effort is far less likely to succeed, since changing these laws means breaking the law. You can always write illegal laws. You just can’t enforce them.

So, unless the EU Commission has the power to talk its members into backing its preferred brand of friendly fascism, it will just have to dial back its expectations. Sure, those who think any means can be justified by the ends will throw up their hands in despair and proclaim this is the beginning of a new criminal apocalypse. But for everyone else, this ruling means their communications will remain secure — both from EU government agencies as well as entities far more malicious.


Government Is Snooping on Your Phone

21 February 2024 at 06:30
By John Stossel
John Stossel holds a cellphone in front of an enlarged smart phone screen | Stossel TV

The government and private companies spy on us.

My former employee, Naomi Brockwell, has become a privacy specialist. She advises people on how to protect their privacy.

In my new video, she tells me I should delete most of the apps on my phone.

I push back. I like that Google knows where I am and can recommend a "restaurant near me." I like that my Shell app lets me buy gas (almost) without getting out of the car.

I don't like that government gathers information about me via my phone, but so far, so what?

Brockwell tells me I'm being dumb because I don't know which government will get that data in the future.

Looking at my phone, she tells me, "You've given location permission, microphone permission. You have so many apps!"

She says I should delete most of them, starting with Google Chrome.

"This is a terrible app for privacy. Google Chrome is notorious for collecting every single thing that they can about you…[and] broadcasting that to thousands of people…auctioning off your eyeballs. It's not just advertisers collecting this information. Thousands of shell companies, shady companies of data brokers also collect it and in turn sell it."

Instead of Google, she recommends using a browser called Brave. It's just as good, she says, but it doesn't collect all the information that Chrome does. It's slightly faster, too, because it doesn't slow down to load ads.

Then she says, "Delete Google Maps."

"But I need Google Maps!"

"You don't." She replies, "You have an iPhone. You have Apple Maps…. Apple is better when it comes to privacy…. Apple at least tries to anonymize your data."

Instead of Gmail, she recommends more private alternatives, like Proton Mail or Tuta.

"There are many others." She points out, "The difference between them is that every email going into your inbox for Gmail is being analyzed, scanned, it's being added to a profile about you."

But I don't care. Nothing beats Google's convenience. It remembers my credit cards and passwords. It fills things in automatically. I tried Brave browser but, after a week, switched back to Google. I like that Google knows me.

Brockwell says that I could import my credit cards and passwords to Brave and autofill there, too.

"I do understand the trade-off," she adds. "But email is so personal. It's private correspondence about everything in your life. I think we should use companies that don't read our emails. Using those services is also a vote for privacy, giving a market signal that we think privacy is important. That's the only way we're going to get more privacy."

She also warns that even apps like WhatsApp, which I thought were private, aren't as private as we think.

"WhatsApp is end-to-end encrypted and better than standard SMS. But it collects a lot of data about you and shares it with its parent company, Facebook. It's nowhere near as private as an app like Signal."

She notices my Shell app and suggests I delete it.

Opening the app's "privacy nutrition label," something I never bother reading, she points out that I give Shell "your purchase history, your contact information, physical address, email address, your name, phone number, your product interaction, purchase history, search history, user ID, product interaction, crash data, performance data, precise location, course location."

The list goes on. No wonder I don't read it.

She says, "The first step before downloading an app, take a look at their permissions, see what information they're collecting."

I'm just not going to bother.

But she did convince me to delete some apps, pointing out that if I want the app later, I can always reinstall it.

"We think that we need an app for every interaction we do with a business. We don't realize what we give up as a result."

"They already have all my data. What's the point of going private now?" I ask.

"Privacy comes down to choice," She replies. "It's not that I want everything that I do to remain private. It's that I deserve to have the right to selectively reveal to the world what I want them to see. Currently, that's not the world."

COPYRIGHT 2023 BY JFS PRODUCTIONS INC.

The post Government Is Snooping on Your Phone appeared first on Reason.com.
