
NIST Announces Post-Quantum Cryptography Standards



Today, almost all data on the Internet, including bank transactions, medical records, and secure chats, is protected with an encryption scheme called RSA (named after its creators Rivest, Shamir, and Adleman). This scheme is based on a simple fact—it is virtually impossible to calculate the prime factors of a large number in a reasonable amount of time, even on the world’s most powerful supercomputer. Unfortunately, large quantum computers, if and when they are built, would find this task a breeze, thus undermining the security of the entire Internet.

Luckily, quantum computers are only better than classical ones at a select class of problems, and there are plenty of encryption schemes where quantum computers don’t offer any advantage. Today, the U.S. National Institute of Standards and Technology (NIST) announced the standardization of three post-quantum cryptography encryption schemes. With these standards in hand, NIST is encouraging computer system administrators to begin transitioning to post-quantum security as soon as possible.

“Now our task is to replace the protocol in every device, which is not an easy task.” —Lily Chen, NIST

These standards are likely to be a big element of the Internet’s future. NIST’s previous cryptography standards, developed in the 1970s, are used in almost all devices, including Internet routers, phones, and laptops, says Lily Chen, head of the cryptography group at NIST, who led the standardization process. But adoption will not happen overnight.

“Today, public key cryptography is used everywhere in every device,” Chen says. “Now our task is to replace the protocol in every device, which is not an easy task.”

Why we need post-quantum cryptography now

Most experts believe large-scale quantum computers won’t be built for at least another decade. So why is NIST worried about this now? There are two main reasons.

First, many devices that use RSA security, like cars and some IoT devices, are expected to remain in use for at least a decade. So they need to be equipped with quantum-safe cryptography before they are released into the field.

“For us, it’s not an option to just wait and see what happens. We want to be ready and implement solutions as soon as possible.” —Richard Marty, LGT Financial Services

Second, a nefarious individual could potentially download and store encrypted data today, and decrypt it once a large enough quantum computer comes online. This concept is called “harvest now, decrypt later,” and by its nature it poses a threat to sensitive data now, even if that data can only be cracked in the future.

Security experts in various industries are starting to take the threat of quantum computers seriously, says Joost Renes, principal security architect and cryptographer at NXP Semiconductors. “Back in 2017, 2018, people would ask ‘What’s a quantum computer?’” Renes says. “Now, they’re asking ‘When will the PQC standards come out and which one should we implement?’”

Richard Marty, chief technology officer at LGT Financial Services, agrees. “For us, it’s not an option to just wait and see what happens. We want to be ready and implement solutions as soon as possible, to avoid harvest now and decrypt later.”

NIST’s competition for the best quantum-safe algorithm

NIST announced a public competition for the best PQC algorithm back in 2016. It received a whopping 82 submissions from teams in 25 different countries. Since then, NIST has gone through four elimination rounds, finally whittling the pool down to four algorithms in 2022.

This lengthy process was a community-wide effort, with NIST taking input from the cryptographic research community, industry, and government stakeholders. “Industry has provided very valuable feedback,” says NIST’s Chen.

These four winning algorithms had intense-sounding names: CRYSTALS-Kyber, CRYSTALS-Dilithium, SPHINCS+, and FALCON. Sadly, the names did not survive standardization: The algorithms are now known as Federal Information Processing Standard (FIPS) 203 through 206. FIPS 203, 204, and 205 are the focus of today’s announcement from NIST. FIPS 206, the algorithm previously known as FALCON, is expected to be standardized in late 2024.

The algorithms fall into two categories: general encryption, used to protect information transferred via a public network, and digital signature, used to authenticate individuals. Digital signatures are essential for preventing malware attacks, says Chen.

Every cryptography protocol is based on a math problem that’s hard to solve but easy to check once you have the correct answer. For RSA, it’s factoring large numbers into two primes—it’s hard to figure out what those two primes are (for a classical computer), but once you have one it’s straightforward to divide and get the other.
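To make that asymmetry concrete, here is a minimal Python sketch using textbook-sized toy numbers (61 and 53 are illustrative only; real RSA uses primes of roughly 1,024 bits or more): multiplying the primes, or dividing one out of the product, is trivial, while recovering the factors from the product alone requires a search that blows up as the numbers grow.

```python
# Toy illustration of the asymmetry RSA relies on (textbook-sized numbers,
# not real parameters).
p, q = 61, 53
n = p * q            # multiplying the primes is instant: n = 3233

# Easy direction: given n and one prime, the other is a single division.
assert n // p == q

# Hard direction: given only n, a classical attacker must search for a factor.
def smallest_factor(n: int) -> int:
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n

print(smallest_factor(n))   # 53 -- feasible here, infeasible for 2048-bit moduli
```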

“We have a few instances of [PQC], but for a full transition, I couldn’t give you a number, but there’s a lot to do.” —Richard Marty, LGT Financial Services

Two out of the three schemes already standardized by NIST, FIPS 203 and FIPS 204 (as well as the upcoming FIPS 206), are based on another hard problem, called lattice cryptography. Lattice cryptography rests on the tricky problem of finding the least common multiple among a set of numbers. Usually, this is implemented in many dimensions, or on a lattice, where the least common multiple is a vector.

The third standardized scheme, FIPS 205, is based on hash functions—in other words, on converting a message to a fixed-length string that is difficult to reverse.
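FIPS 205’s actual construction (stateless hash-based signatures) is far more involved, but the one-way property it leans on can be illustrated with Python’s standard hashlib: computing a digest is cheap, while recovering an input from a digest has no known shortcut beyond guessing.

```python
import hashlib

# Forward direction: hashing a message is fast and deterministic.
digest = hashlib.sha256(b"approve firmware v2.1").hexdigest()
print(digest)

# Reverse direction: there is no known way to recover the message from the
# digest; an attacker is reduced to guessing candidate inputs and re-hashing.
guess = b"approve firmware v2.2"
print(hashlib.sha256(guess).hexdigest() == digest)   # False
```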

The standards include the encryption algorithms’ computer code, instructions for how to implement it, and intended uses. There are three levels of security for each protocol, designed to future-proof the standards in case some weaknesses or vulnerabilities are found in the algorithms.

Lattice cryptography survives alarms over vulnerabilities

Earlier this year, a pre-print published to the arXiv alarmed the PQC community. The paper, authored by Yilei Chen of Tsinghua University in Beijing, claimed to show that lattice-based cryptography, the basis of two out of the three NIST protocols, was not, in fact, immune to quantum attacks. On further inspection, Yilei Chen’s argument turned out to have a flaw—and lattice cryptography is still believed to be secure against quantum attacks.

On the one hand, this incident highlights the central problem at the heart of all cryptography schemes: There is no proof that any of the math problems the schemes are based on are actually “hard.” The only evidence, even for the standard RSA algorithms, is that people have been trying to break the encryption for a long time and have all failed. Since post-quantum cryptography standards, including lattice cryptography, are newer, there is less certainty that no one will find a way to break them.

That said, the failure of this latest attempt only adds to the algorithms’ credibility. The flaw in the paper’s argument was discovered within a week, signaling that there is an active community of experts working on this problem. “The result of that paper is not valid, that means the pedigree of the lattice-based cryptography is still secure,” says NIST’s Lily Chen (no relation to Tsinghua University’s Yilei Chen). “People have tried hard to break this algorithm. A lot of people are trying, they try very hard, and this actually gives us confidence.”

NIST’s announcement is exciting, but the work of transitioning all devices to the new standards has only just begun. It is going to take time, and money, to fully protect the world from the threat of future quantum computers.

“We’ve spent 18 months on the transition and spent about half a million dollars on it,” says Marty of LGT Financial Services. “We have a few instances of [PQC], but for a full transition, I couldn’t give you a number, but there’s a lot to do.”

Quantum Cryptography Has Everyone Scrambling



While the technology world awaits NIST’s latest “post-quantum” cryptography standards this summer, a parallel effort is underway to also develop cryptosystems that are grounded in quantum technology—what are called quantum-key distribution or QKD systems.

As a result, India, China, and a range of technology organizations in the European Union and United States are researching and developing QKD and weighing standards for the nascent cryptography alternative. And the biggest question of all is how or if QKD fits into a robust, reliable, and fully future-proof cryptography system that will ultimately become the global standard for secure digital communications into the 2030s. As in any emerging technology standard, different players are staking claims on different technologies and implementations of those technologies. And many of the big players are pursuing such divergent options because no technology is a clear winner at the moment.

According to Ciel Qi, a research analyst at the New York-based Rhodium Group, there’s one clear leader in QKD research and development—at least for now. “While China likely holds an advantage in QKD-based cryptography due to its early investment and development, others are catching up,” says Qi.

Two different kinds of “quantum secure” tech

At the center of these varied cryptography efforts is the distinction between QKD and post-quantum cryptography (PQC) systems. QKD is based on quantum physics, which holds that entangled qubits can store their shared information so securely that any effort to uncover it is unavoidably detectable. Sending pairs of entangled-photon qubits to both ends of a network provides the basis for physically secure cryptographic keys that can lock down data packets sent across that network.

Typically, quantum cryptography systems are built around photon sources that chirp out entangled photon pairs—where photon A heading down one length of fiber has a polarization that’s perpendicular to the polarization of photon B heading in the other direction. The recipients of these two photons perform separate measurements that enable both recipients to know that they and only they have the shared information transmitted by these photon pairs. (Otherwise, if a third party had intervened and measured one or both photons first, the delicate photon states would have been irreparably altered before reaching the recipients.)

“People can’t predict theoretically that these PQC algorithms won’t be broken one day.” —Doug Finke, Global Quantum Intelligence

Each shared bit that the two people on opposite ends of the line now have in common becomes a 0 or 1 in a budding secret key, which the two recipients build up by sharing more and more entangled photons. Build up enough shared secret 0s and 1s between sender and receiver, and that secret key can be used for a type of strong cryptography, called a one-time pad, that guarantees a message’s safe transmission and faithful receipt by only the intended recipient.
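Once the shared key exists, the one-time pad itself is simple: XOR the message with key material of the same length and never reuse it. The sketch below is only an illustration of that last step; the shared random key is generated locally here as a stand-in for the QKD exchange, which is not modeled.

```python
import secrets

message = b"wire the funds at noon"

# Stand-in for the QKD step: in practice this shared random key would be built
# up bit by bit from measurements on entangled photons, not from secrets().
shared_key = secrets.token_bytes(len(message))   # as long as the message, used only once

ciphertext = bytes(m ^ k for m, k in zip(message, shared_key))     # encrypt
recovered  = bytes(c ^ k for c, k in zip(ciphertext, shared_key))  # decrypt

assert recovered == message
```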

By contrast, post-quantum cryptography (PQC) is based not around quantum physics but pure math, in which next-generation cryptographic algorithms are designed to run on conventional computers. And it’s the algorithms’ vast complexity that makes PQC security systems practically uncrackable, even by a quantum computer. So NIST—the U.S. National Institute of Standards and Technology—is developing gold-standard PQC systems that will undergird tomorrow’s post-quantum networks and communications.

The big problem with the latter approach, says Doug Finke, chief content officer of the New York-based Global Quantum Intelligence, is that PQC is only believed (on very, very good but not infallible evidence) to be uncrackable by a fully grown quantum computer. PQC, in other words, cannot necessarily offer the ironclad “quantum security” that’s promised.

“People can’t predict theoretically that these PQC algorithms won’t be broken one day,” Finke says. “On the other hand, QKD—there are theoretical arguments based on quantum physics that you can’t break a QKD network.”

That said, real-world QKD implementations might still be hackable via side-channel, device-based, and other clever attacks. Plus, QKD also requires direct access to a quantum-grade fiber optics network and sensitive quantum communications tech, neither of which is exactly commonplace today. “For day-to-day stuff, for me to send my credit card information to Amazon on my cellphone,” Finke says, “I’m not going to use QKD.”

China’s early QKD lead dwindling

According to Qi, China may have originally picked QKD as a focal point of their quantum technology development in part because the U.S. was not directing its efforts that way. “[The] strategic focus on QKD may be driven by China’s desire to secure a unique technological advantage, particularly as the U.S. leads in PQC efforts globally,” she says.

In particular, she points to ramped up efforts to use satellite uplinks and downlinks as the basis for free-space Chinese QKD systems. Citing as a source China’s “father of quantum,” Pan Jianwei, Qi says, “To achieve global quantum network coverage, China is currently developing a medium-high orbit quantum satellite, which is expected to be launched around 2026.”

That said, the limiting factor in all QKD systems to date is their ultimate reliance on a single photon to represent each qubit. Even the most exquisitely refined lasers and fiber optic lines cannot escape the vulnerability of individual photons.

QKD repeaters, which would blindly replicate a single photon’s quantum state but not leak any distinguishing information about the individual photons passing through—meaning the repeater would not be hackable by eavesdroppers—do not exist today. But, Finke says, such tech is achievable, though at least 5 to 10 years away. “It definitely is early days,” he says.

“While China likely holds an advantage in QKD-based cryptography due to its early investment and development, others are catching up.” —Ciel Qi, Rhodium Group

“In China they do have a 2,000-kilometer network,” Finke says. “But it uses this thing called trusted nodes. I think they have over 30 in the Beijing to Shanghai network. So maybe every 100 km, they have this unit which basically measures the signal... and then regenerates it. But the trusted node you have to locate on an army base or someplace like that. If someone breaks in there, they can hack into the communications.”

Meanwhile, India has been playing catch-up, according to Satyam Priyadarshy, a senior advisor to Global Quantum Intelligence. Priyadarshy says India’s National Quantum Mission includes plans for QKD communications research—aiming ultimately for QKD networks connecting cities over 2,000-km distances, as well as across similarly long-ranging satellite communications networks.

Priyadarshy points both to government QKD research efforts—including at the Indian Space Research Organization—and private enterprise-based R&D, including by the Bengaluru-based cybersecurity firm QuNu Labs. Priyadarshy says that QuNu, for example, has been working on a hub-and-spoke framework named ChaQra for QKD. (Spectrum also sent requests for comment to officials at India’s Department of Telecommunications, which were unanswered as of press time.)

“A hybrid of QKD and PQC is the most likely solution for a quantum safe network.” —Satyam Priyadarshy, Global Quantum Intelligence

In the U.S. and European Union, similar early-stage efforts are also afoot. Contacted by IEEE Spectrum, officials from the European Telecommunications Standards Institute (ETSI), the International Standards Organization (ISO), the International Electrotechnical Commission (IEC), and the IEEE Communications Society confirmed initiatives and working groups that are working both to promote QKD technologies and to shape the emergent standards now taking form.

“While ETSI is fortunate to have experts in a broad range of relevant topics, there is a lot to do,” says Martin Ward, senior research scientist based at Toshiba’s Cambridge Research Laboratory in England, and chair of a QKD industry standards group at ETSI.

Multiple sources contacted for this article envisioned a future in which PQC will likely be the default standard for most secure communications in a world of pervasive quantum computing. Yet PQC cannot entirely escape its potential Achilles’ heel: increasingly powerful quantum algorithms and machines. This is where, the sources suggest, QKD could offer the prospect of hybrid secure communications that PQC alone could never provide.

“QKD provides [theoretical] information security, while PQC enables scalab[ility],” Priyadarshy says. “A hybrid of QKD and PQC is the most likely solution for a quantum safe network.” But he added that efforts at investigating hybrid QKD-PQC technologies and standards today are “very limited.”

Then, says Finke, QKD could still have the final say, even in a world where PQC remains preeminent. Developing QKD technology just happens, he points out, to also provide the basis for a future quantum Internet.

“It’s very important to understand that QKD is actually just one use case for a full quantum network,” Finke says.

“There’s a lot of applications, like distributed quantum computing and quantum data centers and quantum sensor networks,” Finke adds. “So even the research that people are doing now in QKD is still very, very helpful because a lot of that same technology can be leveraged for some of these other use cases.”

Addressing Quantum Computing Threats With SRAM PUFs

By: Roel Maes

You’ve probably been hearing a lot lately about the quantum-computing threat to cryptography. If so, you probably also have a lot of questions about what this “quantum threat” is and how it will impact your cryptographic solutions. Let’s take a look at some of the most common questions about quantum computing and its impact on cryptography.

What is a quantum computer?

A quantum computer is not a very fast general-purpose supercomputer, nor can it magically operate in a massively parallel manner. Instead, it efficiently executes unique quantum algorithms. These algorithms can in theory perform certain very specific computations much more efficiently than any traditional computer could.

However, the development of a meaningful quantum computer, i.e., one that can in practice outperform a modern traditional computer, is exceptionally difficult. Quantum computing technology has been in development since the 1980s, with gradually improving operational quantum computers since the 2010s. Even extrapolating the current state of the art into the future, and assuming an exponential improvement equivalent to Moore’s law for traditional computers, experts estimate that it will still take at least 15 to 20 years for a meaningful quantum computer to become a reality.1,2

What is the quantum threat to cryptography?

In the 1990s, it was discovered that some quantum algorithms can impact the security of certain traditional cryptographic techniques. Two quantum algorithms have raised concern:

  • Shor’s algorithm, invented in 1994 by Peter Shor, is an efficient quantum algorithm for factoring large integers, and for solving a few related number-theoretical problems. Currently, there are no known efficient factoring algorithms for traditional computers, a fact that provides the basis of security for several classic public-key cryptographic techniques.
  • Grover’s algorithm, invented in 1996 by Lov Grover, is a quantum algorithm that can search for the inverse of a generic function quadratically faster than a traditional computer can. In cryptographic terms, searching for inverses is equivalent to a brute-force attack (e.g., on an unknown secret key value). The difficulty of such attacks forms the basis of security for most symmetric cryptography primitives.

These quantum algorithms, if they can be executed on a meaningful quantum computer, will impact the security of current cryptographic techniques.

What is the impact on public-key cryptography solutions?

By far the most important and most widely used public-key primitives today are based on RSA, discrete logarithms, or elliptic curve cryptography. When meaningful quantum computers become operational, the mathematical problems underlying all of these can be solved efficiently by Shor’s algorithm. This will make virtually all public-key cryptography in current use insecure.

For the affected public-key encryption and key exchange primitives, this threat is already real today. An attacker capturing and storing encrypted messages exchanged now (or in the past), could decrypt them in the future when meaningful quantum computers are operational. So, highly sensitive and/or long-term secrets communicated up to today are already at risk.

If you use the affected signing primitives in short-term commitments of less than 15 years, the problem is less urgent. However, if meaningful quantum computers become available, the value of any signature will be voided from that point. So, you shouldn’t use the affected primitives for signing long-term commitments that still need to be verifiable in 15-20 years or more. This is already an issue for some use cases, e.g., for the security of secure boot and update solutions of embedded systems with a long lifetime.

Over the last decade, the cryptographic community has designed new public-key primitives that are based on mathematical problems that cannot be solved by Shor’s algorithm (or any other known efficient algorithm, quantum or otherwise). These algorithms are generally referred to as post-quantum cryptography. NIST’s announcement of a selection of these algorithms for standardization,3 after years of public scrutiny, is the culmination of that field-wide exercise. For protecting the firmware of embedded systems in the short term, the NSA recommends the use of existing post-quantum secure hash-based signature schemes.12

What is the impact on my symmetric cryptography solutions?

The security level of a well-designed symmetric key primitive is equivalent to the effort needed to brute-force the secret key. On a traditional computer, the effort of brute-forcing a secret key is directly exponential in the key’s length. When a meaningful quantum computer can be used, Grover’s algorithm can speed up the brute-force attack quadratically. The needed effort remains exponential, though only in half of the key’s length. So, Grover’s algorithm could be said to cut the effective security of a key of a given length in half.
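A back-of-the-envelope sketch of that statement: a classical brute-force attack on an n-bit key needs on the order of 2^n guesses, while Grover’s algorithm needs on the order of 2^(n/2) queries, so the effective security level in bits is halved.

```python
# Effective brute-force cost of an n-bit symmetric key, expressed as an exponent:
# the attack needs on the order of 2**cost guesses (classical) or queries (Grover).
def brute_force_exponent(n_bits: int, grover: bool = False) -> float:
    return n_bits / 2 if grover else float(n_bits)

for n in (128, 256):
    print(f"{n}-bit key: classical ~2^{brute_force_exponent(n):.0f}, "
          f"Grover ~2^{brute_force_exponent(n, grover=True):.0f}")
# 128-bit key: classical ~2^128, Grover ~2^64
# 256-bit key: classical ~2^256, Grover ~2^128
```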

However, there are some important things to keep in mind:

  • Grover’s algorithm is an optimal brute-force strategy (quantum or otherwise),4 so the quadratic speed-up is the worst-case security impact.
  • There are strong indications that it is not possible to meaningfully parallelize the execution of Grover’s algorithm.2,5,6,7 In a traditional brute-force attack, doubling the number of computers used will cut the computation time in half. Such scaling is not possible for Grover’s algorithm on a quantum computer, which makes its use in a brute-force attack very impractical.
  • Before Grover’s algorithm can be used to perform real-world brute-force attacks on 128-bit keys, the performance of quantum computers must improve tremendously. Even today’s most advanced traditional supercomputers can barely perform computations with a complexity exponential in 128/2 = 64 bits in a practically feasible time (several months). Based on the current state and rate of progress of quantum computers, it will be much, much more than 20 years before they could reach that same level.6

The practical impact of quantum computers on symmetric cryptography is, for the moment, very limited. Worst case, the security strength of currently used primitives is reduced by 50% (of their key length), but due to the limitations of Grover’s algorithm, that is an overly pessimistic assumption for the near future. Doubling the length of symmetric keys to withstand quantum brute-force attacks is a very broad blanket measure that will certainly solve the problem, but it is overly conservative. Today, there is no mandated requirement for quantum-hardening symmetric-key cryptography, and 128-bit security strength primitives like AES-128 or SHA-256 are considered safe to use now. For the long term, moving from 128-bit to 256-bit security strength algorithms is guaranteed to solve any foreseeable issues.12

Is there an impact on information-theoretical security?

Information-theoretically secure methods (also called unconditional or perfect security) are algorithmic techniques for which security claims are mathematically proven. Some important information-theoretically secure constructions and primitives include the Vernam cipher, Shamir’s secret sharing, quantum key distribution8 (not to be confused with post-quantum cryptography), entropy sources and physical unclonable functions (PUFs), and fuzzy commitment schemes9.

Because an information-theoretical proof demonstrates that an adversary does not have sufficient information to break the security claim, regardless of its computing power – quantum or otherwise – information-theoretically secure constructions are not impacted by the quantum threat.

PUFs: An antidote for post-quantum security uncertainty

SRAM PUFs

The core technology underpinning all Synopsys products is an SRAM PUF. Like other PUFs, an SRAM PUF generates device-unique responses that stem from unpredictable variations originating in the production process of silicon chips. The operation of an SRAM PUF is based on a conventional SRAM circuit readily available in virtually all digital chips.

Based on years of continuous measurements and analysis, Synopsys has developed stochastic models that describe the behavior of its SRAM PUFs very accurately.10 Using these models, we can determine tight bounds on the unpredictability of SRAM PUFs. These unpredictability bounds are expressed in terms of entropy; they are fundamental in nature and cannot be overcome by any amount of computation, quantum or otherwise.

Synopsys PUF IP

Synopsys PUF IP is a security solution based on SRAM PUF technology. The central component of Synopsys PUF IP is a fuzzy commitment scheme9 that protects a root key with an SRAM PUF response and produces public helper data. It is information-theoretically proven that the helper data discloses zero information on the root key, so the fact that the helper data is public has no impact on the root key’s security.

Fig. 1: High-level architecture of Synopsys PUF IP.

This no-leakage proof – kept intact over years of field deployment on hundreds of millions of devices – relies on the PUF employed by the system being a true entropy source, as expressed by its stochastic model. Synopsys PUF IP uses this entropy source to initialize its root key for the very first time; the root key is subsequently protected by the fuzzy commitment scheme.
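As a generic illustration of the Juels-Wattenberg fuzzy commitment idea (not Synopsys’s certified implementation), the sketch below hides a root key in public helper data using a toy 3x repetition code, then reconstructs the key exactly from a slightly noisy PUF re-read.

```python
import secrets

REP = 3  # toy 3x repetition code; real designs use much stronger error-correcting codes

def encode(bits):
    # Repeat each key bit REP times to form the codeword.
    return [b for b in bits for _ in range(REP)]

def decode(bits):
    # Majority vote over each group of REP bits.
    return [int(sum(bits[i:i + REP]) > REP // 2) for i in range(0, len(bits), REP)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

key      = [secrets.randbits(1) for _ in range(8)]        # toy 8-bit root key
response = [secrets.randbits(1) for _ in range(8 * REP)]  # enrollment-time PUF response
helper   = xor(encode(key), response)                     # public helper data

# Later in the field: the SRAM PUF is re-read with a little noise...
noisy_response = list(response)
noisy_response[5] ^= 1                                    # flip one response bit

# ...and the root key is still reconstructed exactly from helper data + noisy response.
assert decode(xor(helper, noisy_response)) == key
```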

In addition to the fuzzy commitment scheme and the entropy source, Synopsys PUF IP also implements cryptographic operations based on certified, standard-compliant constructions that make use of standard symmetric crypto primitives, particularly AES and SHA-256.11 These operations include:

  • a key derivation function (KDF) that uses the root key protected by the fuzzy commitment scheme as a key derivation key (a generic sketch follows this list).
  • a deterministic random bit generator (DRBG) that is initially seeded by a high-entropy seed coming from the entropy source.
  • key wrapping functionality, essentially a form of authenticated encryption, for the protection of externally provided application keys using a key-wrapping key derived from the root key protected by the fuzzy commitment scheme.
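The sketch below illustrates the key-derivation step only, using a generic counter-mode KDF over HMAC-SHA-256 in the spirit of NIST SP 800-108. It is not the certified construction inside the IP, and the root key shown is a placeholder value.

```python
import hmac, hashlib

def derive_key(root_key: bytes, label: bytes, length: int = 32) -> bytes:
    """Generic counter-mode KDF over HMAC-SHA-256 (in the spirit of SP 800-108)."""
    out, counter = b"", 1
    while len(out) < length:
        block = counter.to_bytes(4, "big") + label
        out += hmac.new(root_key, block, hashlib.sha256).digest()
        counter += 1
    return out[:length]

# Placeholder root key; in the IP the real root key is reconstructed from the
# SRAM PUF via the fuzzy commitment scheme and never leaves the hardware.
root_key = bytes(32)
wrapping_key = derive_key(root_key, b"application key-wrapping key")
print(wrapping_key.hex())
```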

Conclusion

The security architecture of Synopsys PUF IP is based on information-theoretically secure components for the generation and protection of a root key, and on established symmetric cryptography for other cryptographic functions. Information-theoretically secure constructions are impervious to quantum attacks. The impact of the quantum threat on symmetric cryptography is very limited and does not require any remediation now or in the foreseeable future. Importantly, Synopsys PUF IP does not deploy any quantum-vulnerable public-key cryptographic primitives.

All variants of Synopsys PUF IP are quantum-secure and in accordance with recommended post-quantum guidelines. The 256-bit security strength variant of Synopsys PUF IP will offer strong quantum resistance even in a distant future, but the 128-bit variant is also considered perfectly safe to use now and for the foreseeable future.

References

  1. “Report on Post-Quantum Cryptography”, NIST Information Technology Laboratory, NISTIR 8105, April 2016.
  2. “2021 Quantum Threat Timeline Report”, M. Mosca and M. Piani, Global Risk Institute (GRI), January 2022.
  3. “PQC Standardization Process: Announcing Four Candidates to be Standardized, Plus Fourth Round Candidates”, NIST Information Technology Laboratory, July 5, 2022.
  4. “Grover’s quantum searching algorithm is optimal”, C. Zalka, Phys. Rev. A 60, 2746, October 1, 1999, https://journals.aps.org/pra/abstract/10.1103/PhysRevA.60.2746
  5. “Reassessing Grover’s Algorithm”, S. Fluhrer, IACR ePrint 2017/811.
  6. “NIST’s pleasant post-quantum surprise”, Bas Westerbaan, Cloudflare, July 8, 2022.
  7. “Post-Quantum Cryptography – FAQs: To protect against the threat of quantum computers, should we double the key length for AES now? (added 11/18/18)”, NIST Information Technology Laboratory.
  8. “Quantum cryptography: Public key distribution and coin tossing”, C. H. Bennett and G. Brassard, Proceedings of the IEEE International Conference on Computers, Systems and Signal Processing, December 1984.
  9. “A fuzzy commitment scheme”, A. Juels and M. Wattenberg, Proceedings of the 6th ACM Conference on Computer and Communications Security, November 1999.
  10. “An Accurate Probabilistic Reliability Model for Silicon PUFs”, R. Maes, Proceedings of the International Workshop on Cryptographic Hardware and Embedded Systems (CHES), 2013.
  11. Cryptographic Algorithm Validation Program (CAVP), NIST Information Technology Laboratory, validation #A2516, https://csrc.nist.gov/projects/cryptographic-algorithm-validation-program/details?validation=35127
  12. “Announcing the Commercial National Security Algorithm Suite 2.0”, National Security Agency, Cybersecurity Advisory, https://media.defense.gov/2022/Sep/07/2003071834/-1/-1/0/CSA_CNSA_2.0_ALGORITHMS_.PDF


The Importance Of Memory Encryption For Protecting Data In Use

As systems-on-chips (SoCs) become increasingly complex, security functions must grow accordingly to protect the semiconductor devices themselves and the sensitive information residing on or passing through them. While a Root of Trust security solution built into the SoCs can protect the chip and data resident therein (data at rest), many other threats exist which target interception, theft or tampering with the valuable information in off-chip memory (data in use).

Many isolation technologies exist for memory protection. However, with the discovery of the Meltdown and Spectre vulnerabilities in 2018, and with attacks like row hammer targeting DRAM, security architects have realized there are practical threats that can bypass these isolation technologies.

One of the techniques to prevent data being accessed across different guests/domains/zones/realms is memory encryption. With memory encryption in place, even if any of the isolation techniques have been compromised, the data being accessed is still protected by cryptography. To ensure the confidentiality of data, each user has their own protected key. Memory encryption can also prevent physical attacks like hardware bus probing on the DRAM bus interface. It can also prevent tampering with control plane information like the MPU/MMU control bits in DRAM and prevent the unauthorized movement of protected data within the DRAM.

Memory encryption technology must ensure the confidentiality of the data. If a “lightweight” algorithm is used, there is no guarantee the data will be protected from mathematical cryptanalysis, given that the amount of data handled by memory encryption is typically huge. Well-known, proven algorithms are either the NIST-approved AES or the OSCCA-approved SM4 algorithm.

The recommended key length is also an important aspect defining the security strength. AES offers 128, 192 or 256-bit security, and SM4 offers 128-bit security. Advanced memory encryption technologies also involve integrity and protocol level anti-replay techniques for high-end use-cases. Proven hash algorithms like SHA-2, SHA-3, SM3 or (AES-)GHASH can be used for integrity protection purposes.
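As a generic illustration of hash-based integrity protection (not Rambus’s specific scheme), a keyed hash such as HMAC-SHA-256 computed over a memory block together with its address lets the memory controller detect both tampered contents and ciphertext that has been moved to a different location. The key and block sizes below are illustrative.

```python
import hmac, hashlib, os

integrity_key = os.urandom(32)   # illustrative; a real design derives this in hardware

def block_tag(address: int, ciphertext: bytes) -> bytes:
    # Binding the tag to the address also catches ciphertext relocated within DRAM.
    return hmac.new(integrity_key,
                    address.to_bytes(8, "big") + ciphertext,
                    hashlib.sha256).digest()

stored = b"\x8a" * 64                            # encrypted cache line (placeholder bytes)
tag = block_tag(0x0000_2000, stored)

# On read-back: recompute the tag and compare in constant time.
assert hmac.compare_digest(tag, block_tag(0x0000_2000, stored))
```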

Once one or more cipher algorithms are selected, the choice of secure modes of operation must be made. Block cipher algorithms need to be used in specific modes to encrypt bulk data larger than a single block of just 128 bits.

XTS mode, which stands for “XEX (xor-encrypt-xor) with tweak and CTS (ciphertext stealing)” mode, has been widely adopted for disk encryption. CTS is a clever technique that ensures the number of bytes in the encrypted payload is the same as the number of bytes in the plaintext payload. This is particularly important in storage, where the encrypted payload must fit in the same location as the unencrypted version would.

XTS/XEX uses two keys, one key for block encryption, and another key to process a “tweak.” The tweak ensures every block of memory is encrypted differently. Any changes in the plaintext result in a complete change of the ciphertext, preventing an attacker from obtaining any information about the plaintext.
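A minimal software sketch of the XTS construction, using the third-party Python cryptography package (assumed to be installed); it illustrates the mode generically, not Rambus’s inline memory encryption hardware. The 16-byte tweak here encodes the block’s address, so identical plaintexts stored at different addresses encrypt to different ciphertexts of the same length.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(64)   # AES-256-XTS: two 256-bit keys concatenated (data key + tweak key)

def encrypt_block(address: int, plaintext: bytes) -> bytes:
    tweak = address.to_bytes(16, "little")        # tweak derived from the block address
    enc = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
    return enc.update(plaintext) + enc.finalize()

line = b"memory line #0: secret contents."        # 32-byte example payload (two AES blocks)
c1 = encrypt_block(0x1000, line)
c2 = encrypt_block(0x2000, line)

assert len(c1) == len(line)    # ciphertext is the same size as the plaintext
assert c1 != c2                # same data, different address -> different ciphertext
```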

While memory encryption is a critical aspect of security, there are many challenges to designing and implementing a secure memory encryption solution. Rambus is a leading provider of both memory and security technologies and understands the challenges from both the memory and security viewpoints. Rambus provides state-of-the-art Inline Memory Encryption (IME) IP that enables chip designers to build high-performance, secure, and scalable memory encryption solutions.



Open-Source Security Chip Released



The first commercial silicon chip that includes open-source, built-in hardware security was announced today by the OpenTitan coalition.

This milestone represents another step in the growth of the open hardware movement. Open hardware has been gaining steam since the development of the popular open-source processor architecture RISC-V.

RISC-V gives an openly available prescription for how a computer can operate efficiently at the most basic level. OpenTitan goes beyond RISC-V’s open-source instruction set by delivering an open-source design for the silicon itself. Although other open-source silicon has been developed, this is the first one to include the design-verification stage and to produce a fully functional commercial chip, the coalition claims.

Utilizing a RISC-V based processor core, the chip, called Earl Grey, includes a number of built-in hardware security and cryptography modules, all working together in a self-contained microprocessor. The project began back in 2019 with a coalition of companies, started by Google and shepherded by the nonprofit lowRISC in Cambridge, United Kingdom. Modeled after open-source software projects, it has been developed by contributors from around the world, both official affiliates of the project and independent coders. Today’s announcement is the culmination of five years of work.

Open source “just takes over because it has certain valuable properties... I think we’re seeing the beginning of this now with silicon.”—Dominic Rizzo, zeroRISC

“This chip is very, very exciting,” says OpenTitan cocreator and CEO of coalition partner zeroRISC Dominic Rizzo. “But there’s a much bigger thing here, which is the development of this whole new type of methodology. Instead of a traditional…command and control style structure, this is distributed.”

The methodology they have developed is called Silicon Commons. Open-source hardware design faces challenges that open-source software didn’t, such as greater costs, a smaller professional community, and an inability to supply bug fixes in patches after the product is released, explains lowRISC CEO Gavin Ferris. The Silicon Commons framework provides rules for documentation, predefined interfaces, and quality standards, as well as the governance structure laying out how the different partners make decisions as a collective.

Another key to the project’s success, Ferris says, was picking a problem that all the partners would have an incentive to keep participating in over the course of the five years of development. Hardware security was the right choice because of its commercial importance as well as its particular fit to the open-source model. There’s a notion in cryptography known as Kerckhoffs’s principle, which states that the only thing that should actually be secret in a cryptosystem is the secret key itself. Open-sourcing the entire protocol makes sure the cryptosystem conforms to this rule.

What Is a Hardware Root-of-Trust?

OpenTitan uses a hardware security protocol known as a root of trust (RoT). The idea is to provide an on-chip source of cryptographic keys that is inaccessible remotely. Because it’s otherwise inaccessible, the system can trust that it hasn’t been tampered with, providing a basis to build security on. “Root of Trust means that at the end of the day, there is something that we both believe in,” explains Ravi Subrahmanyan, senior director of integrated circuit design at Analog Devices, who was not involved in the effort. Once there is something both people agree on, a trusted secure connection can be established.

Conventional, proprietary chips can also leverage RoT technology. Open-sourcing it provides an extra layer of trust, proponents argue. Since anyone can inspect and probe the design, the theory is that bugs are more likely to get noticed and the bug fixes can be verified. “The openness is a good thing,” says Subrahmanyan. “Because for example, let’s say a proprietary implementation has some problem. I won’t necessarily know, right? I’m at their mercy as to whether they’re going to tell me or not.”

This kind of on-chip security is especially relevant in devices forming the Internet of Things (IoT), which suffer from unaddressed security challenges. ZeroRISC and its partners will open up sales to IoT markets via an early-access program, and they anticipate broad adoption in that sphere.

Rizzo and Ferris believe their chip offers a template for open-source hardware development that other collaborations will replicate. On top of providing transparent security, open-sourcing saves companies money by allowing them to reuse hardware components rather than having to independently develop proprietary versions of the same thing. It also opens the door for many more partners to participate in the effort, including academic institutions such as OpenTitan coalition partner ETH Zurich. Thanks to academic involvement, OpenTitan was able to incorporate cryptography protocols that are safe against future quantum computers.

“Once the methodology has been proven, others will pick it up,” Rizzo says. “If you look at what’s happened with open-source software, first, people thought it was kind of an edge pursuit, and then it ended up running almost every mobile phone. It just takes over because it has certain valuable properties. And so I think we’re seeing the beginning of this now with silicon.”
