FreshRSS


Microsoft breaks some Linux dual-boots in a recent Windows update

By: Liam Dawe
August 21, 2024 at 14:22
Do you dual-boot Windows and Linux? Well, a recent Windows update seems to have been a bit messy and may have broken the ability to boot into Linux, causing an alarming message to display: "Something has gone seriously wrong".


Read the full article on GamingOnLinux.

  • ✇Android Authority, by Mishaal Rahman

Google Play will no longer pay to discover vulnerabilities in popular Android apps

August 19, 2024 at 09:39

  • Google has announced that it is winding down the Google Play Security Reward Program.
  • The program was introduced in late 2017 to incentivize security researchers to find and responsibly disclose vulnerabilities in popular Android apps.
  • Google says it is winding down the program due to a decrease in actionable vulnerabilities reported by security researchers.


Security vulnerabilities are lurking in most of the apps you use on a day-to-day basis; there’s just no way for most companies to preemptively fix every possible security issue because of human error, deadlines, lack of resources, and a multitude of other factors. That’s why many organizations run bug bounty programs to get external help with fixing these issues. The Google Play Security Reward Program (GPSRP) is an example of a bug bounty program that paid security researchers to find vulnerabilities in popular Android apps, but it’s being shut down later this month.

Google announced the Google Play Security Reward Program back in October 2017 as a way to incentivize security researchers to find and, most importantly, responsibly disclose vulnerabilities in popular Android apps distributed through the Google Play Store.

When the GPSRP first launched, it was limited in scope: researchers could only submit eligible vulnerabilities affecting applications from a small number of participating developers. Eligible vulnerabilities included those leading to remote code execution or theft of insecure private data, with payouts initially capped at $5,000 for the former and $1,000 for the latter.

Over the years, the scope of the Google Play Security Reward Program expanded to cover developers of some of the biggest Android apps, such as Airbnb, Alibaba, Amazon, Dropbox, Facebook, Grammarly, Instacart, Line, Lyft, Opera, PayPal, Pinterest, Shopify, Snapchat, Spotify, Telegram, Tesla, TikTok, Tinder, VLC, and Zomato, among many others.

In August 2019, Google opened up the GPSRP to cover all apps in Google Play with at least 100 million installations, even if they didn’t have their own vulnerability disclosure or bug bounty program. In July 2019, the rewards were increased to a maximum of $20,000 for remote code execution bugs and $3,000 for bugs that led to the theft of insecure private data or access to protected app components.

Google Play Security Reward Program eligible vulnerabilities

Credit: Mishaal Rahman / Android Authority

The purpose of the Google Play Security Reward Program was simple: Google wanted to make the Play Store a more secure destination for Android apps. According to the company, vulnerability data they collected from the program was used to help create automated checks that scanned all apps available in Google Play for similar vulnerabilities. In 2019, Google said these automated checks helped more than 300,000 developers fix more than 1,000,000 apps on Google Play. Thus, the downstream effect of the GPSRP is that fewer vulnerable apps are distributed to Android users.

However, Google has now decided to wind down the Google Play Security Reward Program. In an email to participating developers, such as Sean Pesce, the company announced that the GPSRP will end on August 31st.

The reason Google gave is that the program has seen a decrease in the number of actionable vulnerabilities reported. The company credits this success to the “overall increase in the Android OS security posture and feature hardening efforts.”

The full email sent to developers is reproduced below:

“Dear Researchers,

 

I hope this email finds you well. I am writing to express my sincere gratitude to all of you who have submitted bugs to the Google Play Security Reward Program over the past few years. Your contributions have been invaluable in helping us to improve the security of Android and Google Play.

 

As a result of the overall increase in the Android OS security posture and feature hardening efforts, we’ve seen fewer actionable vulnerabilities reported by the research community. Due to this decrease in actionable vulnerabilities reported, we are winding down the GPSRP program. The GPSRP program will end on August 31st. Any reports submitted before then will be triaged by September 15th. Final reward decisions will be made before September 30th when the program is officially discontinued. Final payments may take a few weeks to process.

 

I want to assure you that all of your reports will be reviewed and addressed before the program ends. We greatly value your input and want to make sure that any issues you have identified are resolved.

 

Thank you again for your support of the GPSRP program. We hope that you will continue working with us, on programs like the Android and Google Devices Security Reward Program.

 

Best regards,

Tony

On behalf of the Android Security Team”

In September of 2018, nearly a year after the GPSRP was announced, Google said that researchers had reported over 30 vulnerabilities through the program, earning a combined bounty of over $100k. Approximately a year later, in August of 2019, Google said that the program had paid out over $265k in bounties.

As far as we know, the company hasn’t disclosed how much they’ve paid out to security researchers since then, but we’d be surprised if the number isn’t notably higher than $265k given how long it’s been since the last disclosure and the number of popular apps in the crosshairs of security researchers.

Google shutting down this program is a mixed bag for users. On one hand, it means that popular apps have largely gotten their act together, but on the other hand, it means that some security researchers won’t have the incentive to disclose any future vulnerabilities responsibly, especially if those vulnerabilities impact an app made by a developer who doesn’t run their own bug bounty program.

  • ✇Pocketables, by Paul E King

Are passkeys really any good?

August 6, 2024 at 16:06

Google’s web-based Home implementation evidently now requires passkeys. Passkeys are yet another hurdle to prevent hackers from gaining entry to your stuff, and they involve a second factor of authentication, in this case my Pixel 8 Pro and a thumbprint.

Something happened when I attempted to use my phone as a passkey: it failed. Failed hard, claiming it was not near the computer I was using, and I had to repeat the steps and pay attention.

The passkey request on your phone says only that something has requested access; you have no option but to tap the box, and then it asks for your thumbprint or other unlock – at no point on the passkey screen I was presented with was there an option of “hell no, this isn’t me.”

I attempted to recreate the steps because, well, I wanted either a screenshot or a picture of what was happening, but the computer I am posting this on now has an inability to send a passkey request using Edge (Chrome has its 24-hour token, so that’s not happening again).

Passkey not connecting: stuck in a “waiting for phone to respond” / “phone doesn’t have a request” loop

The inability to trigger a request for a passkey unlock is concerning enough, but having no clear label of what is requesting it is worse. As a note, there is a site name listed (google.com) on the QR-code passkey unlock I managed to force, since the normal prompt would never actually do anything.

There’s just a standard-looking fingerprint unlock with small text saying something wants to identify me, and no visible way to decline and move on.

Pretty sure this lack of text to let people know their account is going to be accessed somewhere else is going to come back and bite someone on the butt.

“What? oh ignore that just unlock your phone and tell me what you see…” $15,000 later…

Would really love to see a more verbose implementation that includes a “you’re unlocking your account on a different device, do you really want to do this?” message – or something similar.

Are passkeys really any good? by Paul E King first appeared on Pocketables.

  • ✇Semiconductor Engineering

A Generic Approach For Fuzzing Arbitrary Hypervisors

A technical paper titled “HYPERPILL: Fuzzing for Hypervisor-bugs by Leveraging the Hardware Virtualization Interface” was presented at the August 2024 USENIX Security Symposium by researchers at EPFL, Boston University, and Zhejiang University.

Abstract:

“The security guarantees of cloud computing depend on the isolation guarantees of the underlying hypervisors. Prior works have presented effective methods for automatically identifying vulnerabilities in hypervisors. However, these approaches are limited in scope. For instance, their implementation is typically hypervisor-specific and limited by requirements for detailed grammars, access to source-code, and assumptions about hypervisor behaviors. In practice, complex closed-source and recent open-source hypervisors are often not suitable for off-the-shelf fuzzing techniques.

HYPERPILL introduces a generic approach for fuzzing arbitrary hypervisors. HYPERPILL leverages the insight that although hypervisor implementations are diverse, all hypervisors rely on the identical underlying hardware-virtualization interface to manage virtual-machines. To take advantage of the hardware-virtualization interface, HYPERPILL makes a snapshot of the hypervisor, inspects the snapshotted hardware state to enumerate the hypervisor’s input-spaces, and leverages feedback-guided snapshot-fuzzing within an emulated environment to identify vulnerabilities in arbitrary hypervisors. In our evaluation, we found that beyond being the first hypervisor-fuzzer capable of identifying vulnerabilities in arbitrary hypervisors across all major attack-surfaces (i.e., PIO/MMIO/Hypercalls/DMA), HYPERPILL also outperforms state-of-the-art approaches that rely on access to source-code, due to the granularity of feedback provided by HYPERPILL’s emulation-based approach. In terms of coverage, HYPERPILL outperformed past fuzzers for 10/12 QEMU devices, without the API hooking or source-code instrumentation techniques required by prior works. HYPERPILL identified 26 new bugs in recent versions of QEMU, Hyper-V, and macOS Virtualization Framework across four device-categories.”
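
The core loop described above (snapshot, inject an input through one of the hardware-virtualization input spaces, collect feedback) can be sketched roughly as follows. This is only a conceptual outline, not HYPERPILL's implementation; the three callbacks are hypothetical stand-ins for an emulator interface.

    import random

    def snapshot_fuzz(restore_snapshot, inject_input, get_coverage, iterations=10_000):
        """Conceptual feedback-guided snapshot-fuzzing loop (illustrative only).

        restore_snapshot(): reset the emulated hypervisor to its snapshotted state.
        inject_input(data): deliver bytes through one input space (PIO/MMIO/hypercall/DMA).
        get_coverage():     return the set of code edges observed for the last input.
        """
        corpus = [b"\x00"]      # seed inputs
        seen_edges = set()      # global coverage map used as feedback

        for _ in range(iterations):
            data = bytearray(random.choice(corpus))
            # simple mutation: flip one bit and maybe append a few bytes
            data[random.randrange(len(data))] ^= 1 << random.randrange(8)
            data += bytes(random.randrange(4))

            restore_snapshot()               # every test starts from the same state
            inject_input(bytes(data))
            new_edges = get_coverage() - seen_edges
            if new_edges:                    # keep inputs that reached new code
                seen_edges |= new_edges
                corpus.append(bytes(data))
        return corpus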

Find the technical paper here. Published August 2024. Distinguished Paper Award Winner.

Bulekov, Alexander, Qiang Liu, Manuel Egele, and Mathias Payer. “HYPERPILL: Fuzzing for Hypervisor-bugs by Leveraging the Hardware Virtualization Interface.” In 33rd USENIX Security Symposium (USENIX Security 24). 2024.

Further Reading
SRAM Security Concerns Grow
Volatile memory threat increases as chips are disaggregated into chiplets, making it easier to isolate memory and slow data degradation.

The post A Generic Approach For Fuzzing Arbitrary Hypervisors appeared first on Semiconductor Engineering.

  • ✇Semiconductor Engineering

A Novel Attack For Depleting DNN Model Inference With Runtime Code Fault Injections

A technical paper titled “Yes, One-Bit-Flip Matters! Universal DNN Model Inference Depletion with Runtime Code Fault Injection” was presented at the August 2024 USENIX Security Symposium by researchers at Peng Cheng Laboratory, Shanghai Jiao Tong University, CSIRO’s Data61, University of Western Australia, and University of Waterloo.

Abstract:

“We propose, FrameFlip, a novel attack for depleting DNN model inference with runtime code fault injections. Notably, Frameflip operates independently of the DNN models deployed and succeeds with only a single bit-flip injection. This fundamentally distinguishes it from the existing DNN inference depletion paradigm that requires injecting tens of deterministic faults concurrently. Since our attack performs at the universal code or library level, the mandatory code snippet can be perversely called by all mainstream machine learning frameworks, such as PyTorch and TensorFlow, dependent on the library code. Using DRAM Rowhammer to facilitate end-to-end fault injection, we implement Frameflip across diverse model architectures (LeNet, VGG-16, ResNet-34 and ResNet-50) with different datasets (FMNIST, CIFAR-10, GTSRB, and ImageNet). With a single bit fault injection, Frameflip achieves high depletion efficacy that consistently renders the model inference utility as no better than guessing. We also experimentally verify that identified vulnerable bits are almost equally effective at depleting different deployed models. In contrast, transferability is unattainable for all existing state-of-the-art model inference depletion attacks. Frameflip is shown to be evasive against all known defenses, generally due to the nature of current defenses operating at the model level (which is model-dependent) in lieu of the underlying code level.”
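
FrameFlip's flip lands in shared framework library code rather than in model weights, but the underlying mechanic is the same Rowhammer primitive: a single bit of a stored value changes. As a small, generic illustration of how much one flipped bit can matter (this is just bit-flip arithmetic on a 32-bit float, not the attack itself):

    import struct

    def flip_bit(value: float, bit: int) -> float:
        """Flip one bit in the IEEE-754 single-precision encoding of `value`."""
        (as_int,) = struct.unpack("<I", struct.pack("<f", value))
        (flipped,) = struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))
        return flipped

    weight = 0.75
    print(flip_bit(weight, 30))  # exponent bit flipped: 0.75 becomes roughly 2.6e+38
    print(flip_bit(weight, 0))   # mantissa LSB flipped: the value barely changes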

Find the technical paper here. Published August 2024. Distinguished Paper Award Winner.

Li, Shaofeng, Xinyu Wang, Minhui Xue, Haojin Zhu, Zhi Zhang, Yansong Gao, Wen Wu, and Xuemin Sherman Shen. “Yes, One-Bit-Flip Matters! Universal DNN Model Inference Depletion with Runtime Code Fault Injection.” In Proceedings of the 33rd USENIX Security Symposium. 2024.

Related Reading
Why It’s So Hard To Secure AI Chips
Much of the hardware is the same, but AI systems have unique vulnerabilities that require novel defense strategies.

The post A Novel Attack For Depleting DNN Model Inference With Runtime Code Fault Injections appeared first on Semiconductor Engineering.

Uncovering A Significant Residual Attack Surface For Cross-Privilege Spectre-V2 Attacks

A technical paper titled “InSpectre Gadget: Inspecting the Residual Attack Surface of Cross-privilege Spectre v2” was presented at the August 2024 USENIX Security Symposium by researchers at Vrije Universiteit Amsterdam.

Abstract:

“Spectre v2 is one of the most severe transient execution vulnerabilities, as it allows an unprivileged attacker to lure a privileged (e.g., kernel) victim into speculatively jumping to a chosen gadget, which then leaks data back to the attacker. Spectre v2 is hard to eradicate. Even on last-generation Intel CPUs, security hinges on the unavailability of exploitable gadgets. Nonetheless, with (i) deployed mitigations—eIBRS, no-eBPF, (Fine)IBT—all aimed at hindering many usable gadgets, (ii) existing exploits relying on now-privileged features (eBPF), and (iii) recent Linux kernel gadget analysis studies reporting no exploitable gadgets, the common belief is that there is no residual attack surface of practical concern.

In this paper, we challenge this belief and uncover a significant residual attack surface for cross-privilege Spectre-v2 attacks. To this end, we present InSpectre Gadget, a new gadget analysis tool for in-depth inspection of Spectre gadgets. Unlike existing tools, ours performs generic constraint analysis and models knowledge of advanced exploitation techniques to accurately reason over gadget exploitability in an automated fashion. We show that our tool can not only uncover new (unconventionally) exploitable gadgets in the Linux kernel, but that those gadgets are sufficient to bypass all deployed Intel mitigations. As a demonstration, we present the first native Spectre-v2 exploit against the Linux kernel on last-generation Intel CPUs, based on the recent BHI variant and able to leak arbitrary kernel memory at 3.5 kB/sec. We also present a number of gadgets and exploitation techniques to bypass the recent FineIBT mitigation, along with a case study on a 13th Gen Intel CPU that can leak kernel memory at 18 bytes/sec.”

Find the technical paper here. Published August 2024. Distinguished Paper Award Winner.  Find additional information here on VU Amsterdam’s site.

Wiebing, Sander, Alvise de Faveri Tron, Herbert Bos, and Cristiano Giuffrida. “InSpectre Gadget: Inspecting the residual attack surface of cross-privilege Spectre v2.” In USENIX Security. 2024.

Further Reading
Defining Chip Threat Models To Identify Security Risks
Not every device has the same requirements, and even the best security needs to adapt.

The post Uncovering A Significant Residual Attack Surface For Cross-Privilege Spectre-V2 Attacks appeared first on Semiconductor Engineering.

Data Memory-Dependent Prefetchers Pose SW Security Threat By Breaking Cryptographic Implementations

A technical paper titled “GoFetch: Breaking Constant-Time Cryptographic Implementations Using Data Memory-Dependent Prefetchers” was presented at the August 2024 USENIX Security Symposium by researchers at University of Illinois Urbana-Champaign, University of Texas at Austin, Georgia Institute of Technology, University of California Berkeley, University of Washington, and Carnegie Mellon University.

Abstract:

“Microarchitectural side-channel attacks have shaken the foundations of modern processor design. The cornerstone defense against these attacks has been to ensure that security-critical programs do not use secret-dependent data as addresses. Put simply: do not pass secrets as addresses to, e.g., data memory instructions. Yet, the discovery of data memory-dependent prefetchers (DMPs)—which turn program data into addresses directly from within the memory system—calls into question whether this approach will continue to remain secure.

This paper shows that the security threat from DMPs is significantly worse than previously thought and demonstrates the first end-to-end attacks on security-critical software using the Apple m-series DMP. Undergirding our attacks is a new understanding of how DMPs behave which shows, among other things, that the Apple DMP will activate on behalf of any victim program and attempt to “leak” any cached data that resembles a pointer. From this understanding, we design a new type of chosen-input attack that uses the DMP to perform end-to-end key extraction on popular constant-time implementations of classical (OpenSSL Diffie-Hellman Key Exchange, Go RSA decryption) and post-quantum cryptography (CRYSTALS-Kyber and CRYSTALS-Dilithium).”
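
The "do not pass secrets as addresses" rule that DMPs undermine can be made concrete with a toy sketch. Python gives no real timing guarantees and constant-time code is normally written at a much lower level, so treat this purely as an illustration of the structure; the table and byte-sized secret are invented for the example.

    SBOX = list(range(256))  # stand-in lookup table

    def lookup_leaky(secret_byte: int) -> int:
        # The secret chooses which address is loaded, so cache state (or a
        # data memory-dependent prefetcher) can reveal information about it.
        return SBOX[secret_byte]

    def lookup_masked(secret_byte: int) -> int:
        # Touch every entry and combine them arithmetically, so the sequence of
        # addresses accessed does not depend on the secret. (In real constant-time
        # code even the comparison below would be done branchlessly.)
        result = 0
        for i, value in enumerate(SBOX):
            mask = -(1 if i == secret_byte else 0) & 0xFF  # 0xFF on match, else 0
            result |= value & mask
        return result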

Find the technical paper here. Published August 2024.

Chen, Boru, Yingchen Wang, Pradyumna Shome, Christopher W. Fletcher, David Kohlbrenner, Riccardo Paccagnella, and Daniel Genkin. “GoFetch: Breaking constant-time cryptographic implementations using data memory-dependent prefetchers.” In Proc. USENIX Secur. Symp, pp. 1-21. 2024.

Further Reading
Chip Security Now Depends On Widening Supply Chain
How tighter HW-SW integration and increasing government involvement are changing the security landscape for chips and systems.

 

The post Data Memory-Dependent Prefetchers Pose SW Security Threat By Breaking Cryptographic Implementations appeared first on Semiconductor Engineering.

  • ✇Semiconductor Engineering

A New Low-Cost HW-Counter-Based RowHammer Mitigation Technique

A technical paper titled “ABACuS: All-Bank Activation Counters for Scalable and Low Overhead RowHammer Mitigation” was presented at the August 2024 USENIX Security Symposium by researchers at ETH Zurich.

Abstract:

“We introduce ABACuS, a new low-cost hardware-counter-based RowHammer mitigation technique that performance-, energy-, and area-efficiently scales with worsening RowHammer vulnerability. We observe that both benign workloads and RowHammer attacks tend to access DRAM rows with the same row address in multiple DRAM banks at around the same time. Based on this observation, ABACuS’s key idea is to use a single shared row activation counter to track activations to the rows with the same row address in all DRAM banks. Unlike state-of-the-art RowHammer mitigation mechanisms that implement a separate row activation counter for each DRAM bank, ABACuS implements fewer counters (e.g., only one) to track an equal number of aggressor rows.

Our comprehensive evaluations show that ABACuS securely prevents RowHammer bitflips at low performance/energy overhead and low area cost. We compare ABACuS to four state-of-the-art mitigation mechanisms. At a near-future RowHammer threshold of 1000, ABACuS incurs only 0.58% (0.77%) performance and 1.66% (2.12%) DRAM energy overheads, averaged across 62 single-core (8-core) workloads, requiring only 9.47 KiB of storage per DRAM rank. At the RowHammer threshold of 1000, the best prior low-area-cost mitigation mechanism incurs 1.80% higher average performance overhead than ABACuS, while ABACuS requires 2.50× smaller chip area to implement. At a future RowHammer threshold of 125, ABACuS performs very similarly to (within 0.38% of the performance of) the best prior performance- and energy-efficient RowHammer mitigation mechanism while requiring 22.72× smaller chip area. We show that ABACuS’s performance scales well with the number of DRAM banks. At the RowHammer threshold of 125, ABACuS incurs 1.58%, 1.50%, and 2.60% performance overheads for 16-, 32-, and 64-bank systems across all single-core workloads, respectively. ABACuS is freely and openly available at https://github.com/CMU-SAFARI/ABACuS.”
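
The key idea, a single activation counter shared by all banks for a given row address, can be sketched in a few lines. This is a toy model under an assumed threshold and simplified reset behavior, not the paper's exact mechanism.

    from collections import defaultdict

    class SharedRowCounter:
        """Toy sketch: one activation counter per row address, shared across banks,
        instead of a separate counter per (bank, row) pair."""

        def __init__(self, threshold: int):
            self.threshold = threshold
            self.counts = defaultdict(int)   # row address -> shared activation count

        def on_activate(self, bank: int, row: int) -> bool:
            """Record an activation; return True when a mitigation action (such as
            refreshing neighboring rows) should be triggered for this row address."""
            self.counts[row] += 1
            if self.counts[row] >= self.threshold:
                self.counts[row] = 0         # simplified reset after mitigation
                return True
            return False

    # With per-bank counters, hammering the same row address in 16 banks would need
    # 16 counters; here a single entry tracks the combined pressure on that address.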

Find the technical paper here.

Olgun, Ataberk, Yahya Can Tugrul, Nisa Bostanci, Ismail Emir Yuksel, Haocong Luo, Steve Rhyner, Abdullah Giray Yaglikci, Geraldo F. Oliveira, and Onur Mutlu. “Abacus: All-bank activation counters for scalable and low overhead rowhammer mitigation.” In USENIX Security. 2024.

Further Reading
Securing DRAM Against Evolving Rowhammer Threats
A multi-layered, system-level approach is crucial to DRAM protection.

The post A New Low-Cost HW-Counter-Based RowHammer Mitigation Technique appeared first on Semiconductor Engineering.

An AWS Configuration Issue Could Expose Thousands of Web Apps

August 21, 2024 at 00:00
Amazon has updated its instructions for how customers should more securely implement AWS's traffic-routing service known as Application Load Balancer, but it's not clear everyone will get the memo.

  • ✇IEEE Spectrum, by Dina Genkina

NIST Announces Post-Quantum Cryptography Standards

August 13, 2024 at 12:01


Today, almost all data on the Internet, including bank transactions, medical records, and secure chats, is protected with an encryption scheme called RSA (named after its creators Rivest, Shamir, and Adleman). This scheme is based on a simple fact—it is virtually impossible to calculate the prime factors of a large number in a reasonable amount of time, even on the world’s most powerful supercomputer. Unfortunately, large quantum computers, if and when they are built, would find this task a breeze, thus undermining the security of the entire Internet.

Luckily, quantum computers are only better than classical ones at a select class of problems, and there are plenty of encryption schemes where quantum computers don’t offer any advantage. Today, the U.S. National Institute of Standards and Technology (NIST) announced the standardization of three post-quantum cryptography encryption schemes. With these standards in hand, NIST is encouraging computer system administrators to begin transitioning to post-quantum security as soon as possible.

“Now our task is to replace the protocol in every device, which is not an easy task.” —Lily Chen, NIST

These standards are likely to be a big element of the Internet’s future. NIST’s previous cryptography standards, developed in the 1970s, are used in almost all devices, including Internet routers, phones, and laptops, says Lily Chen, head of the cryptography group at NIST, who led the standardization process. But adoption will not happen overnight.

“Today, public key cryptography is used everywhere in every device,” Chen says. “Now our task is to replace the protocol in every device, which is not an easy task.”

Why we need post-quantum cryptography now

Most experts believe large-scale quantum computers won’t be built for at least another decade. So why is NIST worried about this now? There are two main reasons.

First, many devices that use RSA security, like cars and some IoT devices, are expected to remain in use for at least a decade. So they need to be equipped with quantum-safe cryptography before they are released into the field.

“For us, it’s not an option to just wait and see what happens. We want to be ready and implement solutions as soon as possible.” —Richard Marty, LGT Financial Services

Second, a nefarious individual could potentially download and store encrypted data today, and decrypt it once a large enough quantum computer comes online. This concept is called “harvest now, decrypt later,” and by its nature, it poses a threat to sensitive data now, even if that data can only be cracked in the future.

Security experts in various industries are starting to take the threat of quantum computers seriously, says Joost Renes, principal security architect and cryptographer at NXP Semiconductors. “Back in 2017, 2018, people would ask ‘What’s a quantum computer?’” Renes says. “Now, they’re asking ‘When will the PQC standards come out and which one should we implement?’”

Richard Marty, chief technology officer at LGT Financial Services, agrees. “For us, it’s not an option to just wait and see what happens. We want to be ready and implement solutions as soon as possible, to avoid harvest now and decrypt later.”

NIST’s competition for the best quantum-safe algorithm

NIST announced a public competition for the best PQC algorithm back in 2016. They received a whopping 82 submissions from teams in 25 different countries. Since then, NIST has gone through 4 elimination rounds, finally whittling the pool down to four algorithms in 2022.

This lengthy process was a community-wide effort, with NIST taking input from the cryptographic research community, industry, and government stakeholders. “Industry has provided very valuable feedback,” says NIST’s Chen.

These four winning algorithms had intense-sounding names: CRYSTALS-Kyber, CRYSTALS-Dilithium, Sphincs+, and FALCON. Sadly, the names did not survive standardization: The algorithms are now known as Federal Information Processing Standard (FIPS) 203 through 206. FIPS 203, 204, and 205 are the focus of today’s announcement from NIST. FIPS 206, the algorithm previously known as FALCON, is expected to be standardized in late 2024.

The algorithms fall into two categories: general encryption, used to protect information transferred via a public network, and digital signature, used to authenticate individuals. Digital signatures are essential for preventing malware attacks, says Chen.

Every cryptography protocol is based on a math problem that’s hard to solve but easy to check once you have the correct answer. For RSA, it’s factoring large numbers into two primes—it’s hard to figure out what those two primes are (for a classical computer), but once you have one it’s straightforward to divide and get the other.
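
A tiny worked example of that asymmetry, using small illustrative primes (real RSA moduli are built from primes hundreds of digits long):

    # Hard direction (classically): recover p and q from n alone.
    # Easy direction: given n and one prime factor, divide to get the other and check.
    p = 1_000_000_007
    q = 998_244_353
    n = p * q

    recovered_q, remainder = divmod(n, p)
    assert remainder == 0 and recovered_q == q   # verifying a claimed factorization is trivial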

“We have a few instances of [PQC], but for a full transition, I couldn’t give you a number, but there’s a lot to do.” —Richard Marty, LGT Financial Services

Two out of the three schemes already standardized by NIST, FIPS 203 and FIPS 204 (as well as the upcoming FIPS 206), are based on another hard problem, called lattice cryptography. Lattice cryptography rests on hard problems involving lattices, grids of points in many dimensions; finding the shortest vector in such a lattice, or the lattice point closest to a given target, is believed to be intractable even for quantum computers.

The third standardized scheme, FIPS 205, is based on hash functions—in other words, on converting a message to a short fixed-length string that is easy to compute but difficult to reverse.
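
A minimal illustration of that one-way property, using SHA-256 from Python's standard library purely as a stand-in for "hash function" (FIPS 205 itself is a hash-based signature scheme, not this snippet):

    import hashlib

    message = b"transfer 100 to alice"
    digest = hashlib.sha256(message).hexdigest()   # fast and easy in the forward direction
    print(digest)

    # Going backwards there is no shortcut: all an attacker can do is guess candidate
    # messages and re-hash them until one matches.
    def brute_force(target_hex, candidates):
        for candidate in candidates:
            if hashlib.sha256(candidate).hexdigest() == target_hex:
                return candidate
        return None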

The standards include the encryption algorithms’ computer code, instructions for how to implement it, and intended uses. There are three levels of security for each protocol, designed to future-proof the standards in case some weaknesses or vulnerabilities are found in the algorithms.

Lattice cryptography survives alarms over vulnerabilities

Earlier this year, a pre-print published to the arXiv alarmed the PQC community. The paper, authored by Yilei Chen of Tsinghua University in Beijing, claimed to show that lattice-based cryptography, the basis of two out of the three NIST protocols, was not, in fact, immune to quantum attacks. On further inspection, Yilei Chen’s argument turned out to have a flaw—and lattice cryptography is still believed to be secure against quantum attacks.

On the one hand, this incident highlights the central problem at the heart of all cryptography schemes: There is no proof that any of the math problems the schemes are based on are actually “hard.” The only proof, even for the standard RSA algorithms, is that people have been trying to break the encryption for a long time, and have all failed. Since post-quantum cryptography standards, including lattice cryptography, are newer, there is less certainty that no one will find a way to break them.

That said, the failure of this latest attempt only builds on the algorithm’s credibility. The flaw in the paper’s argument was discovered within a week, signaling that there is an active community of experts working on this problem. “The result of that paper is not valid, that means the pedigree of the lattice-based cryptography is still secure,” says NIST’s Lily Chen (no relation to Tsinghua University’s Yilei Chen). “People have tried hard to break this algorithm. A lot of people are trying, they try very hard, and this actually gives us confidence.”

NIST’s announcement is exciting, but the work of transitioning all devices to the new standards has only just begun. It is going to take time, and money, to fully protect the world from the threat of future quantum computers.

“We’ve spent 18 months on the transition and spent about half a million dollars on it,” says Marty of LGT Financial Services. “We have a few instances of [PQC], but for a full transition, I couldn’t give you a number, but there’s a lot to do.”

  • ✇Ars Technica - All content, by Dan Goodin

“Something has gone seriously wrong,” dual-boot systems warn after Microsoft update

August 21, 2024 at 02:17

(Image credit: Getty Images)

Last Tuesday, loads of Linux users—many running packages released as recently as this year—started reporting their devices were failing to boot. Instead, they received a cryptic error message that included the phrase: “Something has gone seriously wrong.”

The cause: an update Microsoft issued as part of its monthly patch release. It was intended to close a 2-year-old vulnerability in GRUB, an open source boot loader used to start up many Linux devices. The vulnerability, with a severity rating of 8.6 out of 10, made it possible for hackers to bypass secure boot, the industry standard for ensuring that devices running Windows or other operating systems don’t load malicious firmware or software during the bootup process. CVE-2022-2601 was discovered in 2022, but for unclear reasons, Microsoft patched it only last Tuesday.

Multiple distros, both new and old, affected

Tuesday’s update left dual-boot devices—meaning those configured to run both Windows and Linux—no longer able to boot into the latter when Secure Boot was enforced. When users tried to load Linux, they received the message: “Verifying shim SBAT data failed: Security Policy Violation. Something has gone seriously wrong: SBAT self-check failed: Security Policy Violation.” Almost immediately, support and discussion forums lit up with reports of the failure.

Read the 10 remaining paragraphs on Ars Technica.

  • ✇Ars Technica - All content, by Dan Goodin

Windows 0-day was exploited by North Korea to install advanced rootkit

August 20, 2024 at 01:37

(Image credit: Getty Images)

A Windows zero-day vulnerability recently patched by Microsoft was exploited by hackers working on behalf of the North Korean government so they could install custom malware that’s exceptionally stealthy and advanced, researchers reported Monday.

The vulnerability, tracked as CVE-2024-38193, was one of six zero-days—meaning vulnerabilities known or actively exploited before the vendor has a patch—fixed in Microsoft’s monthly update release last Tuesday. Microsoft said the vulnerability—in a class known as a "use after free"—was located in AFD.sys, the binary file for what’s known as the ancillary function driver and the kernel entry point for the Winsock API. Microsoft warned that the zero-day could be exploited to give attackers system privileges, the maximum system rights available in Windows and a required status for executing untrusted code.

Lazarus gets access to the Windows kernel

Microsoft warned at the time that the vulnerability was being actively exploited but provided no details about who was behind the attacks or what their ultimate objective was. On Monday, researchers with Gen—the security firm that discovered the attacks and reported them privately to Microsoft—said the threat actors were part of Lazarus, the name researchers use to track a hacking outfit backed by the North Korean government.

Read the 6 remaining paragraphs on Ars Technica.

  • ✇Android Authority, by Stephen Schenck

After robbing you blind, this Android malware erases your phone (Update: Google statement)

August 2, 2024 at 17:44

  • BingoMod is a remote access trojan that uses your phone to set up money transfers.
  • The app is spread via text message, and pretends to be security software.
  • Once it’s done stealing from you, its operators remotely wipe your phone.


Update, August 2, 2024 (04:10 PM ET): Google has reached out to us with a message of reassurance:

Android users are automatically protected against known versions of this malware by Google Play Protect, which is on by default on Android devices with Google Play Services.

Of course, the key word there is “known” versions, and as the team at Cleafy reported, BingoMod is still evolving and working on new tricks to evade detection. Play Protect isn’t going to rest on its laurels, either, so expect this cat-and-mouse game to continue. And for your own part, keep using best practices when it comes to sourcing your apps.

Original article, August 2, 2024 (11:44 AM ET): Getting malware on your smartphone is just a recipe for a bad day, but even within that misery there’s a spectrum of how awful things will be. Some malware may be interested in exploiting its position on your device to send spam texts or mine crypto. But the really dangerous stuff just wants to straight-up steal from you, and the example we’re checking out today has a particularly nasty going-away present for your phone when it’s done.

A remote access trojan (RAT) dubbed BingoMod was first spotted back in May by the researchers at Cleafy (via BleepingComputer). The software is largely spread via SMS-based phishing, where it masquerades as a security tool — one of the icons the app dresses itself up with is that of AVG antivirus. Once on your phone, it requests access to Android Accessibility Services, which it uses to get its hooks in for remotely controlling your device.

Once established, the malware’s goal is setting up money transfers. It steals login data with a keylogger, and confirmation codes by intercepting SMS. And then when it has the credentials and access it needs, the threat actor controlling the malware can start transferring all your savings away. With language support for English, Romanian, and Italian, the app seems targeted at European users, and circumstantial evidence suggests Romanian devs may be behind it.

All this sounds bad, but not that different from plenty of malware, right? Well, BingoMod, it seems, is a little paranoid about being found out. Besides the numerous tricks it uses to evade automatic detection, it’s got a doomsday weapon it’s ready to deploy after achieving its goals and wiping your accounts clean: it wipes your phone.

While BingoMod supports a built-in command for wiping data, that’s limited to external storage, which isn’t going to get it very far. Instead, Cleafy’s team suspects that the people controlling the malware remotely are manually executing these wipes when they’re done stealing from you, just like you’d do yourself before getting rid of an old phone. Presumably, that’s in the goal of destroying evidence of the hack — losing your personal data is just collateral damage.

That’s a fresh kind of awful that we would be very happy never having to deal with. The good news is that you really don’t have to. Get your apps from official sources, don’t install software from sketchy text messages, and you’ll be well on your way to not losing all your data in a malware attack.

I had to physically destroy a cheap tablet this weekend after all data destruction methods including Google Family Link remote wipe failed

July 1, 2024 at 16:44

A while back on Vine I’d picked up a tablet with remarkably minor problems. One of them was that the USB port only worked on the most antiquated chargers I had. I had never investigated setting it up for ADB, as this was for my kiddos on managed accounts and hidden behind a password. Secure enough for a kid’s tablet.

Google Family Link

The device developed a little crack from a drop, which was no big deal; it was still usable. I attempted to instill an understanding of the need for a password, as I assume (erring on the side of caution) managed accounts have some of my information. Even if they don’t, I want my kids to have a lock on their phones so that if they lose them, someone else can’t make calls to <pick country that would cost a lot to call> or use them to call in fake emergency services. Simple security.

The tablet’s main user decides what she’s going to use as her password. Yay! Then she figures out how to set the wallpaper on the lock screen so it shows her her password in case she forgets it. Boo!

And then someone steps on the tablet.

We were left with a tablet whose touch screen was completely unresponsive. I attempted to get into the bootloader, nope; attempted ADB, nope (it didn’t even recognize it); I even looked up the part for the screen, and it was more than the tablet was worth.

Exhausting all methods I knew from my ROM flashing days I resigned myself and went into the Google Family Link program and chose to wipe the tablet as there was no saving this beast.

And nothing happened.

Nothing.

The tablet was connected to our Wi-Fi, powered on, and I could lock and unlock it from Family Link, but the wipe and erase function straight up did nothing. We waited hours. I repeatedly wiped and erased remotely. I made the unit beep for its location to verify that it was getting commands. But it never wiped the tablet.

I spent about two weeks attempting everything I could think of, bearing in mind my rooting days were about 3 phones behind me, and I couldn’t even see that anything was connected when I plugged the tablet into my computer. There was only power drain. Never anything.

Then again, there might not be anything to see if ADB was never turned on… but I couldn’t turn it on. The only thing I didn’t try was a USB keyboard and mouse, and I would have needed a USB-A to C adapter, which has gone off somewhere to hide for a decade.

Deciding this was a gaping glaring security risk it was time to take the mini sledgehammer to it. I whacked it in a bag hard enough to pop the screen off the back, gently removed the battery as I didn’t want to let the magic smoke out, and went about destroying the storage.

That accomplished I felt it was now safe to recycle the rest.

I may have been a bit paranoid thinking her managed account could somehow come back to bite me, but let’s put it simply that I don’t want to find yet another zero day in which a compromised child account manages to gain access to an adult’s OAUTH2 tokens or some such.

After the destruction we went over security measures once again, including that making the lock screen wallpaper your password was a bad idea.

As for stepping on the tablet, that was an accident, and it’s why they got an inexpensive device. For the next one I will make sure to add some sort of remote access app for times like this, since evidently I can’t trust Family Link to do it.

I thought about taking some pictures, but I didn’t really like the destruction. I didn’t want to do this, I just wanted to recycle the thing, but it’s somehow tied to my Google account, however minutely, and I just watched my accountant’s digital life go down in flames due to a scam app, so not really willing to risk it.

I had to physically destroy a cheap tablet this weekend after all data destruction methods including Google Family Link remote wipe failed by Paul E King first appeared on Pocketables.

  • ✇Pocketables, by Paul E King

Chrome triggering camera? How it happened to me / fix

June 26, 2024 at 19:13

Had an interesting thing happen earlier in the day: every time I opened Chrome, my camera notification came on.

I’ll spare you the long and unnecessary SEO-improving pages on why you should be scared of the camera kicking on…

I killed all my Chrome tabs, killed Chrome, and every time I came back in, the camera notification would kick back on.

Earlier in the day I had accessed a web page sent by car insurance people that had me line my car up and take pictures of some damage, and I highly suspect that was where this started as I had to answer yes to allowing Chrome to access the camera to take photos.

App permissions for Chrome

Why Chrome kept accessing the camera even with all the tabs closed, I don’t know. It has no need for it other than to report body damage to my car.

There seemed to be no extensions running, no weirdness, but no matter what I did, the camera-active icon popped on when opening Chrome. To disable it / calm my paranoia down, I went into app permissions and revoked the camera permission I had granted earlier in the day, and now things are back to normal.

Or my phone’s been hacked because T-Mobile wouldn’t get me the critical firmware update until yesterday.

Whatever the case, the fix appears to be to manually revoke permissions.

Chrome triggering camera? How it happened to me / fix by Paul E King first appeared on Pocketables.

  • ✇Semiconductor Engineering, by Lee Wang

Ensure Reliability In Automotive ICs By Reducing Thermal Effects

By: Lee Wang
August 5, 2024 at 09:01

In the relentless pursuit of performance and miniaturization, the semiconductor industry has increasingly turned to 3D integrated circuits (3D-ICs) as a cutting-edge solution. Stacking dies in a 3D assembly offers numerous benefits, including enhanced performance, reduced power consumption, and more efficient use of space. However, this advanced technology also introduces significant thermal dissipation challenges that can impact the electrical behavior, reliability, performance, and lifespan of the chips (figure 1). For automotive applications, where safety and reliability are paramount, managing these thermal effects is of utmost importance.

Fig. 1: Illustration of a 3D-IC with heat dissipation.

3D-ICs have become particularly attractive for safety-critical devices like automotive sensors. Advanced driver-assistance systems (ADAS) and autonomous vehicles (AVs) rely on these compact, high-performance chips to process vast amounts of sensor data in real time. Effective thermal management in these devices is a top priority to ensure that they function reliably under various operating conditions.

The thermal challenges of 3D-ICs in automotive applications

The stacked configuration of 3D-ICs inherently leads to complex thermal dynamics. In traditional 2D designs, heat dissipation occurs across a single plane, making it relatively straightforward to manage. However, in 3D-ICs, multiple active layers generate heat, creating significant thermal gradients and hotspots. These thermal issues can adversely affect device performance and reliability, which is particularly critical in automotive applications where components must operate reliably under extreme temperatures and harsh conditions.

These thermal effects in automotive 3D-ICs can impact the electrical behavior of the circuits, causing timing errors, increased leakage currents, and potential device failure. Therefore, accurate and comprehensive thermal analysis throughout the design flow is essential to ensure the reliability and performance of automotive ICs.

The importance of early and continuous thermal analysis

Traditionally, thermal analysis has been performed at the package and system levels, often as a separate process from IC design. However, with the advent of 3D-ICs, this approach is no longer sufficient.

To address the thermal challenges of 3D-ICs for automotive applications, it is crucial to incorporate die-level thermal analysis early in the design process and continue it throughout the design flow (figure 2). Early-stage thermal analysis can help identify potential hotspots and thermal bottlenecks before they become critical issues, enabling designers to make informed decisions about chiplet placement, power distribution, and cooling strategies. These early decisions reduce the risks of thermal-induced failures, improving the reliability of 3D automotive ICs.

Fig. 2: Die-level detailed thermal analysis using accurate package and boundary conditions should be fully integrated into the ASIC design flow to allow for fast thermal exploration.

Early package design, floorplanning and thermal feasibility analysis

During the initial package design and floorplanning stage, designers can use high-level power estimates and simplified models to perform thermal feasibility studies. These early analyses help identify configurations that are likely to cause thermal problems, allowing designers to rule out problematic designs before investing significant time and resources in detailed implementation.

Fig. 3: Thermal analysis as part of the package design, floorplanning and implementation flows.

For example, thermal analysis can reveal issues such as overlapping heat sources in stacked dies or insufficient cooling paths. By identifying these problems early, designers can explore alternative floorplans and adjust power distribution to mitigate thermal risks. This proactive approach reduces the likelihood of encountering critical thermal issues late in the design process, thereby shortening the overall design cycle.
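
As a minimal sketch of that kind of early feasibility check (not a real thermal solver; the grid, power values, and threshold are invented for illustration), one can overlay coarse per-die power maps of a two-die stack and flag cells where heat sources in different tiers line up vertically:

    import numpy as np

    die0 = np.zeros((8, 8)); die0[2:4, 2:4] = 1.5   # W per cell, e.g. a compute cluster
    die1 = np.zeros((8, 8)); die1[3:5, 3:5] = 1.2   # W per cell, e.g. a memory block

    stacked = die0 + die1                  # vertically aligned power adds up
    hotspots = np.argwhere(stacked > 2.0)  # cells that likely need floorplan changes
    print(hotspots)                        # -> the overlapping cell at (3, 3)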

Iterative thermal analysis throughout design refinement

As the design progresses and more detailed information becomes available, thermal analysis should be performed iteratively to refine the thermal model and verify that the design remains within acceptable thermal limits. At each stage of design refinement, additional details such as power maps, layout geometries and their material properties can be incorporated into the thermal model to improve accuracy.

This iterative approach lets designers continuously monitor and address thermal issues, ensuring that the design evolves in a thermally aware manner. By integrating thermal analysis with other design verification tasks, such as timing and power analysis, designers can achieve a holistic view of the design’s performance and reliability.

A robust thermal analysis tool should support various stages of the design process, providing value from initial concept to final signoff:

  1. Early design planning: At the conceptual stage, designers can apply high-level power estimates to explore the thermal impact of different design options. This includes decisions related to 3D partitioning, die assembly, block and TSV floorplan, interface layer design, and package selection. By identifying potential thermal issues early, designers can make informed decisions that avoid costly redesigns later.
  2. Detailed design and implementation: As designs become more detailed, thermal analysis should be used to verify that the design stays within its thermal budget. This involves analyzing the maturing package and die layout representations to account for their impact on thermally sensitive electrical circuits. Fine-grained power maps are crucial at this stage to capture hotspot effects accurately.
  3. Design signoff: Before finalizing the design, it is essential to perform comprehensive thermal verification. This ensures that the design meets all thermal constraints and reliability requirements. Automated constraints checking and detailed reporting can expedite this process, providing designers with clear insights into any remaining thermal issues.
  4. Connection to package-system analysis: Models from IC-level thermal analysis can be used in thermal analysis of the package and system. The integration lets designers build a streamlined flow through the entire development process of a 3D electronic product.

Tools and techniques for accurate thermal analysis

To effectively manage thermal challenges in automotive ICs, designers need advanced tools and techniques that can provide accurate and fast thermal analysis throughout the design flow. Modern thermal analysis tools are equipped with capabilities to handle the complexity of 3D-IC designs, from early feasibility studies to final signoff.

High-fidelity thermal models

Accurate thermal analysis requires high-fidelity thermal models that capture the intricate details of the 3D-IC assembly. These models should account for non-uniform material properties, fine-grained power distributions, and the thermal impact of through-silicon vias (TSVs) and other 3D features. Advanced tools can generate detailed thermal models based on the actual design geometries, providing a realistic representation of heat flow and temperature distribution.

For instance, a tool like Calibre 3DThermal embeds an optimized custom 3D solver from Simcenter Flotherm to perform precise thermal analysis down to the nanometer scale. By leveraging detailed layer information and accurate boundary conditions, these tools can produce reliable thermal models that reflect the true thermal behavior of the design.

Automation and results viewing

Automation is a key feature of modern thermal analysis tools, enabling designers to perform complex analyses without requiring deep expertise in thermal engineering. An effective thermal analysis tool must offer advanced automation to facilitate use by non-experts. Key automation features include:

  1. Optimized gridding: Automatically applying finer grids in critical areas of the model to ensure high resolution where needed, while using coarser grids elsewhere for efficiency.
  2. Time step automation: In transient analysis, smaller time steps can be automatically generated during power transitions to capture key impacts accurately.
  3. Equivalent thermal properties: Automatically reducing model complexity while maintaining accuracy by applying different bin sizes for critical (hotspot) vs non-critical regions when generating equivalent thermal properties.
  4. Power map compression: Using adaptive bin sizes to compress very large power maps to improve tool performance (a minimal sketch of this adaptive binning appears after figure 4).
  5. Automated reporting: Generating summary reports that highlight key results for easy review and decision-making (figure 4).

Fig. 4: Ways to view thermal analysis results.
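The adaptive binning behind items 3 and 4 can be pictured with the short Python sketch below. The hotspot criterion, tile sizes, and synthetic power map are invented for illustration only; a commercial tool derives them from the actual design data.

```python
import numpy as np

# Illustrative adaptive power-map compression: fine bins where power density
# is high (potential hotspots), coarse bins elsewhere. The hotspot criterion
# and tile sizes are arbitrary choices for this example.

def compress_power_map(power, coarse=8):
    """Return (row, col, tile_size, avg_power) tiles with adaptive bin sizes."""
    threshold = 5.0 * power.mean()            # arbitrary hotspot criterion
    tiles = []
    n = power.shape[0]
    for r in range(0, n, coarse):
        for c in range(0, n, coarse):
            block = power[r:r + coarse, c:c + coarse]
            if block.max() >= threshold:      # keep full resolution in hotspots
                for dr in range(block.shape[0]):
                    for dc in range(block.shape[1]):
                        tiles.append((r + dr, c + dc, 1, float(block[dr, dc])))
            else:                             # one averaged coarse tile elsewhere
                tiles.append((r, c, coarse, float(block.mean())))
    return tiles

rng = np.random.default_rng(0)
power_map = rng.random((64, 64))              # synthetic fine-grained power map
power_map[30:34, 30:34] += 10.0               # synthetic hotspot
tiles = compress_power_map(power_map)
print(f"{power_map.size} cells compressed to {len(tiles)} tiles")
```

The compressed representation keeps full resolution only where temperature gradients matter, which is what lets the solver stay fast without smearing out hotspots.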

Automated thermal analysis tools can also integrate seamlessly with other design verification and implementation tools, providing a unified environment for managing thermal, electrical, and mechanical constraints. This integration ensures that thermal considerations are consistently addressed throughout the design flow, from initial feasibility analysis to final tape-out and even connecting with package-level analysis tools.

Real-world application

The practical benefits of integrated thermal analysis solutions are evident in real-world applications. For instance, a leading research organization, CEA, utilized an advanced thermal analysis tool from Siemens EDA to study the thermal performance of their 3DNoC demonstrator. The high-fidelity thermal model they developed showed a worst-case difference of just 3.75% and an average difference within 2% between simulation and measured data, demonstrating the accuracy and reliability of the tool (figure 5).

Fig. 5: Correlation of simulation versus measured results.

The path forward for automotive 3D-IC thermal management

As the automotive industry continues to embrace advanced technologies, the importance of accurate thermal analysis throughout the design flow of 3D-ICs cannot be overstated. By incorporating thermal analysis early in the design process and iteratively refining thermal models, designers can mitigate thermal risks, reduce design time, and enhance chip reliability.

Advanced thermal analysis tools that integrate seamlessly with the broader design environment are essential for achieving these goals. These tools enable designers to perform high-fidelity thermal analysis, automate complex tasks, and ensure that thermal considerations are addressed consistently from package design, through implementation to signoff.

By embracing these practices, designers can unlock the full potential of 3D-IC technology, delivering innovative, high-performance devices that meet the demands of today’s increasingly complex automotive applications.

For more information about die-level 3D-IC thermal analysis, read Conquer 3DIC thermal impacts with Calibre 3DThermal.

The post Ensure Reliability In Automotive ICs By Reducing Thermal Effects appeared first on Semiconductor Engineering.

Heterogeneity Of 3DICs As A Security Vulnerability

A new technical paper titled “Harnessing Heterogeneity for Targeted Attacks on 3-D ICs” was published by Drexel University.

Abstract
“As 3-D integrated circuits (ICs) increasingly pervade the microelectronics industry, the integration of heterogeneous components presents a unique challenge from a security perspective. To this end, an attack on a victim die of a multi-tiered heterogeneous 3-D IC is proposed and evaluated. By utilizing on-chip inductive circuits and transistors with low voltage threshold (LVT), a die based on CMOS technology is proposed that includes a sensor to monitor the electromagnetic (EM) emissions from the normal function of a victim die, without requiring physical probing. The adversarial circuit is self-powered through the use of thermocouples that supply the generated current to circuits that sense EM emissions. Therefore, the integration of disparate technologies in a single 3-D circuit allows for a stealthy, wireless, and non-invasive side-channel attack. A thin-film thermo-electric generator (TEG) is developed that produces a 115 mV voltage source, which is amplified 5 × through a voltage booster to provide power to the adversarial circuit. An on-chip inductor is also developed as a component of a sensing array, which detects changes to the magnetic field induced by the computational activity of the victim die. In addition, the challenges associated with detecting and mitigating such attacks are discussed, highlighting the limitations of existing security mechanisms in addressing the multifaceted nature of vulnerabilities due to the heterogeneity of 3-D ICs.”

Find the technical paper here. Published June 2024.

Alec Aversa and Ioannis Savidis. 2024. Harnessing Heterogeneity for Targeted Attacks on 3-D ICs. In Proceedings of the Great Lakes Symposium on VLSI 2024 (GLSVLSI ’24). Association for Computing Machinery, New York, NY, USA, 246–251. https://doi.org/10.1145/3649476.3660385.

The post Heterogeneity Of 3DICs As A Security Vulnerability appeared first on Semiconductor Engineering.

How Project 2025 Would Put US Elections at Risk

5. Srpen 2024 v 12:30
Experts say the “nonsensical” policy proposal, which largely aligns with Donald Trump’s agenda, would weaken the US agency tasked with protecting election integrity, critical infrastructure, and more.

Design Flaw Has Microsoft Authenticator Overwriting MFA Accounts, Locking Users Out

Od: msmash
5. Srpen 2024 v 20:50
snydeq writes: CSO Online's Evan Schuman reports on a design flaw in Microsoft Authenticator that causes it to often overwrite authentication accounts when a user adds a new one via QR scan. "But because of the way the resulting lockout happens, the user is not likely to realize the issue resides with Microsoft Authenticator. Instead, the company issuing the authentication is considered the culprit, resulting in wasted corporate helpdesk hours trying to fix an issue not of that company's making." Schuman writes: "The core of the problem? Microsoft Authenticator will overwrite an account with the same username. Given the prominent use of email addresses for usernames, most users' apps share the same username. Google Authenticator and just about every other authenticator app add the name of the issuer -- such as a bank or a car company -- to avoid this issue. Microsoft only uses the username." The flaw appears to have been in place since Authenticator was released in 2016. Users have complained about this issue in the past to no avail. In its two correspondences with Schuman, Microsoft first laid blame on users, then on issuers. Several IT experts confirmed the flaw, with one saying, "It's possible that this problem occurs more often than anyone realizes because [users] don't realize what the cause is. If you haven't picked an authentication app, why would you pick Microsoft?"
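The keying difference at the heart of the report can be shown with a small, hypothetical sketch; this is not Microsoft's or Google's code, and the issuers, account names, and storage model are made up. Keying stored accounts by username alone lets a new enrollment silently overwrite an existing one, while keying by issuer plus username keeps both.

```python
# Toy illustration of the account-keying difference described above.
# Issuers, usernames, and secrets are made up; no vendor's real code is shown.

def add_account(store, issuer, username, secret, key_on_issuer):
    key = (issuer, username) if key_on_issuer else username
    overwritten = key in store
    store[key] = secret
    return overwritten

# Keyed on username only: the second enrollment silently replaces the first.
by_username = {}
add_account(by_username, "ExampleBank", "alice@example.com", "SECRET-1", key_on_issuer=False)
clobbered = add_account(by_username, "ExampleCar", "alice@example.com", "SECRET-2", key_on_issuer=False)
print("username-only store:", len(by_username), "entries, overwrite occurred:", clobbered)

# Keyed on (issuer, username): both accounts survive.
by_issuer = {}
add_account(by_issuer, "ExampleBank", "alice@example.com", "SECRET-1", key_on_issuer=True)
clobbered = add_account(by_issuer, "ExampleCar", "alice@example.com", "SECRET-2", key_on_issuer=True)
print("issuer-aware store:", len(by_issuer), "entries, overwrite occurred:", clobbered)
```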

Read more of this story at Slashdot.

Every Microsoft Employee Is Now Being Judged on Their Security Work

Od: msmash
5. Srpen 2024 v 19:25
Reeling from security and optics issues, Microsoft appears to be trying to correct its story. An anonymous reader shares a report: Microsoft made it clear earlier this year that it was planning to make security its top priority, following years of security issues and mounting criticisms. Starting today, the software giant is now tying its security efforts to employee performance reviews. Kathleen Hogan, Microsoft's chief people officer, has outlined what the company expects of employees in an internal memo obtained by The Verge. "Everyone at Microsoft will have security as a Core Priority," says Hogan. "When faced with a tradeoff, the answer is clear and simple: security above all else." A lack of security focus for Microsoft employees could impact promotions, merit-based salary increases, and bonuses. "Delivering impact for the Security Core Priority will be a key input for managers in determining impact and recommending rewards," Microsoft is telling employees in an internal Microsoft FAQ on its new policy. Microsoft has now placed security as one of its key priorities alongside diversity and inclusion. Both are now required to be part of performance conversations -- internally called a "Connect" -- for every employee, alongside priorities that are agreed upon between employees and their managers.

Read more of this story at Slashdot.

Mac and Windows users infected by software updates delivered over hacked ISP

6. Srpen 2024 v 01:43
(Image credit: Marco Verch Professional Photographer and Speaker)

Hackers delivered malware to Windows and Mac users by compromising their Internet service provider and then tampering with software updates delivered over unsecure connections, researchers said.

The attack, researchers from security firm Volexity said, worked by hacking routers or similar types of device infrastructure of an unnamed ISP. The attackers then used their control of the devices to poison domain name system responses for legitimate hostnames providing updates for at least six different apps written for Windows or macOS. The apps affected were the 5KPlayer, Quick Heal, Rainmeter, Partition Wizard, and those from Corel and Sogou.

These aren’t the update servers you’re looking for

Because the update mechanisms didn’t use TLS or cryptographic signatures to authenticate the connections or downloaded software, the threat actors were able to use their control of the ISP infrastructure to successfully perform machine-in-the-middle (MitM) attacks that directed targeted users to hostile servers rather than the ones operated by the affected software makers. These redirections worked even when users employed non-encrypted public DNS services such as Google’s 8.8.8.8 or Cloudflare’s 1.1.1.1 rather than the authoritative DNS server provided by the ISP.
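For illustration only, the sketch below shows the kind of signed-update check whose absence made these redirects effective, using Ed25519 from the Python cryptography package. It is not the affected vendors' update code; the key handling and payload are placeholders, and in practice the public key would be pinned inside the shipped application.

```python
# Minimal sketch of verifying a downloaded update against a pinned public key.
# Illustrative only; not the affected vendors' code. Payload and keys are placeholders.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# The vendor signs the update offline; only the public key ships with the app.
vendor_key = Ed25519PrivateKey.generate()
update_blob = b"...update payload bytes..."
signature = vendor_key.sign(update_blob)

def install_if_genuine(payload: bytes, sig: bytes, pubkey: Ed25519PublicKey) -> bool:
    """Refuse to install anything that does not verify against the pinned key."""
    try:
        pubkey.verify(sig, payload)
        return True
    except InvalidSignature:
        return False

pinned_pubkey = vendor_key.public_key()
print(install_if_genuine(update_blob, signature, pinned_pubkey))          # True
print(install_if_genuine(b"tampered-by-mitm", signature, pinned_pubkey))  # False
```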

Read 12 remaining paragraphs | Comments

After robbing you blind, this Android malware erases your phone (Update: Google statement)

2. Srpen 2024 v 17:44

  • BingoMod is a remote access trojan that uses your phone to set up money transfers.
  • The app is spread via text message, and pretends to be security software.
  • Once it’s done stealing from you, its operators remotely wipe your phone.


Update, August 2, 2024 (04:10 PM ET): Google has reached out to us with a message of reassurance:

Android users are automatically protected against known versions of this malware by Google Play Protect, which is on by default on Android devices with Google Play Services.

Of course, the key word there is “known” versions, and as the team at Cleafy reported, BingoMod is still evolving and working on new tricks to evade detection. Play Protect isn’t going to rest on its laurels, either, so expect this cat-and-mouse game to continue. And for your own part, keep using best practices when it comes to sourcing your apps.

Original article, August 2, 2024 (11:44 AM ET): Getting malware on your smartphone is just a recipe for a bad day, but even within that misery there’s a spectrum of how awful things will be. Some malware may be interested in exploiting its position on your device to send spam texts or mine crypto. But the really dangerous stuff just wants to straight-up steal from you, and the example we’re checking out today has a particularly nasty going-away present for your phone when it’s done.

A remote access trojan (RAT) dubbed BingoMod was first spotted back in May by the researchers at Cleafy (via BleepingComputer). The software is largely spread via SMS-based phishing, where it masquerades as a security tool — one of the icons the app dresses itself up with is that from AVG antivirus. Once on your phone, it requests access to Android Accessibility Services, which it uses to get its hooks in for remotely controlling your device.

Once established, the malware’s goal is setting up money transfers. It steals login data with a keylogger, and confirmation codes by intercepting SMS. And then when it has the credentials and access it needs, the threat actor controlling the malware can start transferring all your savings away. With language support for English, Romanian, and Italian, the app seems targeted at European users, and circumstantial evidence suggests Romanian devs may be behind it.

All this sounds bad, but not that different from plenty of malware, right? Well, BingoMod, it seems, is a little paranoid about being found out. Besides the numerous tricks it uses to evade automatic detection, it’s got a doomsday weapon it’s ready to deploy after achieving its goals and wiping your accounts clean: it wipes your phone.

While BingoMod supports a built-in command for wiping data, that’s limited to external storage, which isn’t going to get it very far. Instead, Cleafy’s team suspects that the people controlling the malware remotely are manually executing these wipes when they’re done stealing from you, just like you’d do yourself before getting rid of an old phone. Presumably, that’s done with the goal of destroying evidence of the hack; losing your personal data is just collateral damage.

That’s a fresh kind of awful that we would be very happy never having to deal with. The good news is that you really don’t have to. Get your apps from official sources, don’t install software from sketchy text messages, and you’ll be well on your way to not losing all your data in a malware attack.

I had to physically destroy a cheap tablet this weekend after all data destruction methods including Google Family Link remote wipe failed

1. Červenec 2024 v 16:44

A while back on Vine I’d picked up a tablet with remarkably minor problems, one of them being that the USB port only worked on the most antiquated chargers I had. I had never investigated setting it up for ADB, as this was for my kiddos on managed accounts and hidden behind a password. Secure enough for a kid’s tablet.

Google Family Link

The device developed a little crack from a drop, which was no big deal, it was still usable. I attempted to instill the understanding of the need for a password as I assume (erring on the side of caution) managed accounts have some of my information. Even if they don’t I want my kids to have a lock on their phones so if they lose them someone else can’t make calls to <pick country that would cost a lot to call> or be used to call in fake emergency services calls. Simple security.

The tablet’s main user decides what she’s going to use as her password. Yay! Then she figures out how to set the wallpaper on the lock screen so it shows her her password in case she forgets it. Boo!

And then someone steps on the tablet.

We were left with a tablet whose touch screen was completely unresponsive. I attempted to get into the bootloader, nope; attempted ADB, nope (it wasn’t even recognized); I even looked up the replacement part for the screen, and it cost more than the tablet was worth.

Exhausting all methods I knew from my ROM flashing days I resigned myself and went into the Google Family Link program and chose to wipe the tablet as there was no saving this beast.

And nothing happened.

Nothing.

The tablet was connected to our Wi-Fi and powered on; I could lock and unlock it from Family Link, but the wipe and erase function straight up did nothing. We waited hours. I repeatedly wiped and erased remotely. I made the unit beep for its location and to verify that it was getting commands. But it never wiped the tablet.

I spent about two weeks attempting everything I could think of, bearing in mind my rooting days were about 3 phones behind me, and I couldn’t even see that anything was connected when I plugged the tablet into my computer. There was only power drain. Never anything.

Then again, might not be anything if ADB was never turned on… but I couldn’t turn it on. The only thing I didn’t try was a USB Keyboard and mouse and I would have needed a USB-A to C adapter which have gone off somewhere to hide for a decade.

Deciding this was a gaping glaring security risk it was time to take the mini sledgehammer to it. I whacked it in a bag hard enough to pop the screen off the back, gently removed the battery as I didn’t want to let the magic smoke out, and went about destroying the storage.

That accomplished I felt it was now safe to recycle the rest.

I may have been a bit paranoid thinking her managed account could somehow come back to bite me, but let’s put it simply that I don’t want to find yet another zero day in which a compromised child account manages to gain access to an adult’s OAUTH2 tokens or some such.

After the destruction we went over security measures once again and that making the lock screen wallpaper be your password was a bad idea.

As for stepping on the tablet, it was an accident, and accidents are exactly why they got an inexpensive device. On the next one I will make sure to add some sort of remote access app for times when this happens, as evidently I can’t trust Family Link to do it.

I thought about taking some pictures, but I didn’t really like the destruction. I didn’t want to do this, I just wanted to recycle the thing, but it’s somehow tied to my Google account, however minutely, and I just watched my accountant’s digital life go down in flames due to a scam app, so not really willing to risk it.

I had to physically destroy a cheap tablet this weekend after all data destruction methods including Google Family Link remote wipe failed by Paul E King first appeared on Pocketables.

Chrome triggering camera? How it happened to me / fix

26. Červen 2024 v 19:13

Had an interesting thing happen earlier in the day and that is that every time I opened Chrome my camera notification came on.

I’ll save you the long and unnecessary SEO-improving pages of why you should be scared of the camera kicking on…

I killed all my Chrome tabs, killed Chrome itself, and every time I came back in, the camera notification would kick back on.

Earlier in the day I had accessed a web page sent by car insurance people that had me line my car up and take pictures of some damage, and I highly suspect that was where this started as I had to answer yes to allowing Chrome to access the camera to take photos.

App permissions for chrome

Why Chrome kept accessing the camera even with all the tabs closed, I don’t know. It has no need for it other than to report body damage to my car.

There seemed to be no extensions running, no weirdness, but no matter what I did, the camera-active icon popped on when opening Chrome. To disable it / calm my paranoia down, I went into app permissions and revoked the camera permission I had granted earlier in the day, and now things are back to normal.

Or my phone’s been hacked because T-Mobile wouldn’t get me the critical firmware update until yesterday.

Whatever the case, the fix appears to be to manually revoke permissions.

Chrome triggering camera? How it happened to me / fix by Paul E King first appeared on Pocketables.

Secure Low-Cost In-DRAM Trackers For Mitigating Rowhammer (Georgia Tech, Google, Nvidia)

A new technical paper titled “MINT: Securely Mitigating Rowhammer with a Minimalist In-DRAM Tracker” was published by researchers at Georgia Tech, Google, and Nvidia.

Abstract
“This paper investigates secure low-cost in-DRAM trackers for mitigating Rowhammer (RH). In-DRAM solutions have the advantage that they can solve the RH problem within the DRAM chip, without relying on other parts of the system. However, in-DRAM mitigation suffers from two key challenges: First, the mitigations are synchronized with refresh, which means we cannot mitigate at arbitrary times. Second, the SRAM area available for aggressor tracking is severely limited, to only a few bytes. Existing low-cost in-DRAM trackers (such as TRR) have been broken by well-crafted access patterns, whereas prior counter-based schemes require impractical overheads of hundreds or thousands of entries per bank. The goal of our paper is to develop an ultra low-cost secure in-DRAM tracker.

Our solution is based on a simple observation: if only one row can be mitigated at refresh, then we should ideally need to track only one row. We propose a Minimalist In-DRAM Tracker (MINT), which provides secure mitigation with just a single entry. At each refresh, MINT probabilistically decides which activation in the upcoming interval will be selected for mitigation at the next refresh. MINT provides guaranteed protection against classic single and double-sided attacks. We also derive the minimum RH threshold (MinTRH) tolerated by MINT across all patterns. MINT has a MinTRH of 1482 which can be lowered to 356 with RFM. The MinTRH of MINT is lower than a prior counter-based design with 677 entries per bank, and is within 2x of the MinTRH of an idealized design that stores one-counter-per-row. We also analyze the impact of refresh postponement on the MinTRH of low-cost in-DRAM trackers, and propose an efficient solution to make such trackers compatible with refresh postponement.”
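As a rough, unofficial illustration of the single-entry probabilistic idea described in the abstract (not the authors' implementation), the Python sketch below uses reservoir sampling so that each activation within a refresh interval is equally likely to be the row selected for mitigation at the next refresh.

```python
import random

# Rough sketch of a single-entry probabilistic tracker in the spirit of the
# abstract above (not the authors' implementation). Reservoir sampling keeps
# one candidate row, so every activation in the interval has equal probability
# of being the one mitigated at the next refresh.

class SingleEntryTracker:
    def __init__(self):
        self.candidate = None    # the one tracked aggressor row
        self.count = 0           # activations seen in the current interval

    def on_activation(self, row):
        self.count += 1
        # replace the stored row with probability 1/count -> uniform selection
        if random.randrange(self.count) == 0:
            self.candidate = row

    def on_refresh(self):
        selected = self.candidate         # neighbors of this row get refreshed
        self.candidate, self.count = None, 0
        return selected

tracker = SingleEntryTracker()
for row in ["row7", "row7", "row3", "row7", "row9"]:   # activations in one interval
    tracker.on_activation(row)
print("mitigate neighbors of:", tracker.on_refresh())
```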

Find the technical paper here. Preprint published July 2024.

Qureshi, Moinuddin, Salman Qazi, and Aamer Jaleel. “MINT: Securely Mitigating Rowhammer with a Minimalist In-DRAM Tracker.” arXiv preprint arXiv:2407.16038 (2024).

The post Secure Low-Cost In-DRAM Trackers For Mitigating Rowhammer (Georgia Tech, Google, Nvidia) appeared first on Semiconductor Engineering.

A Practical Approach To Inline Memory Encryption And Confidential Computing For Enhanced Data Security

1. Srpen 2024 v 09:12

In today’s technology-driven landscape in which reducing TCO is top of mind, robust data protection is not merely an option but a necessity. As data, both personal and business-specific, is continuously exchanged, stored, and moved across various platforms and devices, the demand for a secure means of data aggregation and trust enhancement is escalating. Traditional data protection strategies of protecting data at rest and data in motion need to be complemented with protection of data in use. This is where the role of inline memory encryption (IME) becomes critical. It acts as a shield for data in use, thus underpinning confidential computing and ensuring data remains encrypted even when in use. This blog post will guide you through a practical approach to inline memory encryption and confidential computing for enhanced data security.

Understanding inline memory encryption and confidential computing

Inline memory encryption offers a smart answer to data security concerns, encrypting data for storage and decoding only during computation. This is the core concept of confidential computing. As modern applications on personal devices increasingly leverage cloud systems and services, data privacy and security become critical. Confidential computing and zero trust are suggested as solutions, providing data in use protection through hardware-based trusted execution environments. This strategy reduces trust reliance within any compute environment and shrinks hackers’ attack surface.

The security model of confidential computing demands data in use protection, in addition to traditional data at rest and data in motion protection. This usually pertains to data stored in off-chip memory such as DDR, regardless of whether it is volatile or non-volatile. Memory encryption means that data is encrypted before it is stored in memory, in either inline or look-aside form. The performance demands of modern memories, which require the encryption to keep pace with the memory path, have given rise to the term inline memory encryption.

There are multiple off-chip memory technologies, and the performance requirements of modern NVMs and DDR memories make inline encryption the most sensible solution. The XTS algorithm using AES or SM4 ciphers and the GCM algorithm are the commonly used cryptographic algorithms for memory encryption. While the XTS algorithm encrypts data solely for confidentiality, the GCM algorithm provides both data encryption and data authentication, but requires extra memory space for metadata storage.
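The trade-off between the two modes can be seen in a few lines of Python using the cryptography package. The keys, tweak, and nonce below are throwaway test values, and a real inline engine implements this in hardware on the memory path rather than in software; the point is simply that XTS output matches the data size, while GCM appends a 16-byte authentication tag that has to be stored as metadata.

```python
# Illustrative software comparison of the XTS and GCM modes named above,
# using the Python "cryptography" package. Keys, tweak, and nonce are
# throwaway test values; real IME engines do this in hardware.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

memory_line = os.urandom(64)               # pretend 64-byte memory line

# AES-XTS: confidentiality only, ciphertext is the same size as the data.
xts_key = os.urandom(64)                   # two 256-bit AES keys concatenated
tweak = (0x1234).to_bytes(16, "little")    # e.g. derived from the physical address
encryptor = Cipher(algorithms.AES(xts_key), modes.XTS(tweak)).encryptor()
xts_ct = encryptor.update(memory_line) + encryptor.finalize()

# AES-GCM: confidentiality plus authentication, at the cost of extra metadata
# (a 16-byte tag per block) that must be stored somewhere in memory.
gcm_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
gcm_ct = AESGCM(gcm_key).encrypt(nonce, memory_line, None)

print(len(memory_line), len(xts_ct), len(gcm_ct))   # 64 64 80
```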

Inline memory encryption is utilized in many systems. When considering inline cipher performance for DDR, the memory performance required for each technology must be taken into account. For instance, LPDDR5 typically necessitates a data path bandwidth of 25 gigabytes per second. An AES operation involves 10 to 14 encryption rounds, implying that these rounds must operate at the memory’s required bandwidth; this is achievable with correct pipelining in the crypto engine. Other considerations include minimizing read path latency, support for narrow bursts, memory-specific features such as the number of outstanding transactions, data-hazard protection between READ and WRITE paths, and so on. Furthermore, side-channel attack protections and data path integrity, a critical factor for robustness in advanced technology nodes, are additional concerns that must be addressed without prohibitive PPA overhead.
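A back-of-the-envelope calculation (illustrative arithmetic only, using the 25 GB/s figure quoted above) shows why pipelining the rounds matters.

```python
# Back-of-the-envelope check of the pipelining argument above (illustrative only).
bandwidth_bytes_per_s = 25e9     # LPDDR5 data-path bandwidth cited above
aes_block_bytes = 16             # one AES block is 128 bits
rounds = 14                      # AES-256 worst case

blocks_per_s = bandwidth_bytes_per_s / aes_block_bytes
print(f"cipher must sustain ~{blocks_per_s / 1e9:.2f} Gblocks/s")

# Fully pipelined, one block completes per clock regardless of round count,
# so a roughly 1.6 GHz engine keeps up. Without pipelining, each block occupies
# the core for all 14 rounds, pushing the required clock toward ~22 GHz.
print(f"pipelined clock   ~{blocks_per_s / 1e9:.2f} GHz")
print(f"unpipelined clock ~{blocks_per_s * rounds / 1e9:.1f} GHz")
```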

Ensuring data security in AI and computational storage

The importance of data security is not limited to traditional computing areas but also extends to AI inference and training, which heavily rely on user data. Given the privacy issues and regulatory demands tied to user data, it’s essential to guarantee the encryption of this data, preventing unauthorized access. This necessitates the application of a trusted execution environment and data encryption whenever it’s transported outside this environment. With the advent of new algorithms that call for data sharing and model refinement among multiple parties, it’s crucial to maintain data privacy by implementing appropriate encryption algorithms.

Equally important is the rapidly evolving field of computational storage. The advent of new applications and increasing demands are pushing the boundaries of conventional storage architecture. Solutions such as flexible and composable storage, software-defined storage, and memory duplication and compression algorithms are being introduced to tackle these challenges. Yet, these solutions introduce security vulnerabilities as storage devices operate on raw disk data. To counter this, storage accelerators must be equipped with encryption and decryption capabilities and should manage operations at the storage nodes.

As our computing landscape continues to evolve, we need to address the escalating demand for robust data protection. Inline memory encryption emerges as a key solution, offering data in use protection for confidential computing, securing both personal and business data.

Rambus Inline Memory Encryption IP provides scalable, versatile, and high-performance inline memory encryption solutions that cater to a wide range of application requirements. The ICE-IP-338 is a FIPS-certified inline cipher engine supporting the AES and SM4 ciphers, as well as the XTS and GCM modes of operation. Building on the ICE-IP-338 IP, the ICE-IP-339 provides essential AXI4 operation support, simplifying system integration for XTS operation and delivering confidentiality protection. The IME-IP-340 IP extends basic AXI4 support to narrow data access granularity, as well as AES GCM operations, delivering confidentiality and authentication. Finally, the most recent offering, the Rambus IME-IP-341, guarantees memory encryption with AES-XTS while supporting the Arm v9 architecture specifications.

For more information, check out my recent IME webinar now available to watch on-demand.

The post A Practical Approach To Inline Memory Encryption And Confidential Computing For Enhanced Data Security appeared first on Semiconductor Engineering.

Keeping Up With New ADAS And IVI SoC Trends

Od: Hezi Saar
1. Srpen 2024 v 09:10

In the automotive industry, AI-enabled automotive devices and systems are dramatically transforming the way SoCs are designed, making high-quality and reliable die-to-die and chip-to-chip connectivity non-negotiable. This article explains how interface IP for die-to-die connectivity, display, and storage can support new developments in automotive SoCs for the most advanced innovations such as centralized zonal architecture and integrated ADAS and IVI applications.

AI-integrated ADAS SoCs

The automotive industry is adopting a new electronic/electric (EE) architecture where a centralized compute module executes multiple applications such as ADAS and in-vehicle infotainment (IVI). With the advent of EVs and more advanced features in the car, the new centralized zonal architecture will help minimize complexity, maximize scalability, and facilitate faster decision-making time. This new architecture is demanding a new set of SoCs on advanced process technologies with very high performance. More traditional monolithic SoCs for single functions like ADAS are giving way to multi-die designs where various dies are connected in a single package and placed in a system to perform a function in the car. While such multi-die designs are gaining adoption, semiconductor companies must remain cost-conscious as these ADAS SoCs will be manufactured at high volumes for a myriad of safety levels. One example is the automated driving central compute system. The system can include modules for the sensor interface, safety management, memory control and interfaces, and dies for CPU, GPU, and AI accelerator, which are then connected via a die-to-die interface such as the Universal Chiplet Interconnect Express (UCIe). Figure 1 illustrates how semiconductor companies can develop SoCs for such systems using multi-die designs. For a base ADAS or IVI SoC, the requirement might just be the CPU die for a level 2 functional safety. A GPU die can be added to the base CPU die for a base ADAS or premium IVI function at a level 2+ driving automation. To allow more compute power for AI workloads, an NPU die can be added to the base CPU or the base CPU and GPU dies for level 3/3+ functional safety. None of these scalable scenarios are possible without a solution for die-to-die connectivity.

Fig. 1: A simplified view of automotive systems using multi-die designs.

The adoption of UCIe for automotive SoCs

The industry has come together to define, develop, and deploy the UCIe standard, a universal interconnect at the package-level. In a recent news release, the UCIe Consortium announced “updates to the standard with additional enhancements for automotive usages – such as predictive failure analysis and health monitoring – and enabling lower-cost packaging implementations.” Figure 2 shows three use cases for UCIe. The first use case is for low-latency and coherency where two Network on a Chip (NoC) are connected via UCIe. This use case is mainly for applications requiring ADAS computing power. The second automotive use case is when memory and IO are split into two separate dies and are then connected to the compute die via CXL and UCIe streaming protocols. The third automotive use case is very similar to what is seen in HPC applications where a companion AI accelerator die is connected to the main CPU die via UCIe.

Fig. 2: Examples of common and new use cases for UCIe in automotive applications.

To enable such automotive use cases, UCIe offers several advantages, all of which are supported by the Synopsys UCIe IP:

  • Latency optimized architecture: Flit-Aware Die-to-Die Interface (FDI) or Raw Die-to-Die Interface (RDI) operate with local 2GHz system clock. Transmitter and receiver FIFOs accommodate phase mismatch between clock domains. There is no clock domain crossing (CDC) between the PHY and Adapter layers for minimum latency. The reference clock has the same frequency for the two dies.
  • Power-optimized architecture: The transmitter provides a CMOS driver without source termination. It offers programmable drive strength without a Feed-Forward Equalizer (FFE). The receiver provides a continuous-time linear equalizer (CTLE) without a VGA or decision feedback equalizer (DFE), clock forwarding without Clock and Data Recovery (CDR), and optional receiver termination.
  • Reliability and test: Signal integrity monitors track the performance of the interconnect through the chip’s lifecycle. This can monitor inaccessible paths in the multi-die package, test and repair the PHY, and execute real time reporting for preventative maintenance.

Synopsys UCIe IP is integrated with Synopsys 3DIC Compiler, a unified exploration-to-signoff platform. The combination eases package design and provides a complete set of IP deliverables, automated UCIe routing for better quality of results, and reference interposer design for faster integration.

Fig. 3: Synopsys 3DIC Compiler.

New automotive SoC design trends for IVI applications

OEMs are attracting consumers by providing the utmost in cockpit experience with high-resolution, 4K, pillar-to-pillar displays. Multi-Stream Transport (MTR) enables a daisy-chained display topology using a single port, consisting of a single GPU, one DP TX controller, and a PHY, to display images on multiple screens in the car. This daisy-chained setup simplifies the display wiring in the car. Figure 4 illustrates how connectivity in the SoC can enable multi-display environments in the car:

  • Row 1: Multiple image sources from the application processor are fed into the daisy-chained display setup via the DisplayPort (DP) MTR interface.
  • Row 2: Multiple image sources from the application processor are fed to the daisy-chained display setup and also to the left or right mirrors, all via the DP MTR interface.
  • Row 3: The same setup as in row 2 can be implemented via the MIPI DSI or embedded DP MTR interfaces, depending on display size and power requirements.

An alternate use case is USB/DP. A single USB port can be used for silicon lifecycle management, sentry mode, test, debug, and firmware download. USB can be used to avoid the need for very large numbers of test pins, speed up test by exceeding GPIO test pin data rates, repeat manufacturing test in-system and in-field, access PVT monitors, and debug.

Fig. 4: Examples of display connectivity in software-defined vehicles.

ISO/SAE 21434 automotive cybersecurity

ISO/SAE 21434 Automotive Cybersecurity is being adopted by industry leaders as mandated by the UNECE R155 regulation. Starting in July 2024, automotive OEMs must comply with the UNECE R155 automotive cybersecurity regulation for all new vehicles in Europe, Japan, and Korea.

Automotive suppliers must develop processes that meet the automotive cybersecurity requirements of ISO/SAE 21434, addressing the cybersecurity perspective in the engineering of electrical and electronic (E/E) systems. Adopting this methodology involves embracing a cybersecurity culture which includes developing security plans, setting security goals, conducting engineering reviews and implementing mitigation strategies.

The industry is expected to move towards enabling cybersecurity risk-managed products to mitigate the risks associated with advancement in connectivity for software-defined vehicles. As a result, automotive IP needs to be ready to support these advancements.

Synopsys ARC HS4xFS Processor IP has achieved ISO/SAE 21434 cybersecurity certification by SGS-TÜV Saar, meeting stringent automotive regulatory requirements designed to protect connected vehicles from malicious cyberattacks. In addition, Synopsys has achieved certification of its IP development process to the ISO/SAE 21434 standard to help ensure its IP products are developed with a security-first mindset through every phase of the product development lifecycle.

Conclusion

The transformation to software-defined vehicles marks a significant shift in the automotive industry, bringing together highly integrated systems and AI to create safer and more efficient vehicles while addressing sophisticated user needs and vendor serviceability. New trends in the automotive industry are presenting opportunities for innovations in ADAS and IVI SoC designs. Centralized zonal architecture, multi-die design, daisy-chained displays, and integration of ADAS/IVI functions in a single SoC are among some of the key trends that the automotive industry is tracking. Synopsys is at the forefront of automotive SoC innovations with a portfolio of silicon-proven automotive IP for the highest levels of functional safety, security, quality, and reliability. The IP portfolio is developed and assessed specifically for ISO 26262 random hardware faults and ASIL D systematic. To minimize cybersecurity risks, Synopsys is developing IP products as per the ISO/SAE 21434 standard to provide automotive SoC developers a safe, reliable, and future proof solution.

The post Keeping Up With New ADAS And IVI SoC Trends appeared first on Semiconductor Engineering.

US Hands Over Russian Cybercriminals in WSJ Reporter Prisoner Swap

Plus: Meta pays $1.4 million in a historic privacy settlement, Microsoft blames a cyberattack for a major Azure outage, and an artist creates a face recognition system to reveal your NYPD “coppelganger.”

Sensitive Illinois Voter Data Exposed by Contractor’s Unsecured Databases

2. Srpen 2024 v 18:34
Social Security numbers, death certificates, voter applications, and other personal data were accessible on the open internet, highlighting the ongoing challenges in election security.

Secret Service May Get Even More Money After Failing To Protect Trump

2. Srpen 2024 v 20:20
Secret Service agents hustle former President Donald Trump offstage after an assassination attempt at a rally in Pennsylvania. | Morgan Phillips/Polaris/Newscom

Less than a month after the attempted assassination of former President Donald Trump, the agency that failed to protect him from harm may get a bigger budget.

On July 13, when Thomas Matthew Crooks shot and wounded Trump during a campaign rally in Pennsylvania, Secret Service agents sprang into action, heroically shielding him from further harm and escorting him from the stage.

But subsequent reporting revealed that the incident was entirely preventable: Rally attendees alerted law enforcement to the presence of a suspicious person more than an hour before he started shooting, and they later saw him climbing on top of a building with a gun. "Trump was on stage for around 10 minutes between the moment Crooks was spotted on the roof with a gun and the moment he fired his first shot," the BBC reported.

After a particularly disastrous appearance before the House Oversight Committee, Secret Service Director Kimberly Cheatle resigned. Appearing before two Senate committees this week, acting Director Ronald Rowe Jr. testified that he was "ashamed" of the agency's failure.

Almost immediately after the shooting, a narrative emerged that the lapse owed to a lack of resources.

At a July 15 White House press briefing, a reporter asked Secretary Alejandro Mayorkas of the Department of Homeland Security—which oversees the Secret Service—"Is the Secret Service stretched too thin?"

"The Secret Service in—in times like this calls upon other resources and capabilities to handle a—a campaign of this magnitude," Mayorkas replied. "And I do intend to speak with members of the Hill with respect to the resources that we need."

"Our agency needs to be adequately resourced in order to serve our current mission requirements and to anticipate future requirements," Cheatle noted in her opening testimony before the House Oversight Committee on July 22. "As of today, the Secret Service has just over 8,000 employees," she told Rep. Stephen Lynch (D–Mass.). "We are still striving toward a number of 9,500 employees, approximately, in order to be able to meet future and emerging needs."

In a letter to Rowe this week, Sens. Chris Murphy (D–Conn.) and Katie Britt (R–Ala.)—respectively the chairman and the ranking member of the Senate Appropriations Subcommittee on Homeland Security—sought to understand the agency's financial needs as the subcommittee drafts an appropriations bill.

"Congress provided more than $190 million to the Secret Service in Fiscal Year 2024, specifically for protection requirements related to the 2024 presidential campaign, plus an additional $22 million above President Biden's budget request for protection-related travel costs," the senators wrote. "Despite this increase, in mid-June, prior to the attempted assassination, the Secret Service submitted a reprogramming notification to our subcommittee detailing its intent to shift $19 million to cover a shortfall for protection-related travel funding." This was in addition to the imminent addition of two vice presidential candidates and their families to the agency's protective purview, plus independent presidential candidate Robert F. Kennedy Jr., whom President Joe Biden added to the list after the shooting.

"As a result, the Secret Service is assuming new protection costs related to the campaign at a time when it already appears to lack sufficient resources to fulfill its protective mission," the senators continue.

But it's not at all clear that a lack of resources was the issue: The agency's budget grew 55 percent in real terms over the last decade, to $3.62 billion, and its workforce grew 33 percent from 2002 to 2019.

It is possible the agency is stretched thin in its duties: The Secret Service is tasked by law with protecting not only the president, vice president, and their immediate families, but also former presidents, vice presidents, and their spouses for life, and their children until age 16. They also protect visiting heads of state and "other distinguished foreign visitors to the United States and official representatives of the United States performing special missions abroad when the President directs that such protection be provided."

"The Secret Service currently protects 36 individuals on a daily basis, as well as world leaders who visit the United States," Cheatle told the House Oversight Committee.

But that's not the agency's only job: Agents are tasked with investigating a number of financial crimes like counterfeiting, money laundering, and identity theft, as well as ransomware attacks, botnets, and "online sexual exploitation and abuse by predators and other criminals, sometimes for financial gain."

It's possible that the Secret Service is doing too many jobs for the amount of resources it enjoys. Perhaps many of its financial and investigative tasks should be shifted to the U.S. Treasury Department, which is where the Secret Service originated before Congress added presidential protection to its plate in 1901. The numbers demonstrate that the agency's problem is not purely financial.

But it's also worth keeping in mind that government agencies, by their nature, do too much, too poorly, and for too much money. The Secret Service, for all the nobility of its mission, is no exception.

The post Secret Service May Get Even More Money After Failing To Protect Trump appeared first on Reason.com.

Will Biden Sleepwalk Into a War With Iran?

1. Srpen 2024 v 20:30
Lebanese mourners carry the coffins of two children, Hassan and Amira Muhammed Fadallah, who were killed in the Israeli drone attack on Beirut on July 30, 2024. | Marwan Naamani/dpa/picture-alliance/Newscom

This week has been especially chaotic for the Middle East. On Saturday, a Lebanese rocket killed 12 children and youth at a soccer game in the Israeli-controlled Golan Heights. (The victims were Syrian citizens with Israeli residency.) On Tuesday night, Israel took revenge for the rocket by killing Fuad Shukr, a commander in the pro-Iranian militia Hezbollah, along with two children.

A few hours later, a bomb killed Ismail Haniyeh, the head of Hamas' political bureau and the lead negotiator with Israel, while he was visiting Tehran for the Iranian president's inauguration. Israel is widely believed to be the culprit. Iran's Supreme Leader Ali Khamenei and Hezbollah leader Hassan Nasrallah have both promised to take revenge.

The same night that Shukr and Haniyeh were killed, U.S. warplanes rained down fire on an Iraqi militia base, killing four pro-Iranian fighters. An anonymous U.S. official told reporters that the militiamen were launching an attack drone that "posed a threat" to U.S. and allied forces. It was not clear whether the Iraqi drone was really aimed at U.S. troops—or Israel.

Soon it may not matter. The Biden administration affirmed again on Wednesday that it will help defend Israel in case of a conflict with Lebanon or Iran, as it did during clashes this April. And the administration has hinted before that it will get involved directly if Israel faces military setbacks in Lebanon. Israeli leaders may have been betting on exactly that outcome.

Unnamed "sources in the security establishment" told The Jerusalem Post that they could have assassinated Haniyeh in Qatar, where he usually lives. Instead, those sources explained, "the choice to carry out the assassination in the heart of Tehran was precisely because Haniyeh was under Iranian security responsibility, which placed Iran at the heart of the world's focus as a host, director, and supplier of terrorism."

In other words, killing Haniyeh was possibly meant to turn the Israel-Hamas war into an international crisis involving Iran and Israel's allies.

Months before the October 2023 attacks, Israeli policy makers had gamed out an Israeli strike leading to a U.S.-Iranian war. The Institute for National Security Studies, a think tank close to the Israeli government, ran a simulation in July 2023 that was eerily similar to the current escalation. The scenario began with an Israeli assassination campaign in Tehran, which provoked Hezbollah and Iraqi militias into attacking Israel and ended with direct U.S. attacks on Iran.

"Former top political and military leaders from Israel, the United States and a number of European countries took part in the simulation," reported the Israeli newspaper Haaretz.

For years before that, Israeli Prime Minister Benjamin Netanyahu and other Israeli leaders had been demanding U.S. support for an attack on Iran's nuclear facilities. It's not hard to understand why. Khamenei has called Israel a cancerous tumor that needs to be excised, and Israeli leaders have in turn said that Iran is the head of an evil octopus, which must be cut off.

The attacks on October 7, 2023, by Hamas seemed to confirm the Israeli perception. Whatever role Iran did or didn't have in planning the attacks—the U.S. government believes that Iranian leaders were just as surprised as everyone else—Iran's allies immediately jumped into the fray, attacking Israel in the name of the Palestinian cause.

And plenty of American politicians want conflict for their own reasons. Immediately after the October 7 attacks, Sen. Lindsey Graham (R–S.C.) had called for bombing Iran whether or not there was evidence that Iran was behind the attacks. On Wednesday, he claimed to have intelligence that "Iran will, in the coming weeks or months, possess a nuclear weapon" and introduced a bill calling for war with Iran.

A conflict with Iran also helps Netanyahu alleviate some of the domestic political pressure on him. Before the October 7 attacks, he was facing protests over his proposal to defang the Israeli Supreme Court. And instead of rallying Israelis around Netanyahu, the attacks galvanized opposition, as many Israelis blamed Netanyahu for the security lapse and the failure to rescue hostages.

This week, those tensions exploded into an outright mutiny. After months of international pressure regarding the treatment of inmates at the Sde Teiman prison, Israeli military police began a probe into one of the most egregious cases. Nine soldiers had allegedly raped a Palestinian prisoner so hard that he was sent to the hospital with a ruptured bowel, a severe injury to his anus, lung damage, and broken ribs.

Police detained some of the accused soldiers, and Israeli nationalists accused the government of betraying its troops. Nationalist rioters, including members of parliament, stormed both Sde Teiman and the Beit Lid military courts in support of the accused rapists. The army was forced to pull three battalions away from the Palestinian territories to guard the courthouse.

Killing Shukr and Haniyeh, then, was a good political bet for Netanyahu. At the very least, Netanyahu got to drown out headlines about the Sde Teiman riot with a dashing military victory. And if Iran hits back hard enough, then Israel may be able to get the world's superpower to fight Israel's greatest enemy.

But a full-on U.S.-Iran war would be a disaster for the region and for Americans. Gen. Kenneth McKenzie warned The New Yorker in December 2021 that Iran has missile "overmatch in the theatre—the ability to overwhelm" U.S. air defenses. American troops would face attacks in Iran, Iraq, and Syria, and a few well-placed Iranian strikes on Tel Aviv or Abu Dhabi could do serious damage to the world economy.

It would be a disaster of the Biden administration's own making. Soon after the October 7 attacks, President Joe Biden embraced the "bear hug" theory of diplomacy. By giving Israel public reassurances and unlimited military support, the theory went, Biden would earn enough goodwill from Israelis to keep their war contained and eventually broker an Israeli-Palestinian ceasefire.

Instead, the bear hug has turned out to be a sleepwalk. Netanyahu has taken U.S. support as a license to continue expanding the conflict. And the Biden administration seems to be at a loss for words about the latest escalation. Asked what impact the assassination of one side's chief negotiator would have on ceasefire negotiations, Secretary of State Antony Blinken played dumb.

"Well, I've seen the reports, and what I can tell you is this: First, this is something we were not aware of or involved in," Blinken told Channel News Asia. "It's very hard to speculate, and I've learned never to speculate, on the impact one event may have on something else. So I can't tell you what this means."

The post Will Biden Sleepwalk Into a War With Iran? appeared first on Reason.com.

  • ✇Latest
  • Trump and Harris Are Just Making It Up as They GoEric Boehm
    A few minutes before 10 a.m. on Wednesday, former President Donald Trump dropped a plan to completely overhaul the relationship between millions of older Americans and the federal government. "SENIORS SHOULD NOT PAY TAX ON SOCIAL SECURITY," Trump shouted from his Truth Social account. If implemented, that would be a hugely expensive policy change. According to one quick estimate by a former White House chief economist, it would reduce federal rev
     

Trump and Harris Are Just Making It Up as They Go

1. Srpen 2024 v 20:15
Donald Trump and Kamala Harris | AFP / GDA Photo Service/Newscom

A few minutes before 10 a.m. on Wednesday, former President Donald Trump dropped a plan to completely overhaul the relationship between millions of older Americans and the federal government.

"SENIORS SHOULD NOT PAY TAX ON SOCIAL SECURITY," Trump shouted from his Truth Social account.

If implemented, that would be a hugely expensive policy change. According to one quick estimate by a former White House chief economist, it would reduce federal revenue by $1.5 trillion over 10 years and would add $1.8 trillion to the national debt. (The extra cost is the result of interest on the new debt that would be racked up in the absence of that revenue.) It would also accelerate Social Security's slide into insolvency. And, obviously, it would be a big tax break for Americans who collect Social Security checks—but not a tax break that would be particularly good at fostering economic growth.

Despite all that, the most notable thing about Trump's announcement was what it didn't include. There was no attempt to reckon with those figures, for example. No surrogates were dispatched to explain why this change is necessary or good for the economy or country. No press releases went out. There was, of course, no attempt to explain what government programs would be cut to offset the drop in revenue. For that matter, there had been no discussion of this idea at the Republican National Convention. It was not mentioned in Trump's (long) acceptance speech and was not included in the party's platform.

Like so much else in the Trump era, this looks like an idea that went from the former president's head to his social media account with very few stops in between.

There is something to be said for that degree of—let's say—transparency. If nothing else, it is quintessentially Trumpian: hastily conceived and not deeply considered, more of a marketing slogan than substance. Let's just call this what it is: a nakedly political play to win the votes of Social Security–collecting Americans.

Coming as it did on Wednesday morning, the "no taxes on Social Security" plan stood in stark contrast to the news the Trump campaign had made just one day earlier. On Tuesday, Trump's campaign had officially (and gleefully) sunk the Heritage Foundation's "Project 2025"—a 900-page document in which the conservative think tank had outlined an extensive policy plan for Trump's prospective second term. The project had been headed by Paul Dans, who had served in the Trump administration, and was central to the institution-wide pivot toward populism that Kevin Roberts, Heritage's president, had executed in recent years.

In a statement, two of Trump's top campaign officials didn't merely bury Project 2025 but also issued a threat.

"Reports of Project 2025's demise would be greatly welcomed and should serve as notice to anyone or any group trying to misrepresent their influence with President Trump and his campaign—it will not end well for you," said Susie Wiles and Chris LaCivita.

Translation: How dare anyone try to substitute actual policy substance for whatever random thought might fall out of the former president's head on a Wednesday morning?

Roberts' mistake "was thinking that Mr. Trump cares about anyone's ideas other than his own. He governs on feral instinct, tactical opportunism, and what seems popular at a given moment," wrote the Wall Street Journal's editorial board in a scathing response to the news of Project 2025 being scuttled and that Dans had resigned from Heritage. "The lesson for Heritage, and other think tanks, is that it's better to stick to your principles rather than court the political flavor of the day."

Amen to that.

Meanwhile, Vice President Kamala Harris has launched her campaign by veering hard into an almost Trump-like policy nihilism of her own. Having already tried to memory-hole her track record as the Biden administration's so-called "border czar"—read Reason's Liz Wolfe if you need to catch up on that controversy—Harris is now seemingly rewriting her positions on a bunch of other things too.

For example, Harris was a co-sponsor of the Green New Deal when she was a member of the U.S. Senate in 2019. She voiced her support for the progressive environmental package while campaigning for president that same year.

Now, she's backing away from it. This week, a spokesperson for the Harris campaign told the Washington Examiner that Harris no longer supports the federal job guarantee—a promise that the federal government would provide jobs with "family-sustaining wages" to anyone who wanted one—that was a key feature of the Green New Deal.

As the Examiner notes, Harris has also "backed away from her endorsement of eliminating private healthcare plans as part of a Medicare for All proposal. Her campaign also told The Hill that she will not seek to ban fracking if she is elected. That was after previously telling CNN while running for president 'There's no question I'm in favor of banning fracking.'"

Maybe this is Harris embracing her philosophy of being "unburdened by what has been." Maybe she's simply taking a page from Trump's book—after all, the former president has never paid much of a price for making it up as he goes along.

For both Trump and Harris, simply telling voters what you think they want to hear is possibly the most direct route to winning an election. But such a cynical approach to campaigning sidelines any discussion of policy—and means the election is likely to be decided on far stupider grounds.

The post Trump and Harris Are Just Making It Up as They Go appeared first on Reason.com.

  • ✇Ars Technica - All content
  • Who are the two major hackers Russia just received in a prisoner swap?Nate Anderson
    Enlarge (credit: Getty Images) As part of today’s blockbuster prisoner swap between the US and Russia, which freed the journalist Evan Gershkovich and several Russian opposition figures, Russia received in return a motley collection of serious criminals, including an assassin who had executed an enemy of the Russian state in the middle of Berlin. But the Russians also got two hackers, Vladislav Klyushin and Roman Seleznev, each of whom had been convicted of major financial cr
     

Who are the two major hackers Russia just received in a prisoner swap?

2. Srpen 2024 v 02:14

(credit: Getty Images)

As part of today’s blockbuster prisoner swap between the US and Russia, which freed the journalist Evan Gershkovich and several Russian opposition figures, Russia received in return a motley collection of serious criminals, including an assassin who had executed an enemy of the Russian state in the middle of Berlin.

But the Russians also got two hackers, Vladislav Klyushin and Roman Seleznev, each of whom had been convicted of major financial crimes in the US. The US government said that Klyushin “stands convicted of the most significant hacking and trading scheme in American history, and one of the largest insider trading schemes ever prosecuted.” As for Seleznev, federal prosecutors said that he has “harmed more victims and caused more financial loss than perhaps any other defendant that has appeared before the court.”

What sort of hacker do you have to be to attract the interest of the Russian state in prisoner swaps like these? Clearly, it helps to have hacked widely and caused major damage to Russia’s enemies. By bringing these two men home, Russian leadership is sending a clear message to domestic hackers: We’ve got your back.

Read 17 remaining paragraphs | Comments

  • ✇Slashdot
  • How Chinese Attackers Breached an ISP to Poison Insecure Software Updates with MalwareEditorDavid
    An anonymous reader shared this report from BleepingComputer: A Chinese hacking group tracked as StormBamboo has compromised an undisclosed internet service provider (ISP) to poison automatic software updates with malware. Also tracked as Evasive Panda, Daggerfly, and StormCloud, this cyber-espionage group has been active since at least 2012, targeting organizations across mainland China, Hong Kong, Macao, Nigeria, and various Southeast and East Asian countries. On Friday, Volexity threat rese
     

How Chinese Attackers Breached an ISP to Poison Insecure Software Updates with Malware

3. Srpen 2024 v 22:34
An anonymous reader shared this report from BleepingComputer: A Chinese hacking group tracked as StormBamboo has compromised an undisclosed internet service provider (ISP) to poison automatic software updates with malware. Also tracked as Evasive Panda, Daggerfly, and StormCloud, this cyber-espionage group has been active since at least 2012, targeting organizations across mainland China, Hong Kong, Macao, Nigeria, and various Southeast and East Asian countries. On Friday, Volexity threat researchers revealed that the Chinese cyber-espionage gang had exploited insecure HTTP software update mechanisms that didn't validate digital signatures to deploy malware payloads on victims' Windows and macOS devices... To do that, the attackers intercepted and modified victims' DNS requests and poisoned them with malicious IP addresses. This delivered the malware to the targets' systems from StormBamboo's command-and-control servers without requiring user interaction. Volexity's blog post says they observed StormBamboo "targeting multiple software vendors, who use insecure update workflows..." and then "notified and worked with the ISP, who investigated various key devices providing traffic-routing services on their network. As the ISP rebooted and took various components of the network offline, the DNS poisoning immediately stopped." BleepingComputer notes that "After compromising the target's systems, the threat actors installed a malicious Google Chrome extension (ReloadText), which allowed them to harvest and steal browser cookies and mail data."
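
The missing control the researchers describe is cryptographic verification of the downloaded artifact itself. A minimal sketch, assuming the vendor signs each update with an Ed25519 key whose public half is pinned inside the client (both keys are hypothetical here), shows how an updater can reject a payload swapped in transit no matter where a poisoned DNS answer sent the HTTP request:

```python
# Sketch: verify a signed update against a pinned public key before installing.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Vendor side (build server): sign the update artifact.
signing_key = Ed25519PrivateKey.generate()
update = os.urandom(1024)                 # stand-in for the real update payload
signature = signing_key.sign(update)
pinned_pubkey = signing_key.public_key()  # shipped inside the client ahead of time

# Client side: refuse any payload whose signature does not verify.
def verify_update(payload: bytes, sig: bytes) -> bool:
    try:
        pinned_pubkey.verify(sig, payload)
        return True
    except InvalidSignature:
        return False

assert verify_update(update, signature)
assert not verify_update(update + b"tampered", signature)
```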

Read more of this story at Slashdot.

  • ✇Semiconductor Engineering
  • LPDDR Memory Is Key For On-Device AI PerformanceNidish Kamath
    Low-Power Double Data Rate (LPDDR) emerged as a specialized high performance, low power memory for mobile phones. Since its first release in 2006, each new generation of LPDDR has delivered the bandwidth and capacity needed for major shifts in the mobile user experience. Once again, LPDDR is at the forefront of another key shift as the next wave of generative AI applications will be built into our mobile phones and laptops. AI on endpoints is all about efficient inference. The process of employi
     

LPDDR Memory Is Key For On-Device AI Performance

24. Červen 2024 v 09:02

Low-Power Double Data Rate (LPDDR) emerged as a specialized high performance, low power memory for mobile phones. Since its first release in 2006, each new generation of LPDDR has delivered the bandwidth and capacity needed for major shifts in the mobile user experience. Once again, LPDDR is at the forefront of another key shift as the next wave of generative AI applications will be built into our mobile phones and laptops.

AI on endpoints is all about efficient inference. The process of employing trained AI models to make predictions or decisions requires specialized memory technologies with greater performance that are tailored to the unique demands of endpoint devices. Memory for AI inference on endpoints requires getting the right balance between bandwidth, capacity, power and compactness of form factor.

LPDDR evolved from DDR memory technology as a power-efficient alternative; LPDDR5, and the optional extension LPDDR5X, are the most recent updates to the standard. LPDDR5X is focused on improving performance, power, and flexibility; it offers data rates up to 8.533 Gbps, significantly boosting speed and performance. Compared to DDR5 memory, LPDDR5/5X limits the data bus width to 32 bits, while increasing the data rate. The switch to a quarter-speed clock, as compared to a half-speed clock in LPDDR4, along with a new feature – Dynamic Voltage Frequency Scaling – keeps the higher data rate LPDDR5 operation within the same thermal budget as LPDDR4-based devices.

Given the space constraints of mobile devices, combined with the greater memory needs of advanced applications, LPDDR5X can support capacities of up to 64GB by using multiple DRAM dies in a multi-die package. Consider the example of a 7B LLaMa 2 model: the model consumes 3.5GB of memory capacity if quantized to INT4. An x64 LPDDR5X package, with two LPDDR5X devices per package, provides an aggregate bandwidth of 68 GB/s; therefore, the LLaMa 2 model can run inference at roughly 19 tokens per second.

As demand for memory performance grows, LPDDR5 continues to evolve, with major vendors announcing a further extension known as LPDDR5T, where the T stands for turbo. LPDDR5T boosts the data rate to 9.6 Gbps, enabling an aggregate bandwidth of 76.8 GB/s in an x64 package of multiple LPDDR5T stacks. Therefore, the 7B LLaMa 2 model in the example above can run inference at roughly 21 tokens per second.
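
A quick back-of-the-envelope check of those figures, assuming inference is purely memory-bandwidth-bound (every generated token streams the whole INT4-quantized model from DRAM once) and ignoring compute and caching effects:

```python
# Minimal sketch: bandwidth-bound token rate for an on-device LLM.
def tokens_per_second(data_rate_gbps: float, bus_width_bits: int,
                      model_size_gb: float) -> float:
    bandwidth_gbs = data_rate_gbps * bus_width_bits / 8  # aggregate GB/s of the package
    return bandwidth_gbs / model_size_gb                 # one full model read per token

print(tokens_per_second(8.533, 64, 3.5))  # LPDDR5X x64 -> ~19.5 tokens/s (68 GB/s)
print(tokens_per_second(9.6, 64, 3.5))    # LPDDR5T x64 -> ~21.9 tokens/s (76.8 GB/s)
```

These round to the 19 and 21 tokens per second quoted above.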

With its low power consumption and high bandwidth capabilities, LPDDR5 is a great choice of memory not just for cutting-edge mobile devices, but also for AI inference on endpoints where power efficiency and compact form factor are crucial considerations. Rambus offers a new LPDDR5T/5X/5 Controller IP that is fully optimized for use in applications requiring high memory throughput and low latency. The Rambus LPDDR5T/5X/5 Controller enables cutting-edge LPDDR5T memory devices and supports all third-party LPDDR5 PHYs. It maximizes bus bandwidth and minimizes latency via look-ahead command processing, bank management and auto-precharge. The controller can be delivered with additional cores such as the In-line ECC or Memory Analyzer cores to improve in-field reliability, availability and serviceability (RAS).

The post LPDDR Memory Is Key For On-Device AI Performance appeared first on Semiconductor Engineering.

  • ✇Semiconductor Engineering
  • NeuroHammer Attacks on ReRAM-Based MemoriesTechnical Paper Link
    A new technical paper titled “NVM-Flip: Non-Volatile-Memory BitFlips on the System Level” was published by researchers at Ruhr-University Bochum, University of Duisburg-Essen, and Robert Bosch. Abstract “Emerging non-volatile memories (NVMs) are promising candidates to substitute conventional memories due to their low access latency, high integration density, and non-volatility. These superior properties stem from the memristor representing the centerpiece of each memory cell and is branded as t
     

NeuroHammer Attacks on ReRAM-Based Memories

21. Červen 2024 v 18:32

A new technical paper titled “NVM-Flip: Non-Volatile-Memory BitFlips on the System Level” was published by researchers at Ruhr-University Bochum, University of Duisburg-Essen, and Robert Bosch.

Abstract
“Emerging non-volatile memories (NVMs) are promising candidates to substitute conventional memories due to their low access latency, high integration density, and non-volatility. These superior properties stem from the memristor representing the centerpiece of each memory cell and is branded as the fourth fundamental circuit element. Memristors encode information in the form of its resistance by altering the physical characteristics of their filament. Hence, each memristor can store multiple bits increasing the memory density and positioning it as a potential candidate to replace DRAM and SRAM-based memories, such as caches.

However, new security risks arise with the benefits of these emerging technologies, like the recent NeuroHammer attack, which allows adversaries to deliberately flip bits in ReRAMs. While NeuroHammer has been shown to flip single bits within memristive crossbar arrays, the system-level impact remains unclear. Considering the significance of the Rowhammer attack on conventional DRAMs, NeuroHammer can potentially cause crucial damage to applications taking advantage of emerging memory technologies.

To answer this question, we introduce NVgem5, a versatile system-level simulator based on gem5. NVgem5 is capable of injecting bit-flips in eNVMs originating from NeuroHammer. Our experiments evaluate the impact of the NeuroHammer attack on main and cache memories. In particular, we demonstrate a single-bit fault attack on cache memories leaking the secret key used during the computation of RSA signatures. Our findings highlight the need for improved hardware security measures to mitigate the risk of hardware-level attacks in computing systems based on eNVMs.”

Find the technical paper here. Published June 2024.

Felix Staudigl, Jan Philipp Thoma, Christian Niesler, Karl Sturm, Rebecca Pelke, Dominik Germek, Jan Moritz Joseph, Tim Güneysu, Lucas Davi, and Rainer Leupers. 2024. NVM-Flip: Non-Volatile-Memory BitFlips on the System Level. In Proceedings of the 2024 ACM Workshop on Secure and Trustworthy Cyber-Physical Systems (SaT-CPS ’24). Association for Computing Machinery, New York, NY, USA, 11–20. https://doi.org/10.1145/3643650.3658606

The post NeuroHammer Attacks on ReRAM-Based Memories appeared first on Semiconductor Engineering.

MIT, Applied Materials, and the Northeast Microelectronics Coalition Hub to bring 200mm advanced research capabilities to MIT.nano

The following is a joint announcement from MIT and Applied Materials, Inc.

MIT and Applied Materials, Inc., announced an agreement today that, together with a grant to MIT from the Northeast Microelectronics Coalition (NEMC) Hub, commits more than $40 million of estimated private and public investment to add advanced nano-fabrication equipment and capabilities to MIT.nano, the Institute’s center for nanoscale science and engineering. The collaboration will create a unique open-access site in the United States that supports research and development at industry-compatible scale using the same equipment found in high-volume production fabs to accelerate advances in silicon and compound semiconductors, power electronics, optical computing, analog devices, and other critical technologies.

The equipment and related funding and in-kind support provided by Applied Materials will significantly enhance MIT.nano’s existing capabilities to fabricate up to 200-millimeter (8-inch) wafers, a size essential to industry prototyping and production of semiconductors used in a broad range of markets including consumer electronics, automotive, industrial automation, clean energy, and more. Positioned to fill the gap between academic experimentation and commercialization, the equipment will help establish a bridge connecting early-stage innovation to industry pathways to the marketplace.

“A brilliant new concept for a chip won’t have impact in the world unless companies can make millions of copies of it. MIT.nano’s collaboration with Applied Materials will create a critical open-access capacity to help innovations travel from lab bench to industry foundries for manufacturing,” says Maria Zuber, MIT’s vice president for research and the E. A. Griswold Professor of Geophysics. “I am grateful to Applied Materials for its investment in this vision. The impact of the new toolset will ripple across MIT and throughout Massachusetts, the region, and the nation.”

Applied Materials is the world’s largest supplier of equipment for manufacturing semiconductors, displays, and other advanced electronics. The company will provide at MIT.nano several state-of-the-art process tools capable of supporting 150 and 200mm wafers and will enhance and upgrade an existing tool owned by MIT. In addition to assisting MIT.nano in the day-to-day operation and maintenance of the equipment, Applied engineers will develop new process capabilities that will benefit researchers and students from MIT and beyond.

“Chips are becoming increasingly complex, and there is tremendous need for continued advancements in 200mm devices, particularly compound semiconductors like silicon carbide and gallium nitride,” says Aninda Moitra, corporate vice president and general manager of Applied Materials’ ICAPS Business. “Applied is excited to team with MIT.nano to create a unique, open-access site in the U.S. where the chip ecosystem can collaborate to accelerate innovation. Our engagement with MIT expands Applied’s university innovation network and furthers our efforts to reduce the time and cost of commercializing new technologies while strengthening the pipeline of future semiconductor industry talent.”

The NEMC Hub, managed by the Massachusetts Technology Collaborative (MassTech), will allocate $7.7 million to enable the installation of the tools. The NEMC is the regional “hub” that connects and amplifies the capabilities of diverse organizations from across New England, plus New Jersey and New York. The U.S. Department of Defense (DoD) selected the NEMC Hub as one of eight Microelectronics Commons Hubs and awarded funding from the CHIPS and Science Act to accelerate the transition of critical microelectronics technologies from lab-to-fab, spur new jobs, expand workforce training opportunities, and invest in the region’s advanced manufacturing and technology sectors.

The Microelectronics Commons program is managed at the federal level by the Office of the Under Secretary of Defense for Research and Engineering and the Naval Surface Warfare Center, Crane Division, and facilitated through the National Security Technology Accelerator (NSTXL), which organizes the execution of the eight regional hubs located across the country. The announcement of the public sector support for the project was made at an event attended by leaders from the DoD and NSTXL during a site visit to meet with NEMC Hub members.

“The installation and operation of these tools at MIT.nano will have a direct impact on the members of the NEMC Hub, the Massachusetts and Northeast regional economy, and national security. This is what the CHIPS and Science Act is all about,” says Ben Linville-Engler, deputy director at the MassTech Collaborative and the interim director of the NEMC Hub. “This is an essential investment by the NEMC Hub to meet the mission of the Microelectronics Commons.”

MIT.nano is a 200,000 square-foot facility located in the heart of the MIT campus with pristine, class-100 cleanrooms capable of accepting these advanced tools. Its open-access model means that MIT.nano’s toolsets and laboratories are available not only to the campus, but also to early-stage R&D by researchers from other academic institutions, nonprofit organizations, government, and companies ranging from Fortune 500 multinationals to local startups. Vladimir Bulović, faculty director of MIT.nano, says he expects the new equipment to come online in early 2025.

“With vital funding for installation from NEMC and after a thorough and productive planning process with Applied Materials, MIT.nano is ready to install this toolset and integrate it into our expansive capabilities that serve over 1,100 researchers from academia, startups, and established companies,” says Bulović, who is also the Fariborz Maseeh Professor of Emerging Technologies in MIT’s Department of Electrical Engineering and Computer Science. “We’re eager to add these powerful new capabilities and excited for the new ideas, collaborations, and innovations that will follow.”

As part of its arrangement with MIT.nano, Applied Materials will join the MIT.nano Consortium, an industry program comprising 12 companies from different industries around the world. With the contributions of the company’s technical staff, Applied Materials will also have the opportunity to engage with MIT’s intellectual centers, including continued membership with the Microsystems Technology Laboratories.

© Image: Anton Grassl

MIT.nano (at right), the Institute’s center for nanoscale science and engineering, will receive more than $40M of estimated private and public investment to add advanced nanofabrication equipment to the facility’s toolsets.

Red Tape Is Making Hospital Ransomware Attacks Worse

24. Červen 2024 v 11:00
With cyberattacks increasingly targeting health care providers, an arduous bureaucratic process meant to address legal risk is keeping hospitals offline longer, potentially risking lives.

Google Pixel Users: Urgent Update Needed to Patch Security Hole

24. Červen 2024 v 12:25

The Cybersecurity and Infrastructure Security Agency (CISA) is urging everyone to update their Google Pixel phones. A recent security issue, called CVE-2024-32896, could let attackers ...

The post Google Pixel Users: Urgent Update Needed to Patch Security Hole appeared first on Gizchina.com.

  • ✇Android Police
  • Google Authenticator: How to get backup codesAnu Joy
    Protecting your digital privacy with passwords is crucial for online security. While a reliable password manager helps, a two-factor authentication (2FA) app like Google Authenticator adds an extra layer of protection. However, you might get locked out of your accounts if your phone is lost or damaged. This guide shows you how to back up your Google Authenticator app so you don't have to worry about losing access to your online accounts.
     

Google Authenticator: How to get backup codes

Od: Anu Joy
11. Červen 2024 v 01:01

Protecting your digital privacy with passwords is crucial for online security. While a reliable password manager helps, a two-factor authentication (2FA) app like Google Authenticator adds an extra layer of protection. However, you might get locked out of your accounts if your phone is lost or damaged. This guide shows you how to back up your Google Authenticator app so you don't have to worry about losing access to your online accounts.

  • ✇Semiconductor Engineering
  • Addressing Quantum Computing Threats With SRAM PUFsRoel Maes
    You’ve probably been hearing a lot lately about the quantum-computing threat to cryptography. If so, you probably also have a lot of questions about what this “quantum threat” is and how it will impact your cryptographic solutions. Let’s take a look at some of the most common questions about quantum computing and its impact on cryptography. What is a quantum computer? A quantum computer is not a very fast general-purpose supercomputer, nor can it magically operate in a massively parallel manner.
     

Addressing Quantum Computing Threats With SRAM PUFs

Od: Roel Maes
6. Červen 2024 v 09:07

You’ve probably been hearing a lot lately about the quantum-computing threat to cryptography. If so, you probably also have a lot of questions about what this “quantum threat” is and how it will impact your cryptographic solutions. Let’s take a look at some of the most common questions about quantum computing and its impact on cryptography.

What is a quantum computer?

A quantum computer is not a very fast general-purpose supercomputer, nor can it magically operate in a massively parallel manner. Instead, it efficiently executes unique quantum algorithms. These algorithms can in theory perform certain very specific computations much more efficiently than any traditional computer could.

However, the development of a meaningful quantum computer, i.e., one that can in practice outperform a modern traditional computer, is exceptionally difficult. Quantum computing technology has been in development since the 1980s, with gradually improving operational quantum computers since the 2010s. Yet even extrapolating the current state of the art into the future, and assuming an exponential improvement equivalent to Moore’s law for traditional computers, experts estimate that it will still take at least 15 to 20 years for a meaningful quantum computer to become a reality. [1, 2]

What is the quantum threat to cryptography?

In the 1990s, it was discovered that some quantum algorithms can impact the security of certain traditional cryptographic techniques. Two quantum algorithms have raised concern:

  • Shor’s algorithm, invented in 1994 by Peter Shor, is an efficient quantum algorithm for factoring large integers, and for solving a few related number-theoretical problems. Currently, there are no known efficient-factoring algorithms for traditional computers, a fact that provides the basis of security for several classic public-key cryptographic techniques.
  • Grover’s algorithm, invented in 1996 by Lov Grover, is a quantum algorithm that can search for the inverse of a generic function quadratically faster than a traditional computer can. In cryptographic terms, searching for inverses is equivalent to a brute-force attack (e.g., on an unknown secret key value). The difficulty of such attacks forms the basis of security for most symmetric cryptography primitives.

These quantum algorithms, if they can be executed on a meaningful quantum computer, will impact the security of current cryptographic techniques.

What is the impact on public-key cryptography solutions?

By far the most important and most widely used public-key primitives today are based on RSA, the discrete logarithm problem, or elliptic curve cryptography. When meaningful quantum computers become operational, the underlying mathematical problems of all of these can be solved efficiently by Shor’s algorithm. This will make virtually all public-key cryptography in current use insecure.

For the affected public-key encryption and key exchange primitives, this threat is already real today. An attacker capturing and storing encrypted messages exchanged now (or in the past), could decrypt them in the future when meaningful quantum computers are operational. So, highly sensitive and/or long-term secrets communicated up to today are already at risk.

If you use the affected signing primitives in short-term commitments of less than 15 years, the problem is less urgent. However, if meaningful quantum computers become available, the value of any signature will be voided from that point. So, you shouldn’t use the affected primitives for signing long-term commitments that still need to be verifiable in 15-20 years or more. This is already an issue for some use cases, e.g., for the security of secure boot and update solutions of embedded systems with a long lifetime.

Over the last decade, the cryptographic community has designed new public-key primitives that are based on mathematical problems that cannot be solved by Shor’s algorithm (or any other known efficient algorithm, quantum or otherwise). These algorithms are generally referred to as post-quantum cryptography. NIST’s announcement of a selection of these algorithms for standardization [3], after years of public scrutiny, is the latest culmination of that field-wide exercise. For protecting the firmware of embedded systems in the short term, the NSA recommends the use of existing post-quantum secure hash-based signature schemes [12].

What is the impact on my symmetric cryptography solutions?

The security level of a well-designed symmetric key primitive is equivalent to the effort needed for brute-forcing the secret key. On a traditional computer, the effort of brute-forcing a secret key is directly exponential in the key’s length. When a meaningful quantum computer can be used, Grover’s algorithm can speed up the brute-force attack quadratically. The needed effort remains exponential, though only in half of the key’s length. So, Grover’s algorithm could be said to reduce the security of any given-length algorithm by 50%.
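
As a concrete reading of that 50% figure (simple arithmetic only, not a statement about when such attacks become practical):

```python
# Brute-force cost in "bits of work": ~2^n classically vs. ~2^(n/2) Grover iterations.
def security_bits(key_bits: int, quantum: bool = False) -> int:
    return key_bits // 2 if quantum else key_bits

for n in (128, 256):
    print(f"{n}-bit key: classical ~2^{security_bits(n)} operations, "
          f"Grover ~2^{security_bits(n, quantum=True)} iterations")
# 128-bit key: classical ~2^128 operations, Grover ~2^64 iterations
# 256-bit key: classical ~2^256 operations, Grover ~2^128 iterations
```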

However, there are some important things to keep in mind:

  • Grover’s algorithm is an optimal brute-force strategy (quantum or otherwise) [4], so the quadratic speed-up is the worst-case security impact.
  • There are strong indications that it is not possible to meaningfully parallelize the execution of Grover’s algorithm [2, 5, 6, 7]. In a traditional brute-force attack, doubling the number of computers used will cut the computation time in half. Such a scaling is not possible for Grover’s algorithm on a quantum computer, which makes its use in a brute-force attack very impractical.
  • Before Grover’s algorithm can be used to perform real-world brute-force attacks on 128-bit keys, the performance of quantum computers must improve tremendously. Even the most modern traditional supercomputers can barely perform computations with a complexity exponential in 128/2=64 bits in a practically feasible time (several months). Based on their current state and rate of progress, it will be much, much more than 20 years before quantum computers could be at that same level [6].

The practical impact of quantum computers on symmetric cryptography is, for the moment, very limited. Worst-case, the security strength of currently used primitives is reduced by 50% (of their key length), but due to the limitations of Grover’s algorithm, that is an overly pessimistic assumption for the near future. Doubling the length of symmetric keys to withstand quantum brute-force attacks is a very broad blanket measure that will certainly solve the problem, but is too conservative. Today, there are no mandated requirements for quantum-hardening symmetric-key cryptography, and 128-bit security strength primitives like AES-128 or SHA-256 are considered safe to use now. For the long term, moving from 128-bit to 256-bit security strength algorithms is guaranteed to solve any foreseeable issues. [12]

Is there an impact on information-theoretical security?

Information-theoretically secure methods (also called unconditional or perfect security) are algorithmic techniques for which security claims are mathematically proven. Some important information-theoretically secure constructions and primitives include the Vernam cipher, Shamir’s secret sharing, quantum key distribution [8] (not to be confused with post-quantum cryptography), entropy sources and physical unclonable functions (PUFs), and fuzzy commitment schemes [9].

Because an information-theoretical proof demonstrates that an adversary does not have sufficient information to break the security claim, regardless of its computing power – quantum or otherwise – information-theoretically secure constructions are not impacted by the quantum threat.

PUFs: An antidote for post-quantum security uncertainty

SRAM PUFs

The core technology underpinning Synopsys PUF IP is the SRAM PUF. Like other PUFs, an SRAM PUF generates device-unique responses that stem from unpredictable variations originating in the production process of silicon chips. The operation of an SRAM PUF is based on a conventional SRAM circuit readily available in virtually all digital chips.

Based on years of continuous measurements and analysis, Synopsys has developed stochastic models that describe the behavior of its SRAM PUFs very accurately [10]. Using these models, we can determine tight bounds on the unpredictability of SRAM PUFs. These unpredictability bounds are expressed in terms of entropy; they are fundamental in nature and cannot be overcome by any amount of computation, quantum or otherwise.

Synopsys PUF IP

Synopsys PUF IP is a security solution based on SRAM PUF technology. The central component of Synopsys PUF IP is a fuzzy commitment scheme [9] that protects a root key with an SRAM PUF response and produces public helper data. It is information-theoretically proven that the helper data discloses zero information on the root key, so the fact that the helper data is public has no impact on the root key’s security.

Fig. 1: High-level architecture of Synopsys PUF IP.

This no-leakage proof – kept intact over years of field deployment on hundreds of millions of devices – relies on the PUF employed by the system to be an entropy source, as expressed by its stochastic model. Synopsys PUF IP uses its entropy source to initialize its root key for the very first time, which is subsequently protected by the fuzzy commitment scheme.
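
To make the construction concrete, here is a toy sketch of the generic code-offset fuzzy commitment of Juels and Wattenberg [9]: the public helper data is an error-correcting codeword of the key XORed with the PUF response, and a later, slightly noisy PUF re-measurement plus the helper data recovers the key. It uses a trivial repetition code and is purely illustrative; it is not the Synopsys PUF IP implementation, whose internals are not described here.

```python
# Toy code-offset fuzzy commitment (Juels-Wattenberg style), illustrative only.
import os

REP = 5  # repetition factor of the toy error-correcting code

def ecc_encode(key_bits):
    # Each key bit becomes REP identical code bits (simple repetition code).
    return [b for b in key_bits for _ in range(REP)]

def ecc_decode(code_bits):
    # Majority vote over each group of REP bits recovers the key bits.
    return [1 if sum(code_bits[i:i + REP]) * 2 > REP else 0
            for i in range(0, len(code_bits), REP)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

def enroll(key_bits, puf_response):
    # Helper data = codeword(key) XOR PUF response; safe to store publicly.
    return xor(ecc_encode(key_bits), puf_response)

def reconstruct(helper_data, noisy_puf_response):
    # XOR strips the PUF response (up to noise); the ECC removes the noise.
    return ecc_decode(xor(helper_data, noisy_puf_response))

# Usage: a 16-bit toy key and an 80-bit PUF response with two flipped bits.
key = [os.urandom(1)[0] & 1 for _ in range(16)]
puf = [os.urandom(1)[0] & 1 for _ in range(16 * REP)]
helper = enroll(key, puf)
noisy = puf[:]
noisy[3] ^= 1   # PUF noise on re-measurement
noisy[42] ^= 1
assert reconstruct(helper, noisy) == key
```

The real scheme uses a much stronger error-correcting code and a high-entropy SRAM PUF response; the point carried over from the article is that the helper data, being a codeword masked by the PUF response, reveals nothing useful about the root key provided the response has sufficient entropy.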

In addition to the fuzzy commitment scheme and the entropy source, Synopsys PUF IP also implements cryptographic operations based on certified standard-compliant constructions making use of standard symmetric crypto primitives, particularly AES and SHA-256 [11]. These operations, sketched with stand-in primitives after the list below, include:

  • a key derivation function (KDF) that uses the root key protected by the fuzzy commitment scheme as a key derivation key.
  • a deterministic random bit generator (DRBG) that is initially seeded by a high-entropy seed coming from the entropy source.
  • key wrapping functionality, essentially a form of authenticated encryption, for the protection of externally provided application keys using a key-wrapping key derived from the root key protected by the fuzzy commitment scheme.
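
A minimal sketch of that data flow, using off-the-shelf stand-ins from the Python cryptography package (HKDF-SHA256 for the KDF, the OS random source in place of the DRBG, and NIST AES Key Wrap for key wrapping) rather than the certified constructions the IP actually uses:

```python
# Illustrative stand-ins only; not the certified constructions inside the IP.
import os
from cryptography.hazmat.primitives import hashes, keywrap
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

root_key = os.urandom(32)  # in the real IP this is reconstructed from the PUF

# 1. Key derivation: derive a key-wrapping key from the PUF-protected root key.
kwk = HKDF(algorithm=hashes.SHA256(), length=32,
           salt=None, info=b"key-wrapping-key").derive(root_key)

# 2. Random bits: here simply the OS RNG; the IP seeds its DRBG from the PUF entropy source.
app_key = os.urandom(32)  # an externally provided application key

# 3. Key wrapping: protect the application key for storage outside the IP.
wrapped = keywrap.aes_key_wrap(kwk, app_key)
assert keywrap.aes_key_unwrap(kwk, wrapped) == app_key
```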

Conclusion

The security architecture of Synopsys PUF IP is based on information-theoretically secure components for the generation and protection of a root key, and on established symmetric cryptography for other cryptographic functions. Information-theoretically secure constructions are impervious to quantum attacks. The impact of the quantum threat on symmetric cryptography is very limited and does not require any remediation now or in the foreseeable future. Importantly, Synopsys PUF IP does not deploy any quantum-vulnerable public-key cryptographic primitives.

All variants of Synopsys PUF IP are quantum-secure and in accordance with recommended post-quantum guidelines. The 256-bit security strength variant of Synopsys PUF IP offers strong quantum resistance even in a distant future, while the 128-bit variant is also considered perfectly safe to use now and for the foreseeable future.

References

  1. “Report on Post-Quantum Cryptography”, NIST Information Technology Laboratory, NISTIR 8105, April 2016.
  2. “2021 Quantum Threat Timeline Report”, Global Risk Institute (GRI), M. Mosca and M. Piani, January 2022.
  3. “PQC Standardization Process: Announcing Four Candidates to be Standardized, Plus Fourth Round Candidates”, NIST Information Technology Laboratory, July 5, 2022.
  4. “Grover’s quantum searching algorithm is optimal”, C. Zalka, Phys. Rev. A 60, 2746, October 1, 1999, https://journals.aps.org/pra/abstract/10.1103/PhysRevA.60.2746
  5. “Reassessing Grover’s Algorithm”, S. Fluhrer, IACR ePrint 2017/811.
  6. “NIST’s pleasant post-quantum surprise”, Bas Westerbaan, Cloudflare, July 8, 2022.
  7. “Post-Quantum Cryptography – FAQs: To protect against the threat of quantum computers, should we double the key length for AES now? (added 11/18/18)”, NIST Information Technology Laboratory.
  8. “Quantum cryptography: Public key distribution and coin tossing”, C. H. Bennett and G. Brassard, Proceedings of the IEEE International Conference on Computers, Systems and Signal Processing, December 1984.
  9. “A fuzzy commitment scheme”, A. Juels and M. Wattenberg, Proceedings of the 6th ACM Conference on Computer and Communications Security, November 1999.
  10. “An Accurate Probabilistic Reliability Model for Silicon PUFs”, R. Maes, Proceedings of the International Workshop on Cryptographic Hardware and Embedded Systems, 2013.
  11. NIST Information Technology Laboratory, Cryptographic Algorithm Validation Program (CAVP), validation #A2516, https://csrc.nist.gov/projects/cryptographic-algorithm-validation-program/details?validation=35127
  12. “Announcing the Commercial National Security Algorithm Suite 2.0”, National Security Agency, Cybersecurity Advisory, https://media.defense.gov/2022/Sep/07/2003071834/-1/-1/0/CSA_CNSA_2.0_ALGORITHMS_.PDF

The post Addressing Quantum Computing Threats With SRAM PUFs appeared first on Semiconductor Engineering.

  • ✇Semiconductor Engineering
  • The Importance Of Memory Encryption For Protecting Data In UseEmma-Jane Crozier
    As systems-on-chips (SoCs) become increasingly complex, security functions must grow accordingly to protect the semiconductor devices themselves and the sensitive information residing on or passing through them. While a Root of Trust security solution built into the SoCs can protect the chip and data resident therein (data at rest), many other threats exist which target interception, theft or tampering with the valuable information in off-chip memory (data in use). Many isolation technologies ex
     

The Importance Of Memory Encryption For Protecting Data In Use

6. Červen 2024 v 09:06

As systems-on-chips (SoCs) become increasingly complex, security functions must grow accordingly to protect the semiconductor devices themselves and the sensitive information residing on or passing through them. While a Root of Trust security solution built into the SoCs can protect the chip and data resident therein (data at rest), many other threats exist which target interception, theft or tampering with the valuable information in off-chip memory (data in use).

Many isolation technologies exist for memory protection, however, with the discovery of the Meltdown and Spectre vulnerabilities in 2018, and attacks like row hammer targeting DRAM, security architects realize there are practical threats that can bypass these isolation technologies.

One of the techniques to prevent data being accessed across different guests/domains/zones/realms is memory encryption. With memory encryption in place, even if any of the isolation techniques have been compromised, the data being accessed is still protected by cryptography. To ensure the confidentiality of data, each user has their own protected key. Memory encryption can also prevent physical attacks like hardware bus probing on the DRAM bus interface. It can also prevent tampering with control plane information like the MPU/MMU control bits in DRAM and prevent the unauthorized movement of protected data within the DRAM.

Memory encryption technology must ensure confidentiality of the data. If a “lightweight” algorithm is used, there are no guarantees the data will be protected from mathematical cryptanalysis, given that the amount of data handled by memory encryption is typically huge. Well-known, proven algorithms are either the NIST-approved AES or the OSCCA-approved SM4.

The recommended key length is also an important aspect defining the security strength. AES offers 128, 192 or 256-bit security, and SM4 offers 128-bit security. Advanced memory encryption technologies also involve integrity and protocol level anti-replay techniques for high-end use-cases. Proven hash algorithms like SHA-2, SHA-3, SM3 or (AES-)GHASH can be used for integrity protection purposes.

Once one or more of the cipher algorithms are selected, the choice of secure modes of operation must be made. Block Cipher algorithms need to be used in certain specific modes to encrypt bulk data larger than a single block of just 128 bits.

XTS mode, which stands for “XEX (Xor-Encrypt-Xor) with tweak and CTS (Cipher Text Stealing)” mode has been widely adopted for disk encryption. CTS is a clever technique which ensures the number of bytes in the encrypted payload is the same as the number of bytes in the plaintext payload. This is particularly important in storage to ensure the encrypted payload can fit in the same location as would the unencrypted version.

XTS/XEX uses two keys, one key for block encryption, and another key to process a “tweak.” The tweak ensures every block of memory is encrypted differently. Any changes in the plaintext result in a complete change of the ciphertext, preventing an attacker from obtaining any information about the plaintext.
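
As a generic illustration of the tweak’s role (a sketch using the Python cryptography package, with a block index standing in for the tweak; this is not the Rambus IME implementation), the same plaintext stored at two different addresses encrypts to unrelated ciphertext:

```python
# AES-256-XTS sketch: the per-block tweak ties ciphertext to a storage location.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(64)  # XTS keys are double length: two 256-bit AES keys concatenated

def xts_encrypt(block_index: int, plaintext: bytes) -> bytes:
    tweak = block_index.to_bytes(16, "little")  # assumed convention: block index as tweak
    encryptor = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
    return encryptor.update(plaintext) + encryptor.finalize()

data = b"sixteen byte blk" * 4  # 64 bytes of repeating plaintext
# Identical data at two different "addresses" yields unrelated ciphertext blocks.
assert xts_encrypt(0, data) != xts_encrypt(1, data)
```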

While memory encryption is a critical aspect of security, there are many challenges to designing and implementing a secure memory encryption solution. Rambus is a leading provider of both memory and security technologies and understands the challenges from both the memory and security viewpoints. Rambus provides state-of-the-art Inline Memory Encryption (IME) IP that enables chip designers to build high-performance, secure, and scalable memory encryption solutions.

Additional information:

The post The Importance Of Memory Encryption For Protecting Data In Use appeared first on Semiconductor Engineering.

Multiple ChatGPT instances work together to find and exploit security flaws — teams of LLMs tested by UIUC beat single bots and dedicated software

10. Červen 2024 v 20:42
UIUC computer scientists have developed a system in which teams of LLMs work together to exploit zero-day security vulnerabilities without being given a description of the flaws, a major improvement in the school's ongoing research.

© Shutterstock

Ransomware Is ‘More Brutal’ Than Ever in 2024

10. Červen 2024 v 16:01
As the fight against ransomware slogs on, security experts warn of a potential escalation to “real-world violence.” But recent police crackdowns are successfully disrupting the cybercriminal ecosystem.

Microsoft’s Recall feature for Copilot+ PCs will be opt-in by default, in response to feedback (and criticism)

7. Červen 2024 v 19:26

When Microsoft recently introduced its Copilot+ PC platform that will bring new AI-enabled features to Windows 11 PCs that have an NPU capable of delivering at least 40 TOPS of performance, one of the key features the company highlighted was Recall. It’s a service that saves snapshots of everything you do on a computer, making […]

The post Microsoft’s Recall feature for Copilot+ PCs will be opt-in by default, in response to feedback (and criticism) appeared first on Liliputing.

  • ✇Ars Technica - All content
  • Hackers steal “significant volume” of data from hundreds of Snowflake customersDan Goodin
    Enlarge (credit: Getty Images) As many as 165 customers of cloud storage provider Snowflake have been compromised by a group that obtained login credentials through information-stealing malware, researchers said Monday. On Friday, Lending Tree subsidiary QuoteWizard confirmed it was among the customers notified by Snowflake that it was affected in the incident. Lending Tree spokesperson Megan Greuling said the company is in the process of determining whether data stored on Sn
     

Hackers steal “significant volume” of data from hundreds of Snowflake customers

11. Červen 2024 v 00:08

(credit: Getty Images)

As many as 165 customers of cloud storage provider Snowflake have been compromised by a group that obtained login credentials through information-stealing malware, researchers said Monday.

On Friday, Lending Tree subsidiary QuoteWizard confirmed it was among the customers notified by Snowflake that it was affected in the incident. Lending Tree spokesperson Megan Greuling said the company is in the process of determining whether data stored on Snowflake has been stolen.

“That investigation is ongoing,” she wrote in an email. “As of this time, it does not appear that consumer financial account information was impacted, nor information of the parent entity, Lending Tree.”

Read 13 remaining paragraphs | Comments

  • ✇Ars Technica - All content
  • Apple’s AI promise: “Your data is never stored or made accessible to Apple”Kyle Orland
    Enlarge / Apple Senior VP of Software Engineering Craig Federighi announces "Private Cloud Compute" at WWDC 2024. (credit: Apple) With most large language models being run on remote, cloud-based server farms, some users have been reluctant to share personally identifiable and/or private data with AI companies. In its WWDC keynote today, Apple stressed that the new "Apple Intelligence" system it's integrating into its products will use a new "Private Cloud Compute" to ensure a
     

Apple’s AI promise: “Your data is never stored or made accessible to Apple”

10. Červen 2024 v 21:05

Apple Senior VP of Software Engineering Craig Federighi announces "Private Cloud Compute" at WWDC 2024. (credit: Apple)

With most large language models being run on remote, cloud-based server farms, some users have been reluctant to share personally identifiable and/or private data with AI companies. In its WWDC keynote today, Apple stressed that the new "Apple Intelligence" system it's integrating into its products will use a new "Private Cloud Compute" to ensure any data processed on its cloud servers is protected in a transparent and verifiable way.

"You should not have to hand over all the details of your life to be warehoused and analyzed in someone's AI cloud," Apple Senior VP of Software Engineering Craig Federighi said.

Trust, but verify

Part of what Apple calls "a brand new standard for privacy and AI" is achieved through on-device processing. Federighi said "many" of Apple's generative AI models can run entirely on a device powered by an A17+ or M-series chips, eliminating the risk of sending your personal data to a remote server.

Read 4 remaining paragraphs | Comments

  • ✇Slashdot
  • Malicious VSCode Extensions With Millions of Installs Discoveredmsmash
    A group of Israeli researchers explored the security of the Visual Studio Code marketplace and managed to "infect" over 100 organizations by trojanizing a copy of the popular 'Dracula Official theme to include risky code. Further research into the VSCode Marketplace found thousands of extensions with millions of installs. From a report: Visual Studio Code (VSCode) is a source code editor published by Microsoft and used by many professional software developers worldwide. Microsoft also operates a
     

Malicious VSCode Extensions With Millions of Installs Discovered

Od: msmash
10. Červen 2024 v 19:23
A group of Israeli researchers explored the security of the Visual Studio Code marketplace and managed to "infect" over 100 organizations by trojanizing a copy of the popular 'Dracula Official' theme to include risky code. Further research into the VSCode Marketplace found thousands of extensions with millions of installs. From a report: Visual Studio Code (VSCode) is a source code editor published by Microsoft and used by many professional software developers worldwide. Microsoft also operates an extensions market for the IDE, called the Visual Studio Code Marketplace, which offers add-ons that extend the application's functionality and provide more customization options. Previous reports have highlighted gaps in VSCode's security, allowing extension and publisher impersonation and extensions that steal developer authentication tokens. There have also been in-the-wild findings that were confirmed to be malicious.

Read more of this story at Slashdot.
