Chip Industry Week In Review

March 8, 2024, 09:01

By Adam Kovac, Gregory Haley, and Liz Allan.

Cadence plans to acquire BETA CAE Systems for $1.24 billion, the latest volley in a race to sell multi-physics simulation and analysis across a broad set of customers with deep pockets. Cadence said the deal opens the door to structural analysis for the automotive, aerospace, industrial, and health care sectors. Under the terms of the agreement, 60% of the purchase would be paid in cash, and the remainder in stock.

South Korea’s National Intelligence Service reported that North Korea was targeting cyberattacks at domestic semiconductor equipment companies, using a “living off the land” approach, in which the attacker uses minimal malware to attack common applications installed on the server. That makes it more difficult to spot an attack. According to the government, “In December last year, Company A, and in February this year, Company B, had their configuration management server and security policy server hacked, respectively, and product design drawings and facility site photos were stolen.”

As the memory market goes, so goes the broader chip industry. Last quarter, and heading into early 2024, both markets began showing signs of sustainable growth. DRAM revenue jumped 29.6% in Q4 for a total of $17.46 billion. TrendForce attributed some of that to new efforts to stockpile chips and strategic production control. NAND flash revenue was up 24.5% in Q4, with solid growth expected to continue into the first part of this year, according to TrendForce. Revenue for the sector topped $11.4 billion in Q4, and it’s expected to grow another 20% this quarter. SSD prices rebounded in Q4 as well, up 15%, with revenue reaching $23.1 billion. Across the chip industry, sales grew 15.2% in January compared to the same period in 2023, according to the Semiconductor Industry Association (SIA). This is the largest increase since May 2022, and the trend is expected to continue throughout 2024 with double-digit growth compared to 2023.

Marvell said it is working with TSMC to develop a technology platform for the rapid deployment of analog, mixed-signal, and foundational IP. The company plans to sell both custom and commercial chiplets at 2nm.

The Dutch government is concerned that ASML, the only maker of EUV/high-NA EUV lithography equipment in the world, is considering leaving the Netherlands, according to De Telegraaf.

Quick links to more news:

Design and Power
Manufacturing and Test
Automotive and Batteries
Security
Pervasive Computing and AI
Events

Design and Power

AMD appears to have hit a roadblock with the U.S. Department of Commerce (DoC) over a new AI chip it designed for the Chinese market, as reported by Bloomberg. U.S. officials told the company the new chip is too powerful to be sold without a license.

JEDEC released its new JESD239 Graphics Double Data Rate (GDDR7) SGRAM standard as a free download on its website. The standard supports speeds of up to 192 GB/s per device and improves the signal-to-noise ratio.
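The headline bandwidth is easy to sanity-check. A minimal back-of-the-envelope sketch, assuming a x32 device interface and the 48 Gb/s-per-pin upper end discussed for GDDR7; both figures are our assumptions, not stated in the news item:

```python
# Rough per-device bandwidth check for GDDR7; pin count and per-pin data
# rate are assumed values for illustration, not taken from the JESD239 text.
pins = 32                  # assumed x32 device interface
gbps_per_pin = 48          # assumed top-end per-pin data rate (Gb/s)
print(pins * gbps_per_pin / 8, "GB/s")   # -> 192.0 GB/s per device
```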

Accellera rolled out its IEEE Std. 1800-2023 Standard for SystemVerilog—Unified Hardware Design, Specification, and Verification Language, which is now available for free download. The decision to offer it at no cost is due to Accellera’s participation in the IEEE GET Program, which was founded in 2010 with the intention of providing open access to some standards. Accellera also announced it had approved for release the Verilog-AMS 2023 standard, which offers enhancements to analog constructs, dynamic tolerance for event control statements, and other upgrades.

Chiplets are a hot topic these days. Six industry experts discuss chiplet standards, interoperability, and the need for highly customized AI chiplets.

Optimizing EDA hardware for the cloud can shorten the time required for large and complex simulations, but not all workloads will benefit equally, and much more can be done to improve those that can.

Flex Logix is developing InferX DSP for use with existing EFLX eFPGA from 40nm to 7nm. InferX achieves about 30 times the DSP performance per mm² of eFPGA alone.

The number of challenges is growing in power semiconductors, just as it is in traditional chips. This tech talk looks at integrating power semiconductors with other devices, different packaging impacts, and how these devices will degrade over time.

Vultr announced it will use NVIDIA’s HGX H100 GPU clusters to expand its Seattle-based cloud data center. The company said the expansion, which will be powered by hydroelectricity, will make the facility one of the cleanest, most power-efficient data centers in the country.

Amazon Web Services will expand its presence in Saudi Arabia, announcing a new $5.3 billion infrastructure region in the country that will launch in 2026. The new region will offer developers, entrepreneurs, and companies access to healthcare, education, and other services.

Google is teaming up with the Geneva Science and Diplomacy Anticipator (GESDA) to launch the XPRIZE Quantum Applications, with $5 million in prizes for winners who can demonstrate ways to use quantum computing to solve real-world problems. Teams must submit a proposal that includes an analysis of how long their algorithm would need to run before reaching a solution to a problem, such as improving drug development or designing new battery materials.

South Korea’s nepes corporation has turned to Siemens EDA for solutions in the development of advanced 3D-IC packages. The deal will see nepes incorporating several Siemens technologies, including the Calibre nmPlatform, HyperLynx software, and Xpedition Substrate Integrator software.

Siemens also formalized a partnership with Nuclei System Technology, in which the two companies will work together on solution support for Nuclei’s RISC-V processor cores. The collaboration will allow clients to monitor CPU program execution in real time via Nuclei’s RISC-V CPU IPs.

Keysight and ETS-Lindgren announced a breakthrough test solution for cellular devices using non-terrestrial networks. The solution measures and validates the performance of both the transmitter and the receiver of devices that support such networks.

Nearly fifty companies raised $800 million for power electronics, data center interconnects, and more last month.

Manufacturing and Test

SEMI Europe issued a position statement to the European Union, warning against additional export controls or rules on foreign investment. SEMI argued that free trade partnerships are a better method for ensuring security than bans or restrictions.

Revenues for the top five wafer fab equipment manufacturers declined 1% YoY in 2023 to $93.5 billion, according to Counterpoint Research. The drop was attributed to weak spending on memory, inventory adjustments, and low demand in consumer electronics. The tide is changing, though.

Bruker closed two acquisitions. One involved Chemspeed Technologies, a Switzerland-based provider of automated laboratory R&D and QC workflow solutions. The second involved Phasefocus, an image processing company based in the UK.

A Swedish company, SCALINQ, released a commercially available large-scale packaging solution capable of controlling quantum devices with hundreds of qubits.

Solid Sands, a provider of testing and qualification technology for compilers and libraries, will partner with California-based Emprog to establish a representative presence in the U.S.

Automotive and Batteries

Tesla halted production at its Brandenburg, Germany, gigafactory after an environmental activist group attacked an electricity pylon, reports the Guardian.

Stellantis will invest €5.6 billion (~$6.1B) in South America to support more than 40 new products, decarbonization technologies, and business opportunities.

The amount of data being collected, processed, and stored in vehicles is exploding, and so is the value of that data. That raises questions that are still not fully answered about how that data will be used, by whom, and how it will be secured.

While industry experts expect many benefits from V2X technology, there are still technological and social hurdles to cross. But there is progress.

Infineon released its next-gen silicon carbide (SiC) MOSFET trench technology with 650V and 1,200V options that improve stored energies and charges by up to 20%, making it well suited for power semiconductor applications such as photovoltaics, energy storage, DC EV charging, motor drives, and industrial power supplies.

Hyundai selected Ansys to supply structural simulation solutions for vehicle body system analysis, providing end-to-end, predictively accurate capabilities for virtual performance validation.

ION Mobility used the Siemens Xcelerator portfolio for styling, mechanical engineering, and electric battery pack development for its ION M1-S electric motorbike.

Ethernovia sampled a family of automotive PHY transceivers that scale from 1 Gbps up to 10 Gbps over 15 meters of automotive cabling.

The California Public Utilities Commission (CPUC) approved Waymo’s plan to expand its driverless robotaxi services to Los Angeles and other cities near San Francisco, reports Reuters.

By 2027, next-gen battery EVs (BEVs) will on average be cheaper to produce than comparable gas-powered cars, reports Gartner. But the firm noted that the average cost of EV accident repair will rise by 30%, and that 15% of EV companies founded in the last decade will be acquired or go bankrupt.

University of California San Diego (UCSD) researchers developed a cathode material for solid-state lithium-sulfur batteries that is electrically conductive and structurally healable.

ION Storage Systems announced its anodeless and compressionless solid-state batteries (SSBs) achieved 125 cycles with under 5% capacity degradation in performance. ION has been working with the U.S. Department of Defense (DoD) to test its SSB before expanding into markets such as EVs, energy storage, consumer electronics, and aerospace.

Security

Advanced process nodes and higher silicon densities are heightening DRAM’s susceptibility to Rowhammer attacks, as reduced cell spacing significantly decreases the hammer count needed for bit flips. A multi-layered, system-level approach is crucial to DRAM protection.

Researchers at Bar-Ilan University and Rafael Defense Systems proposed an analytical electromagnetic model for IC shielding against hardware attacks.

Keysight acquired the IP of Firmalyzer, whose firmware security analysis technology will be integrated into the Keysight IoT Security Assessment and Automotive Security solutions, providing insight into what is happening inside the IoT device itself.

Flex Logix joined the Intel Foundry U.S. Military Aerospace Government (USMAG) Alliance, ensuring U.S. defense industrial base and government customers have access to the latest technology, enabling successful designs for mission critical programs.

The EU Council presidency and European Parliament reached a provisional agreement on a Cyber Solidarity Act and an amendment to the Cybersecurity Act (CSA) concerning managed security services.

The EU Agency for Cybersecurity (ENISA) and partners updated the compendium on elections cybersecurity in response to issues such as AI deep fakes, hacktivists-for-hire, the sophistication of threat actors, and the current geopolitical context.

The Cybersecurity and Infrastructure Security Agency (CISA) launched efforts to help secure the open source software ecosystem; updated its Public Safety Communications and Cyber Resiliency Toolkit; and issued other alerts including security advisories for VMware, Apple, and Cisco.

Pervasive Computing and AI

Johns Hopkins University engineers used natural language prompts and ChatGPT4 to produce detailed instructions to build a spiking neural network (SNN) chip. The neuromorphic accelerators could power real-time machine intelligence for next-gen embodied systems like autonomous vehicles and robots.

The global AI hardware market size was estimated at $53.71 billion in 2023, and is expected to reach about $473.53 billion by 2033, at a compound annual growth rate of 24.5%, reports Precedence Research.
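Those two endpoints are broadly consistent with the stated growth rate. A quick check of the compound-growth arithmetic (the 2023 and 2033 figures come from the item above; the rest is standard math):

```python
# Compound annual growth: size_2033 = size_2023 * (1 + CAGR) ** years
base, cagr, years = 53.71, 0.245, 10
print(f"${base * (1 + cagr) ** years:.2f}B")  # ~ $480B, close to the reported $473.53B
# (the exact $473.53B figure implies a CAGR just under 24.5%)
```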

National Institute of Standards and Technology (NIST) researchers and partners built compact chips capable of converting light into microwaves, which could improve navigation, communication, and radar systems.

Fig. 1: NIST researchers test a chip for converting light into microwave signals. Pictured is the chip, which is the fluorescent panel that looks like two tiny vinyl records. The gold box to the left of the chip is the semiconductor laser that emits light to the chip. Credit: K. Palubicki/NIST

The Indian government is investing 103 billion rupees ($1.25B) in AI projects, including computing infrastructure and large language models (LLMs).

Infineon is collaborating with Qt Group, bringing Qt’s graphics framework to Infineon’s graphics-enabled TRAVEO T2G cluster MCUs to optimize graphical user interface (GUI) development.

Keysight leveraged fourth-generation AMD EPYC CPUs to develop a new benchmarking methodology to test mobile and 5G private network performance. The method uses realistic traffic generation to uncover a CPU’s true power and scalability while observing bandwidth requirements.

The AI industry is pushing a nuclear power revival, reports NBC, and Amazon bought a nuclear-powered data center in Pennsylvania from Talen Energy for $650 million, according to WNEP.

Bank of America was awarded 644 patents in 2023 for technology including information security, AI, machine learning (ML), online and mobile banking, payments, data analytics, and augmented and virtual reality (AR/VR).

Mistral AI’s large language model, Mistral Large, became available in the Snowflake Data Cloud for customers to securely harness generative AI with their enterprise data.

China’s smartphone unit sales declined 7% year over year in the first six weeks of 2024, with Apple declining 24%, reports Counterpoint.

Shipments of LCD TV panels are expected to reach 55.8 million units in Q1 2024, a 5.3% quarter over quarter increase, reports TrendForce. And an estimated 5.8 billion LED lamps and luminaires are expected to reach the end of their lifespan in 2024, triggering a wave of secondary replacements and boosting total LED lighting demand to 13.4 billion units.

Korea Institute of Science and Technology (KIST) researchers mined high-purity gold from electrical and electronic waste.

The San Diego Supercomputer Center (SDSC) and the University of Utah launched a National Data Platform pilot project, aimed at making access to and use of scientific data open and equitable.

Events

Find upcoming chip industry events here, including:

Event | Date | Location
ISS Industry Strategy Symposium Europe | Mar 6 – 8 | Vienna, Austria
GSA International Semiconductor Conference | Mar 13 – 14 | London
Device Packaging Conference (DPC 2024) | Mar 18 – 21 | Fountain Hills, AZ
GOMACTech | Mar 18 – 21 | Charleston, South Carolina
SNUG Silicon Valley | Mar 20 – 21 | Santa Clara, CA
SEMICON China | Mar 20 – 22 | Shanghai
OFC: Optical Communications & Networking | Mar 24 – 28 | San Diego, CA (and virtual)
DATE: Design, Automation and Test in Europe Conference | Mar 25 – 27 | Valencia, Spain
SEMI-Therm | Mar 25 – 28 | San Jose, CA
MemCon | Mar 26 – 27 | Silicon Valley
All Upcoming Events

Upcoming webinars are here.

Further Reading and Newsletters

Read the latest special reports and top stories, or check out the latest newsletters:

Systems and Design
Low Power-High Performance
Test, Measurement and Analytics
Manufacturing, Packaging and Materials
Automotive, Security and Pervasive Computing


Ultrathin vdW Ferromagnet at Room Temperature (MIT)

A technical paper titled “Current-induced switching of a van der Waals ferromagnet at room temperature” was published by researchers at Massachusetts Institute of Technology (MIT).

Abstract:

“Recent discovery of emergent magnetism in van der Waals magnetic materials (vdWMM) has broadened the material space for developing spintronic devices for energy-efficient computation. While there has been appreciable progress in vdWMM discovery, a solution for non-volatile, deterministic switching of vdWMMs at room temperature has been missing, limiting the prospects of their adoption into commercial spintronic devices. Here, we report the first demonstration of current-controlled non-volatile, deterministic magnetization switching in a vdW magnetic material at room temperature. We have achieved spin-orbit torque (SOT) switching of the PMA vdW ferromagnet Fe3GaTe2 using a Pt spin-Hall layer up to 320 K, with a threshold switching current density as low as Jsw = 1.69 × 10⁶ A cm⁻² at room temperature. We have also quantitatively estimated the anti-damping-like SOT efficiency of our Fe3GaTe2/Pt bilayer system to be ξDL = 0.093, using the second harmonic Hall voltage measurement technique. These results mark a crucial step in making vdW magnetic materials a viable choice for the development of scalable, energy-efficient spintronic devices.”

Find the technical paper here. Published February 2024. MIT’s related news article and video are here.

Kajale, S.N., Nguyen, T., Chao, C.A. et al. Current-induced switching of a van der Waals ferromagnet at room temperature. Nat Commun 15, 1485 (2024). https://doi.org/10.1038/s41467-024-45586-4
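For context, the quoted ξDL value is typically extracted from second-harmonic Hall data using the standard damping-like SOT efficiency relation. The following gloss is ours, not taken from the paper:

```latex
\xi_{DL} \;=\; \frac{2e}{\hbar}\,\mu_0 M_s\, t_{FM}\,\frac{H_{DL}}{J_e}
```

where M_s is the saturation magnetization, t_FM the ferromagnet thickness, H_DL the damping-like effective field, and J_e the applied charge current density.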

K-Fault Resistant Partitioning To Assess Redundancy-Based HW Countermeasures To Fault Injections

A technical paper titled “Fault-Resistant Partitioning of Secure CPUs for System Co-Verification against Faults” was published by researchers at Université Paris-Saclay, Graz University of Technology, lowRISC, University Grenoble Alpes, Thales, and Sorbonne University.

Abstract:

“To assess the robustness of CPU-based systems against fault injection attacks, it is necessary to analyze the consequences of the fault propagation resulting from the intricate interaction between the software and the processor. However, current formal methodologies that combine both hardware and software aspects experience scalability issues, primarily due to the use of bounded verification techniques. This work formalizes the notion of k-fault resistant partitioning as an inductive solution to this fault propagation problem when assessing redundancy-based hardware countermeasures to fault injections. Proven security guarantees can then reduce the remaining hardware attack surface to consider in a combined analysis with the software, enabling a full co-verification methodology. As a result, we formally verify the robustness of the hardware lockstep countermeasure of the OpenTitan secure element to single bit-flip injections. Besides that, we demonstrate that previously intractable problems, such as analyzing the robustness of OpenTitan running a secure boot process, can now be solved by a co-verification methodology that leverages a k-fault resistant partitioning. We also report a potential exploitation of the register file vulnerability in two other software use cases. Finally, we provide a security fix for the register file, verify its robustness, and integrate it into the OpenTitan project.”

Find the technical paper here. Published 2024 (preprint).

Tollec, Simon, Vedad Hadžić, Pascal Nasahl, Mihail Asavoae, Roderick Bloem, Damien Couroussé, Karine Heydemann, Mathieu Jan, and Stefan Mangard. “Fault-Resistant Partitioning of Secure CPUs for System Co-Verification against Faults.” Cryptology ePrint Archive (2024).

Related Reading
RISC-V Micro-Architectural Verification
Verifying a processor is much more than making sure the instructions work, but the industry is building from a limited knowledge base and few dedicated tools.
New Concepts Required For Security Verification
Why it’s so difficult to ensure that hardware works correctly and is capable of detecting vulnerabilities that may show up in the field.


Increased Automotive Data Use Raises Privacy, Security Concerns

By John Koon
March 7, 2024, 09:09

The amount of data being collected, processed, and stored in vehicles is exploding, and so is the value of that data. That raises questions that are still not fully answered about how that data will be used, by whom, and how it will be secured.

Automakers are competing based on the latest versions of advanced technologies such as ADAS, 5G, and V2X, but the ECUs, software-defined vehicles, and in-cabin monitoring also demand more and more data, and they are using that data for purposes that extend beyond just getting the vehicle from point A to point B safely. They now are vying to offer additional subscription-based services according to customers’ interests, as various entities, including insurance companies, indicate a willingness to pay for information on drivers’ habits.

Collecting this data can help OEMs gain insights and potentially generate additional revenue. However, gathering it raises privacy and security concerns about who will own this massive amount of data and how it should be managed and used. And as automotive data use increases, how will it impact future automotive design?

Fig. 1: Connected vehicles rely on software to communicate between vehicles and the cloud. Source: McKinsey & Co.

“Much of the data generated in the vehicle will have immense value to OEMs and their partners for analyzing driver behavior and vehicle performance and for developing new or enhanced features,” said Sven Kopacz, autonomous vehicle section manager at Keysight Technologies. “On the other hand, the privacy of data use can be viewed as a risk to some. But the real value – as already implemented and used by Tesla and others – is the constant feedback to improve those ADAS algorithms, enable a CI/CD DevOps software development model, and allow the rapid download of updates. Only time will tell if law enforcement and the courts will demand this data and how lawmakers will respond.”

Types of data generated
According to Precedence Research, the global automotive data market size will grow from $2.19 billion in 2022 to $14.29 billion by 2032, with many types of data collected, including:

  • Autonomous driving: Data on all levels, from L1 to L5, including that collected from the multiple sensors installed on vehicles.
  • Infrastructure: Remote monitoring, OTA updates, and data used for remote control by control centers, V2X, and traffic patterns.
  • Infotainment: Information on how customers are using applications, such as voice control, gesture, maps, and parking.
  • Connected information: Information on payment to third-party parking apps, accident information, data from dashboard cameras, handheld devices, mobile applications, and driver behavior monitoring.
  • Vehicle health: Repair and maintenance records, insurance underwriting, fuel consumption, telematics.

This information may be useful for future automotive design, predictive maintenance, and safety improvements, and insurance companies are expected to be able to reduce underwriting costs with more comprehensive information on accidents. Based on the information collected, OEMs should be able to design more reliable and safer cars, and to stay in close touch with customer wants. For example, experiments can be conducted to gauge customer demand for subscription-based services such as automatic parking and more sophisticated voice input and commands.

“Diagnostic data for service and repair has been a core of automotive data analytics for decades,” noted Lorin Kennedy, senior staff product management manager for SLM in-field analytics at Synopsys. “With the advent of connected vehicles and advanced machine learning (ML) analytics, which enable a greater quantity of data to be routinely processed, this data has gained exponentially in value. As data drives feature enhancements such as mobile-like experiences and advanced driver assist capabilities, OEMs increasingly need to better understand the dependability and reliability of the semiconductor systems powering these new features. The collection of monitoring and sensor data from electronic components and the semiconductors themselves will be a growing diagnostic data requirement across all types of automotive technologies like ADAS, IVI, ECUs, etc. to ensure quality and reliability on these more advanced nodes.”

Anticipated updates to ISO 26262 regulations are expected to address the application of predictive maintenance to hardware, the identification of degrading intermittent faults caused by silicon aging, and over-stress conditions in the field. Those areas can be addressed with silicon lifecycle management (SLM) technologies, which can deliver more comprehensive knowledge about the health and remaining useful life of silicon as it ages.

“That knowledge, in turn, will enable service updates and future OTA releases that leverage additional semiconductor compute power,” Kennedy said. “Overall fleet performance will benefit, and the semiconductor and system design process will, too, as new insights help achieve greater efficiencies. OEM, Tier One, and semiconductor supplier collaboration on what the data brings to light – from silicon to software system performance – will enable vehicles to meet the functional safety design parameters that are becoming increasingly crucial in advanced electronics.”

Still, for data generated in vehicles, OEMs will need to prioritize which data can provide value for drivers immediately, and which data should be sent to the cloud via 5G connections.

“Tradeoffs between on-board processing to reduce data volume and data transmission network costs will likely dictate prioritization,” Keysight’s Kopacz said. “For example, camera, lidar, and radar sensor data for ADAS applications may have value for training ADAS algorithms, but the volume of raw data will be very costly to transmit and store. Likewise, driver attention data can have high value in UI design, and would be best gathered in a meta-data form. V2X data has a relatively lower data volume and should ultimately be a key data source for ADAS, providing in-car non-line-of-sight visibility of other vehicles, road infrastructure, and road conditions. Sharing this over V2N links can enable effective safety applications, but angle random walk (ARW) sensor data needs to be considered more carefully due to its complex nature. Infotainment streaming content into the vehicle also can be a valuable revenue stream for OEMs, and the content providers as well, as network operators working together.”

Impacts on automotive cybersecurity
As vehicles become more autonomous and connected, data use will increase, and so will the value of that data. This raises cybersecurity and data privacy concerns. Hackers want to steal personal data collected by the vehicles, and can use ransomware and other attacks to do so. The idea of taking control of vehicles — or worse, stealing them — also attracts hackers. Techniques used include hacking vehicle apps and wireless connections on the vehicles (diagnostics, key fob attacks and keyless jamming). Protecting data access, vehicles, and infrastructure from attacks is increasingly important and challenging.

Cybersecurity risks increase with software-defined vehicles. Memory especially will need to be safeguarded.

“The integration of advanced technology into EVs poses significant cybersecurity challenges that demand immediate attention and sophisticated solutions,” said Ilia Stolov, center head of secure memory solution at Winbond. “Central to the digital fortresses within modern electronic platforms are flash non-volatile memories, housing invaluable assets like code, private data, and company credentials. Unfortunately, their ubiquity has rendered them attractive targets for hackers seeking unauthorized access to sensitive information.”

Stolov noted that Winbond has been actively working to secure flash memory from hacks.

Additionally, there are important considerations in securing memory designs, such as:

  • DICE root of trust: The Device Identifier Composition Engine (DICE) should be used to create the secure flash root of trust for hardware security. This secure identity forms the basis for building trust in the hardware. Other security measures can therefore rely on the authenticity and integrity of the boot code, protecting against firmware and software attacks. The initial boot process and subsequent software execution are based on trusted and verified measurements, helping prevent the injection of malicious code into the system.
  • Code and data protection: Protecting code and data is crucial for maintaining system-wide integrity. Unauthorized modifications to code or data can lead to malfunctions, system instability, or the introduction of malicious code, compromising the hardware’s intended functionality or exploiting system vulnerabilities.
  • Authentication protocols: Authentication is a fundamental and crucial component of cybersecurity, serving as the frontline defense against unauthorized access and potential security breaches. Employing authentication protocols to restrict access to authorized actors and approved software layers only using cryptography credentials is important.
  • Secure software updates with rollback protection: Regular updates extend beyond bug fixes to include remote firmware over-the-air (OTA) updates, guarding against rollback attacks and ensuring the execution of only legitimate updates (see the sketch following this list).
  • Post-quantum cryptography: Anticipating the post-quantum computing era by including NIST SP 800-208 Leighton-Micali Signature (LMS) cryptography safeguards EVs against the potential threats posed by future quantum computers.
  • Platform resiliency: Automatic detection of unauthorized code changes enables swift recovery to a secure state, effectively thwarting potential cyber threats. Adhering to NIST 800-193 recommendations for platform resiliency ensures a robust defense mechanism.
  • Secure supply chain: Guaranteeing the origin and integrity of flash content throughout the supply chain, these secure flash devices prevent content tampering and misconfiguration during platform assembly, transportation, and configuration. This, in turn, safeguards against cyber adversaries.
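To make the rollback-protection item above concrete, here is a minimal sketch of an update check with an anti-rollback version counter. A stdlib HMAC stands in for the asymmetric signature a real secure flash device would verify; all names are illustrative, not Winbond’s API:

```python
import hmac
import hashlib

DEVICE_KEY = b"device-provisioned-secret"  # hypothetical provisioning secret

def accept_update(image: bytes, version: int, tag: bytes, stored_version: int) -> bool:
    """Return True only for an authentic image newer than what is installed."""
    expected = hmac.new(DEVICE_KEY, version.to_bytes(4, "big") + image,
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        return False          # tampered or unsigned image: reject
    if version <= stored_version:
        return False          # rollback to an older (possibly vulnerable) build: reject
    return True               # legitimate, newer update: allow execution
```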

Considering the transition to SDVs and connected cars, data vulnerability becomes even more significant.

“Depending on where data resides, different protection measures are in place,” said Keysight’s Kopacz. “Intrusion detection systems (IDS), crypto services, and key management are becoming standard solutions in vehicles. Especially sensitive data for safety features needs to be protected and verified. Thus, redundancy becomes more relevant. With SDVs, the vehicle software is constantly updated or changed throughout the entire vehicle life cycle. Ever-evolving cyber threats are particularly challenging. Accordingly, the entire vehicle software must be continuously checked for new security gaps. OEMs are going to need comprehensive testing solutions to minimize security threats. This will need to include the cybersecurity testing of the entire attack surface, covering all vehicle interfaces – wired vehicle communication networks such as CAN or automotive Ethernet or wireless connections via Wi-Fi, Bluetooth, or cellular communications. OEMs will also need to test the backend that provides over-the-air (OTA) software updates. Such solutions can reduce the risk of damage or data theft by cybercriminals.”

Data management and privacy concerns
Another issue to be resolved is how the massive amount of data collected will be managed and used. Ideally, data will be analyzed to yield commercial value without causing privacy concerns. For example, infotainment platform data might reveal what types of music are most popular, helping the music industry to improve marketing strategies. Who will monitor the transfer of such data, though? How will customers be made aware of the data collection? And will they have an opportunity to opt out of having their data sold?

As with airplanes, vehicle black boxes are installed to record information for analysis after an accident occurs. The information recorded includes vehicle speed, the braking situation, and the activation of air bags, among other things. If an accident occurs resulting in a fatality, and the data from the ADAS and ECUs uncovers vulnerabilities in the designs, could that data be used as evidence in court against manufacturers or their supply chains? Armed with this information, the insurance industry may decline claims. Would one or more manufacturers of the ADAS/ECUs be required to hand over the data when ordered by the authorities?

“Quality requirements for sophisticated electronic parts will continue to become more rigid and strict, allowing only a few defective parts per billion (DPPB) due to the impact failed components can have on the safety and well-being of human life,” noted Guy Cortez, senior staff product management manager for SLM analytics at Synopsys. “SLM data analytics will continue to play a substantial role in the health, maintainability, and sustainability of these devices throughout their life within the vehicle. Through the power of analytics, you can do proper root cause analysis of any failed device (e.g., return merchandise authorization, or RMA). What’s more, you will also be able to find ‘like’ devices that ultimately may exhibit similar failed behavior over time. Thus empowered, you can proactively recall these like devices before they fail during operation in the field. Upon further analysis, the device(s) in question may require a design re-spin by the device developer in order to correct any identified issue. With a proper SLM solution deployed throughout the automotive ecosystem, you can achieve a higher level of predictability, and thus higher quality and safety for the automotive manufacturer and consumer.”

OEM impact
While modern cars have been described as computers on wheels, they are now more like mobile phones on wheels. OEMs are designing cars that do not skimp on features. Semi-autonomous driving, voice-controlled infotainment systems, and the monitoring of many functions (including driver behavior) are yielding a large amount of data. While that data can be used to improve future designs, OEMs’ approaches to security and privacy vary, with some offering stronger security and privacy protection than others.

Mercedes-Benz is paying attention to data security and privacy, and is compliant with UN ECE R155/R156, a European norm for cybersecurity and software update management systems, according to the company. Which data is processed in connection with digital vehicle services depends on which services the customer selects. Only the data required for the respective service is processed. Additionally, the terms of use and privacy information of the “Mercedes me connect” app make it transparent to customers which data is needed for which service and how it is processed. Customers can determine which services they want to use.

Hyundai indicated it would follow a user-centric focus, prioritizing safety, information security, and data privacy with fault-tolerant software architectures to enhance cybersecurity. Hyundai Motor Group’s global software center, 42dot, is currently developing integrated hardware/software security solutions that detect and block data tampering, hacking, and external cyber threats, as well as abnormal communication using big data and AI algorithms.

And according to the BMW Group, the company manages a connected fleet of more than 20 million vehicles globally. More than 6 million vehicles are updated over-the-air on a regular basis. Together with other services, more than 110 terabytes of data traffic per day are processed between the connected vehicles and cloud-backend. All BMW vehicle interfaces permit consumers to opt in or out of various types of data collection and processing that may happen on their vehicles. If preferred, BMW customers may opt out of all optional data collection relating to their vehicles at any time by visiting the BMW iDrive screen in their vehicle. Additionally, to completely stop the transfer of any data from BMW vehicles to BMW services, customers can contact the company to request that the embedded SIM on their vehicles be disabled.

Not all OEMs hold the same philosophy on privacy. According to a study on 25 brands conducted by the Mozilla Foundation, a nonprofit organization, 56% will share data with law enforcement in response to an informal request, 84% share or sell personal data, and 100% earned the foundation’s “privacy not included” warning label.

More importantly, are customers educated or informed on the privacy issue?

Fig. 2: Once data is collected from a vehicle, it can go to multiple destinations without the knowledge of customers. Source: Mozilla, *Privacy Not Included.

Applying data to automotive design in the future
OEMs collect many different types of automotive data in relation to autonomous driving, infrastructure, infotainment, connected vehicles, and vehicle health and maintenance. The ultimate goal, however, is not just to compile massive raw data; rather, it is to extract value from it. One of the questions OEMs need to ask is how to apply technology to extract information that is really useful in future automotive design.

“OEMs are trying to test and validate the various functions of their vehicles,” said David Fritz, vice president of virtual and hybrid systems at Siemens EDA. “This can involve millions of terabytes of data. Sometimes, a huge portion of the data is redundant and useless. The real value in the data is, once it gets distilled, that it’s in a form where humans can relate to the meaning of the data, and it also can be pushed into the systems while they’re being developed and tested and before the vehicles are even on the ground. We’ve known for quite some time that many countries and regulatory bodies around the world have been collecting what they call an accident database. When an accident occurs, the police show up on the scene collecting relevant data. ‘There was an intersection here, a stop sign there. And this car was traveling in this direction roughly this many miles an hour. The weather condition is this. The car entered the intersection in the yellow light and caused an accident, etc.’ This is an accident scenario. Technologies are available to take those scenarios and put them in a standard form called Open Scenario. Based on the information, a new set of data can be generated to determine what the sensors would be seeing in those accident situations, and then push it through both a virtual version of the vehicle and environment and in the future, and push those scenarios through the sensors in this physical vehicle itself. This is really the distillation of that data into a form that a human can wrap their mind around. Otherwise, you could collect billions of terabytes of raw data and try to push that into these systems, and it wouldn’t actually help you any more than if someone were sitting in a car and dragging those for billions of miles.”

But that data also can be very useful. “If an OEM wants to obtain safety certification, say in Germany, the OEM can provide a set of data of scenarios on how the vehicle will navigate,” Fritz said. “An OEM can provide a set of data to the German authority, with a set of scenarios to prove the vehicle will navigate in a safe manner under various conditions. By comparing that with the data in the accident database, the German government can say that as long as you avoid 95% of the accidents in that database, you’re certified. That’s actionable from the perspectives of human drivers, insurance, engineering, and visual simulation. The data prove the vehicle is going to behave as expected. The alternative is to drive around, as in the case of autonomous vehicles, and try to justify the accident was not caused by the vehicle, while facing the lawsuit. It does not seem to make sense, but that’s what’s happening today.”

Related Reading
Curbing Automotive Cybersecurity Attacks
A growing number of standards and regulations within the automotive ecosystem promises to save development costs by fending off cyberattacks.
Software-Defined Vehicles Ready To Roll
New approach could have big effects on cost, safety, security, and time to market.


Interoperability And Automation Yield A Scalable And Efficient Safety Workflow

March 7, 2024, 09:07

By Ann Keffer, Arun Gogineni, and James Kim

Cars deploying ADAS and AV features rely on complex digital and analog systems to perform critical real-time applications. The large number of faults that need to be tested in these modern automotive designs makes performing safety verification using a single technology impractical.

Yet, developing an optimized safety methodology with specific fault lists automatically targeted for simulation, emulation and formal is challenging. Another challenge is consolidating fault resolution results from various fault injection runs for final metric computation.

The good news is that interoperability of fault injection engines, optimization techniques, and an automated flow can effectively reduce overall execution time to quickly close the loop from safety analysis to safety certification.

Figure 1 shows some of the optimization techniques in a safety flow. Advanced methodologies such as safety analysis for optimization and fault pruning, concurrent fault simulation, fault emulation, and formal based analysis can be deployed to validate the safety requirements for an automotive SoC.

Fig. 1: Fault list optimization techniques.

Proof of concept: an automotive SoC

Using an SoC level test case, we will demonstrate how this automated, multi-engine flow handles the large number of faults that need to be tested in advanced automotive designs. The SoC design we used in this test case had approximately three million gates. First, we used both simulation and emulation fault injection engines to efficiently complete the fault campaigns for final metrics. Then we performed formal analysis as part of finishing the overall fault injection.

Fig. 2: Automotive SoC top-level block diagram.

Figure 3 is a representation of the safety island block from figure 2. The color-coded areas show where simulation, emulation, and formal engines were used for fault injection and fault classification.

Fig. 3: Detailed safety island block diagram.

Fault injection using simulation was too time- and resource-consuming for the CPU core and cache memory blocks. Those blocks were targeted for fault injection with an emulation engine for efficiency. The CPU core is protected by a software test library (STL), and the cache memory is protected by ECC. The bus interface requires end-to-end protection, for which fault injection with simulation was determined to be efficient. The fault management unit was not part of this experiment. Fault injection for the fault management unit will be completed using formal technology as a next step.

Table 1 shows the register count for the blocks in the safety island.

Table 1: Block register count.

The fault lists generated for each of these blocks were optimized to focus on the safety-critical nodes that have safety mechanisms/protection.

SafetyScope, a safety analysis tool, was run to create the fault lists for the failure modes (FMs) for both the Veloce Fault App (fault emulator) and the fault simulator, and it wrote the fault lists to the functional safety (FuSa) database.

For the CPU and cache memory blocks, the emulator took as input the synthesized blocks and the fault injection/fault detection nets (FIN/FDN). Next, it executed the stimulus and captured the states of all the FDNs. The states were saved and used as a “gold” reference for comparison against fault injection runs. For each fault in the optimized fault list, the faulty behavior was emulated, the FDNs were compared against the reference values generated during the golden run, and the results were classified and updated in the fault database with attributes.
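The compare-against-golden flow described above can be summarized in a few lines. This is a paraphrase of the classification logic, not the Veloce Fault App’s actual interface; the data structures are hypothetical:

```python
def classify_fault(golden_fdn: dict, faulty_fdn: dict, alarm_fired: bool) -> str:
    """Classify one injected fault from fault-detection-net (FDN) states."""
    observed = any(faulty_fdn[net] != golden_fdn[net] for net in golden_fdn)
    if alarm_fired:
        return "detected"                 # a safety mechanism flagged the fault
    if observed:
        return "undetected_observed"      # propagated but not flagged -> residual
    return "undetected_unobserved"        # needs further (e.g., formal) analysis
```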

Fig. 4: CPU cluster. Source: https://developer.arm.com/Processors/Cortex-R52

For each of the sub-parts shown in the block diagram, we generated an optimized fault list using the analysis engine. The fault lists are saved into individual sessions in the FuSa database. We used statistical random sampling over the overall fault population to generate the random sample from the FuSa database.

Now let’s look at what happens when we take one random sample all the way through fault injection using emulation. Note that to fully close the fault injection campaign, we processed N such samples.

Table 2: Detected faults by safety mechanisms.

Table 3 shows that the overall fault distribution for total faults is in line with the fault distribution of the randomly sampled faults. The table further captures the total detected faults: 3,125 out of 4,782 total faults. We were also able to model the detected faults per sub-part and provide an overall detected-fault ratio of 65.35%. Based on the faults in the random sample and our coverage goal of 90%, we calculated a margin of error (MOE) of ±1.19%.

Table 3: Results of fault injection in CPU and cache memory.
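The margin-of-error figure can be approximately reproduced with the standard normal-approximation formula for a sampled proportion. The confidence level below is our assumption, and the article’s exact statistical treatment may differ (a finite-population correction, for example, would shift the result slightly):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.645) -> float:
    """MOE = z * sqrt(p*(1-p)/n); z = 1.645 corresponds to 90% confidence."""
    return z * math.sqrt(p * (1.0 - p) / n)

# Detection ratio and sample size from Table 3.
print(f"{margin_of_error(0.6535, 4782):.3%}")   # ~1.13%, in line with the reported ±1.19%
```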

The 3,125 total detected faults (observed + unobserved) have a clear fault classification. The undetected observed faults likewise classify clearly as residual faults. We performed further analysis of the undetected unobserved and not-injected faults.

Table 4: Fault classification after fault injection.

We used several debug techniques to analyze the 616 undetected unobserved (UU) faults. First, we used formal analysis to check the cone of influence (COI) of these UU faults. The faults outside the COI were deemed safe, and five of them were dropped from further analysis. For the faults inside the COI, we applied engineering judgment, with justification based on various configurations (ECC, timers, flash-memory-related logic, etc.). Finally, using formal analysis and engineering judgment, we classified a portion of the 616 UU faults as safe and conservatively classified the remaining UU faults as residual. We also reviewed the 79 residual faults and were able to reclassify 10 of them as safe. The not-injected faults were also tested against the simulation model to check whether any further stimulus could inject them. Since no stimulus was able to inject these faults, we dropped them from consideration and adjusted the margin of error accordingly. With this change, our new MOE is ±1.293%.

In parallel, the fault simulator pulled the optimized fault lists for the failure modes of the bus block and ran fault simulations using stimulus from functional verification. The initial set of stimuli didn’t provide enough coverage, so higher quality stimuli (test vectors) were prepared, and additional fault campaigns were run on the new stimuli. All the fault classifications were written into the FuSa database. All runs were parallel and concurrent for overall efficiency and high performance.

Safety analysis using SafetyScope helped provide more accuracy and reduce fault simulation iterations. After emulation on various tests, the CPU and cache memory blocks achieved an overall single-point fault metric (SPFM) of over 90%, as shown in Table 5.

Table 5: Overall results.
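For reference, the ISO 26262 single-point fault metric follows directly from the fault classification counts. A minimal sketch with illustrative numbers, not the article’s actual Table 5 data:

```python
def spfm(single_point: int, residual: int, total_safety_related: float) -> float:
    """SPFM = 1 - (single-point + residual faults) / total safety-related faults."""
    return 1.0 - (single_point + residual) / total_safety_related

# Illustrative values only: 0 single-point and 69 residual out of 4,782 faults.
print(f"SPFM = {spfm(0, 69, 4782):.2%}")   # -> SPFM = 98.56%
```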

At this time, not all of the fault simulation tests for the bus block (end-to-end protection) had been completed. Table 6 shows that the first test alone was able to resolve 9.8% of the faults very quickly.

Table 6: Percentage of detected faults for BUS block by E2E SM.

We are integrating more tests that have high traffic on the bus to mimic the runtime operating state of the SoC. The results of these independent fault injections (simulation and emulation) were combined to calculate the final metrics on the above blocks, with the results shown in Table 7.

Table 7: Final fault classification post analysis.

Conclusion

In this article we shared the details of a new functional safety methodology used in an SoC-level automotive test case, and we showed how our methodology produces a scalable, efficient safety workflow using optimization techniques for fault injection with formal, simulation, and emulation verification engines. Performing safety analysis prior to running the fault injection was critical and time-saving. The interoperability of multiple engines reading and writing results in a common FuSa database is therefore necessary for a project of this scale.

For more information on this highly effective functional safety flow for ADAS and AV automotive designs, please download the Siemens EDA whitepaper Complex safety mechanisms require interoperability and automation for validation and metric closure.

Arun Gogineni is an engineering manager and architect for IC functional safety at Siemens EDA.

James Kim is a technical leader at Siemens EDA.


Securing DRAM Against Evolving Rowhammer Threats

March 7, 2024, 09:07

Advanced process nodes and higher silicon densities are heightening DRAM’s susceptibility to Rowhammer attacks, as reduced cell spacing significantly decreases the hammer count needed for bit flips.

Rowhammer exploits DRAM’s single-capacitor-per-bit design to trigger bit flips in adjacent cells through repeated memory row accesses. This vulnerability allows attackers to manipulate data, recover sensitive information, and crash processes or systems. First identified in 2014, evolving Rowhammer variants continue to target DRAM, successfully bypassing security techniques such as error correction code (ECC) and target row refresh (TRR).

Fig. 1: DRAMs on a DIMM, with corresponding mapping of row addresses and DRAM banks. A RowHammer attack can flip bits in the same victim row in multiple DRAMs, overwhelming ECC protection. Source: Rambus

Effectively protecting DRAM against Rowhammer requires a multi-layer, system-level implementation of robust security techniques, from encryption and obfuscation to enforced data isolation and advanced error correction schemes. This is easier said than done, however, as countermeasures can potentially impact power, performance, and area (PPA). Engineers should therefore evaluate PPA-security tradeoffs alongside key features and components at the start of the design process.
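As one concrete illustration of a mitigation layer, here is a toy model of target-row-refresh-style tracking: count activations per row and refresh the physical neighbors once a row crosses a hammer-count threshold. Real in-DRAM trackers are far more constrained (limited counters, probabilistic sampling), and the threshold below is invented for illustration:

```python
from collections import Counter

HAMMER_THRESHOLD = 4096   # illustrative only; real thresholds vary by process node

class TrrTracker:
    """Toy activation counter that flags neighbor rows for an extra refresh."""
    def __init__(self) -> None:
        self.activations = Counter()

    def on_activate(self, row: int) -> list[int]:
        self.activations[row] += 1
        if self.activations[row] >= HAMMER_THRESHOLD:
            self.activations[row] = 0
            return [row - 1, row + 1]    # victim rows adjacent to the aggressor
        return []
```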

A top-down, system-level approach to securing DRAM
“Security is always a cat-and-mouse game, and the evolution of Rowhammer attacks and defenses is no different,” said Nicole Fern, senior security analyst at Riscure. “Researchers have demonstrated successful Rowhammer attacks on commercial DRAM modules employing both TRR and ECC, recovering TLS signing keys in several cryptographic libraries (Amazon-s2n (CVE-2022-42962), WolfSSL (CVE-2022-42961), and LibreSSL (CVE-2022-42963)). Many speculate that real-world attacks are imminent. For countermeasures, the question should not be, ‘Will they ultimately be able to counter Rowhammer attacks in general?’ Rather, the question should be: ‘For a specific system and threat model, is the attack effort greater than the value of the assets being targeted and the costs of a successful attack?’”

Traditionally, only PPA tradeoffs are considered during the silicon design process. However, recent hardware-based attacks, including Rowhammer, Meltdown, and Spectre, and those exploiting DVFS features to inject faults from software, such as CLKSCREW and Plundervolt, highlight the importance of prioritizing security during the design process. “Often, it is new features added for performance that create a foothold for attacks,” explained Fern. “As DRAM technology [nodes] shrink over time, with density and performance improving, susceptibility to Rowhammer increases. [Engineers] need to be aware of this effect and proactively design in appropriate countermeasures — with thorough testing ensuring these perform as expected as DRAM technology evolves.”

Jason Oberg, co-founder and CTO at Cycuity, agrees. “Hardware susceptibility is a key component of a larger chain of weaknesses used to exploit vulnerabilities. Rowhammer, a physical attack that’s done remotely, is one of those easy-to-exploit vectors, because if you can flip or modify a bit, you can chain that together with other software-based exploits. In isolation, it may be less of an issue, but in the context of a bigger strain of weaknesses that someone is exploiting, it’s problematic. Many systems vulnerable to Meltdown and Spectre, for example, are also points of concern for exploits like Rowhammer. You wouldn’t worry about these attacks on your smart light bulb or robot vacuum, but I would be concerned about my phone or laptop.”

To address these concerns, various encryption and obfuscation techniques have been proposed to protect DRAM from Rowhammer attacks. “If you encrypt or obfuscate your data, and then someone hammers a row and causes bits to flip, they won’t be able to target a specific bit,” Oberg explained. “They won’t know what the specific bit is. Whereas if it’s just plain text and it’s like a supervisor bit and they know where that supervisor bit is, then they can be very direct with what they’re doing.”

Although these techniques are crucial, Oberg emphasized that security considerations must be part of the design process, starting at the architectural level. “If I’m building a chip using licensed IP, I need to take a step back, analyze its function, and determine the assets that need to be protected,” Oberg noted. “From there, you can license a hardware-based root of trust. Maybe you trust one and not the other, even though it’s cheaper. These are the kind of decisions you should drive at the top level, and then try to manage as best you can without having full control of everything in your supply chain.”

Analyzing a system holistically also allows the design team to reduce the impact of security mitigation on PPA. “If you jump straight into saying, ‘I am concerned about memory,’ then you’re already very isolated,” he said. “If you start picking at each of the weaknesses independently, then the overhead goes up a lot higher because there may be an overlap between [mitigation techniques]. So you should take a higher-level view. It’s important to look at that top level and then drive your security program from that level. If you drive it from the bottom up, you’re going to have huge overheads, a lot of complexity, and you’re going to have problems.”

Ultimately, Oberg sees a combination of system-wide hardware and software solutions, paired with strict access controls and enforced data isolation, as a more effective method of countering exploits like Rowhammer. “In any multi-tenant or shared environment, containers are needed to isolate data. Data should also be assigned, for example, to processor thread A where it can’t be read by another thread. Of course, it can’t just be software. Foundation-level hardware protections are required. Otherwise, software protection will be subverted.”

Siloing processes and tagging memory
Kos Gitchev, senior technical market manager at Cadence, pointed to Arm’s confidential compute architecture (CCA) and memory tagging extension (MTE) as examples of a multi-layered, system-centric defense strategy against various attacks and exploits, including Rowhammer and RAMBleed. CCA ensures data protection during processing by isolating or siloing computation in a secure, hardware-backed environment, while MTE tags memory allocations with metadata that is verified during runtime operations. Although not specifically designed to counter Rowhammer or RAMBleed, both mechanisms help protect against such exploits.

“A Rowhammer attacker can’t say: ‘Well, I’ve taken over the machine and I want to go read this memory,’” Gitchev explained. “If you don’t have the appropriate MTE tags for your process, then you won’t be able to read it. The system will basically block it.”
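
The tag check Gitchev describes can be illustrated with a toy model. The sketch below is a loose Python analogy of MTE-style tagging, not Arm's implementation: real MTE assigns a 4-bit tag to each 16-byte granule of memory and carries the expected tag in the pointer's unused top bits. All names here are invented for illustration.

    class TaggedMemory:
        """Toy model of MTE-style tagging: one 4-bit tag per 16-byte granule."""
        def __init__(self, size):
            self.data = bytearray(size)
            self.tags = [0] * (size // 16)

        def allocate(self, addr, length, tag):
            # Tag every granule of the allocation, and return a "pointer"
            # that carries the expected tag in its top bits.
            for g in range(addr // 16, (addr + length + 15) // 16):
                self.tags[g] = tag
            return (tag << 60) | addr

        def load(self, pointer):
            tag, addr = pointer >> 60, pointer & ((1 << 60) - 1)
            if self.tags[addr // 16] != tag:
                raise PermissionError("tag mismatch: access blocked")
            return self.data[addr]

    mem = TaggedMemory(1024)
    p = mem.allocate(0, 64, tag=5)
    mem.load(p)                    # tags match: read proceeds
    try:
        mem.load((3 << 60) | 0)    # attacker-forged pointer, wrong tag
    except PermissionError as e:
        print(e)                   # the system "basically blocks it"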

To protect data held in DRAM, 128-bit or 256-bit AES encryption is also essential. “This is generally done by the memory subsystem, not the DRAM itself,” Gitchev noted. “Blocks of data will come in, they’ll get encrypted, and then pass to the memory. If anything happens to the encrypted data, it won’t properly decrypt. Encryption is almost always done in conjunction with ECC, so there are almost two layers of protection when you implement this scheme.”

Gitchev emphasized that encryption is only effective if keys are properly managed and secured. “A memory subsystem does the encryption. It has the algorithm and adds the XTS extension. Even when you write two blocks of the same data, they’ll look different on the bus to the memory. Of course, all of this can be overcome if someone compromises the encryption key.”
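
The diffusion effect described above is easy to demonstrate. The minimal sketch below uses the Python cryptography package's AES-XTS mode (an illustration of the principle, not the memory subsystem's actual datapath): the tweak makes identical plaintext blocks encrypt differently, and a single flipped ciphertext bit, standing in for a hammered DRAM cell, garbles the entire decrypted block, so an attacker cannot target one meaningful bit.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(64)              # AES-256-XTS: two 256-bit keys concatenated
    tweak = os.urandom(16)            # per-sector tweak; same data, different ciphertext
    plaintext = b"supervisor=0...."   # one 16-byte block

    enc = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
    ct = bytearray(enc.update(plaintext) + enc.finalize())

    ct[0] ^= 0x01                     # a single Rowhammer-style bit flip

    dec = Cipher(algorithms.AES(key), modes.XTS(tweak)).decryptor()
    print(dec.update(bytes(ct)) + dec.finalize())  # whole block decrypts to noise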

AES encryption can be added without major PPA penalties, making it an optimal choice for memory subsystems. “There are many different encryption schemes out there, but AES is easiest to implement,” said Gitchev. “Adding encryption, however, does increase the number of gates and power. To be fair, most of the memory subsystem power goes into driving the interface [for transferring data off-chip to the memory and back]. There is also a little bit of performance and area cost. The memory subsystem is now bigger because it needs to execute complex mathematical calculations for encryption and decryption in real time without significant latency.”

Tightly coupling encryption and decryption ciphering functions inside the DDR or LPDDR controllers facilitates maximum memory efficiency and lowest overall latency. “When doing both functions separately, certain functionality may have to be repeated, such as bus interface logic or support for read-modify-write operations,” said Ruud Derwig, system architect, solutions group at Synopsys. “When tightly integrated, the scheduler inside the controller can request encryption and decryption at the most optimal times, for example, when overlapping other controller operations or while waiting for data.”

Rowhammer and its variants aren’t necessarily the primary drivers for memory encryption solutions that require secure key management. “Inline memory encryption (IME) is mainly intended to defend against cold-boot attacks and provide confidential compute features,” Derwig said. “For example, a newly created virtual machine (VM) or process may get access to physical memory pages used previously by another VM or process when memory is not erased first, compromising the confidentiality of that previous computing context. With proper key management, IME mitigates these compromises. Or, when the hypervisor itself cannot be trusted, confidentiality of user data is still guaranteed by using different IME keys for different privilege levels and VMs.”

Nevertheless, IME contributes to Rowhammer attack countermeasures, as post-encryption data in the memory appears random to attackers. “Certain data patterns — row-stripe or checkerboard patterns, for example — give the highest success rate for row hammering,” Derwig elaborated. “Moreover, when a single or a few bits are flipped, this is amplified to a full 128-bit decrypted block getting random data, so exploiting bit flips becomes much harder. When there is no attacker control over the changes, it is more likely to get detected by causing malfunctioning. IME [also offers] cryptographically strong integrity protection that mitigates bypassing less strong ECC protection.”

The cycle of Rowhammer attacks and countermeasures will continue as new vulnerabilities are identified and addressed. “Multi-level defenses and mitigations, such as hardware design of memory chips and memory controllers, as well as system software mitigations in hypervisors and operating systems, are needed to [counter] evolving threats,” Derwig added.

Bolstering DRAM reliability in data centers
Although Rowhammer can target any device equipped with DRAM, protecting the data center remains a priority for the semiconductor industry and many security researchers. “New memory used to debut in high-performance PCs and then move into servers,” said Steven Woo, fellow and distinguished inventor at Rambus Labs. “These days, new memory technologies debut for AI [applications] in data centers. The concern is, ‘What if somebody gains access to many servers in the data center and launches programs that intentionally try to repeatedly activate addresses?’ If enough bits flip and can’t be corrected, it could cause what looks like a large hardware fault. You might have to take down memory channels or a machine.”

While the risks of Rowhammer and other exploits in the data center are well known, the semiconductor industry may need more time to comprehensively bolster DRAM security and reliability at the design and system levels. “If you go back 25 or 30 years, nobody was really that concerned about power,” Woo stated. “You can dissipate the heat. You just burn a little more power to get more performance. But today, power is a first-class design parameter that everybody thinks about. Reliability is in that same place that power was in the 2000 to 2005 timeframe, where people are starting to realize, ‘Well, wait a minute, things aren’t infinitely reliable. We’re now going to have to consider DRAM reliability as a first-class design parameter.'”

As DRAM process geometries continue to shrink, electronic engineers will need to develop new or improved architectures and techniques that resist deliberate and repeated errors caused by attackers. “And the tradeoff is, ‘What are you willing to pay to do that? Is there a performance hit? Is there an area hit? Are we storing lots of extra bits?’ In 10 years, we’ll look back and we’ll be talking about reliability in the same way that we talk about power today,” he said.

Bolstering DRAM security and reliability without significantly impacting PPA was the primary driver behind the development of Rambus Labs’ RAMPART: Rowhammer mitigation and repair for server memory systems. Essentially, RAMPART mitigates Rowhammer attacks and improves server memory system reliability by remapping addresses in each DRAM, confining bit flips to a single device for any victim row address. When paired with existing error detection and correction methods, such as single-device data correction (SDDC) and patrol scrub, the system successfully detects and corrects bit flips. To effectively minimize mitigation overhead, RAMPART employs BRC-VL, a variation of DDR5’s bounded refresh configuration (BRC).

Fig. 2: RAMPART row address mappings produce unique neighbors, so Rowhammer attacks have different victim addresses in each DRAM. (a) Circular left shifts of controller row addresses based on unique DRAM IDs are shown. The tables at the bottom illustrate how controller row addresses map to internal bank rows in each DRAM. Row addresses 0x0000 and 0x0001 are bolded to highlight increasing separation with larger shifts. (b) Hammering controller row address 0x0001 flips bits in controller row addresses 0x0000 and 0x0002 in DRAM 0, but controller row addresses 0x8000 and 0x8001 in DRAM 1. A subsequent read to controller row address 0x0000 sees errors only from DRAM 0 that can be corrected with SDDC ECC. Source: Rambus

Assuming 70% area utilization and conservative routing, RAMPART reaches a speed of 2.85GHz in an area of 3910µm², or roughly 51K NAND2 gates. For a server with 1,024 banks, the total area required is only 0.1251mm². “We did a sample implementation in TSMC’s 7nm process, showing RAMPART’s small [footprint],” Woo said. “The controller side of it that does the tracking and figures out how often to issue a mitigation operation is very small, just a few gates. It’s very reasonable to implement something like this in a memory controller, and it has no die size impact as far as we can tell. There’s no latency impact on the accesses. It’s a very simple remapping change. And the DRAM is already doing remapping, so it’s not like asking for a new function. It’s simply modifying an existing function.”
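
The remapping idea is simple enough to sketch. The toy Python model below is our reading of the published description (the 16-bit row width and shift amounts are illustrative): each DRAM applies a circular left shift of the controller row address by its device ID, and inverting the mapping shows which controller rows are victimized, reproducing the behavior in Fig. 2.

    ROW_BITS, MASK = 16, 0xFFFF

    def internal_row(ctrl_row, dram_id):
        """Circular left shift of the controller row address by the DRAM ID."""
        s = dram_id % ROW_BITS
        return ((ctrl_row << s) | (ctrl_row >> (ROW_BITS - s))) & MASK

    def victim_ctrl_rows(hammered_row, dram_id):
        """Controller rows physically adjacent to the hammered row in this DRAM."""
        s = dram_id % ROW_BITS
        row = internal_row(hammered_row, dram_id)
        victims = []
        for v in ((row - 1) & MASK, (row + 1) & MASK):
            victims.append(((v >> s) | (v << (ROW_BITS - s))) & MASK)  # inverse shift
        return victims

    # Hammering controller row 0x0001 hits 0x0000/0x0002 in DRAM 0 but
    # 0x8000/0x8001 in DRAM 1, so any one controller row collects errors
    # from only a single device, which SDDC ECC can correct.
    for dram in range(2):
        print(dram, [hex(v) for v in victim_ctrl_rows(0x0001, dram)])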

Conclusion
The continued proliferation of new and improved Rowhammer variants highlights the critical importance of implementing multi-layered, system-level countermeasures to protect DRAM, alongside other key components and features. These should encompass a wide range of security techniques, from encryption and obfuscation to advanced error correction, address remapping, and data isolation. Still, to fully optimize performance and minimize latency, PPA-security tradeoffs must be assessed from the top down at the start of the design process.

Related Reading
Power/Performance Costs Of Securing Systems
Security requires significant overhead, but it is no longer an option to ignore it. Cybercriminals will continue to exploit weak components.
Developing An Unbreakable Cybersecurity System
New approaches are in research, but threats continue to grow.
DRAM Choices Are Suddenly Much More Complicated
The number of options and tradeoffs is exploding as multiple flavors of DRAM are combined in a single design.

The post Securing DRAM Against Evolving Rowhammer Threats appeared first on Semiconductor Engineering.

Maximizing Energy Efficiency For Automotive Chips

7 March 2024, 09:06
By William Ruby

Silicon chips are central to today’s sophisticated advanced driver assistance systems, smart safety features, and immersive infotainment systems. Industry sources estimate that there are now over 1,000 integrated circuits (ICs), or chips, in an average ICE car, and twice as many in an average EV. Such a large amount of electronics translates into kilowatts of power being consumed – equivalent to a couple of dishwashers running continuously. For an ICE vehicle, this puts a lot of stress on the vehicle’s electrical and charging system, leading automotive manufacturers to consider moving to 48V systems (vs. today’s mainstream 12V systems). These 48V systems reduce the current levels in the vehicle’s wiring, enabling the use of lower-cost, smaller-gauge wire, as well as delivering higher reliability.

For EVs, higher energy efficiency of on-board electronics translates directly into longer range – the primary consideration of many EV buyers (second only to price). Driver assistance and safety features often employ redundant components to ensure reliability, further increasing vehicle energy consumption. Poor energy efficiency in an EV also means more frequent charging, further stressing the power grid and harming the environment. All these considerations call for a comprehensive energy-efficient design methodology for automotive ICs.

What’s driving demand for compute power in cars?

Classification and processing of massive amounts of data from multiple sources in automotive applications – video, audio, radar, lidar – results in a high degree of complexity in automotive ICs as software algorithms require large amounts of compute power. Hardware architectural decisions, and even hardware-software partitioning, must be done with energy efficiency in mind. There is a plethora of tradeoffs at this stage:

  • Flexibility of a general-purpose CPU-based architecture vs. efficiency of a dedicated digital signal processor (DSP) vs. a hardware accelerator
  • Memory sub-system design: how much is required, how it will be partitioned, how much precision is really needed, just to name a few considerations

In order to enable reliable decisions, architects must have access to a system that models, in a robust manner, power, performance, and area (PPA) characteristics of the hardware, as well as use cases. The idea is to eliminate error-prone estimates and guesswork.

To improve energy efficiency, automotive IC designers also must adopt many of the power reduction techniques traditionally used by architects and engineers in the low-power application space (e.g. mobile or handheld devices), such as power domain shutoff, voltage and frequency scaling, and effective clock and data gating. These techniques can be best evaluated at the hardware design level (register transfer level, or RTL) – but with the realistic system workload. As a system workload – either a boot sequence or an application – is millions of clock cycles long, only an emulation-based solution delivers a practical turnaround time (TAT) for power analysis at this stage. This power analysis can reveal intervals of wasted power – power consumption bugs – whether due to active clocks when the data stream is not active, redundant memory access when the address for the read operation doesn’t change for many clock cycles (and/or when the address and data input don’t change for the write operation over many cycles), or unnecessary data toggles while clocks are gated off.
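
As a sketch of what such an analysis looks for, the toy Python scan below flags two of the wasted-power patterns described above in a per-cycle activity trace. The trace format and function names are invented for illustration; a real flow operates on emulation waveforms spanning millions of cycles, not Python lists.

    def find_power_bugs(clock_en, data_valid, rd_addr, min_run=4):
        """Return (start, end, kind) intervals of likely wasted power."""
        issues, start = [], None
        # 1) Clock toggling while the datapath is idle (missing clock gating).
        for t, (en, valid) in enumerate(zip(clock_en, data_valid)):
            if en and not valid:
                start = t if start is None else start
            elif start is not None:
                issues.append((start, t, "active clock, idle data"))
                start = None
        if start is not None:
            issues.append((start, len(clock_en), "active clock, idle data"))
        # 2) Redundant memory reads: the address is unchanged for many cycles.
        run = 1
        for t in range(1, len(rd_addr)):
            run = run + 1 if rd_addr[t] == rd_addr[t - 1] else 1
            if run == min_run:
                issues.append((t - min_run + 1, t, "repeated read address"))
        return issues

    clock_en   = [1, 1, 1, 1, 1, 1, 1, 1]
    data_valid = [1, 1, 0, 0, 0, 1, 1, 1]
    rd_addr    = [8, 8, 8, 8, 8, 9, 10, 11]
    print(find_power_bugs(clock_en, data_valid, rd_addr))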

To cope with the huge amount of data and the requirement to process that data in real time (or near real time), automotive designers employ artificial intelligence (AI) algorithms, both in software and in hardware. Millions of multiply-accumulate (MAC) operations per second and other arithmetic-intensive computations to process these algorithms give rise to a significant amount of wasted power due to glitches – multiple signal transitions per clock cycle. At the RTL stage, with the advanced RTL power analysis tools available today, it is possible to measure the amount of wasted power due to glitches as well as to identify glitch sources. Equipped with this information, an RTL design engineer can modify their RTL source code to lower the glitch activity, reduce the size of the downstream logic, or both, to reduce power.
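
The glitch metric itself reduces to counting surplus transitions. A minimal sketch with an invented trace format: a net that settles cleanly toggles at most once per cycle, so anything beyond that is wasted switching energy that RTL restructuring (balancing an adder tree, for instance) can recover.

    def glitch_energy_fraction(transitions_per_cycle):
        """Fraction of a net's switching activity that is glitch power."""
        total = sum(transitions_per_cycle)
        useful = sum(min(1, n) for n in transitions_per_cycle)
        return (total - useful) / total if total else 0.0

    # A sum bit rippling through an unbalanced MAC tree before settling:
    trace = [1, 3, 1, 5, 2]
    print(f"{glitch_energy_fraction(trace):.0%} of transitions are glitches")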

Working together with the RTL design engineer is another critical persona – the verification engineer. In order to verify the functional behavior of the design, the verification engineer is no longer dealing just with the RTL source: they also have to verify the proper functionality of the global power reduction techniques such as power shutoff and voltage/frequency scaling. Doing so requires a holistic approach that leverages a comprehensive description of power intent, such as the Unified Power Format (UPF). All verification technologies – static, formal, emulation, and simulation – can then correctly interpret this power intent to form an effective verification methodology.

Power intent also carries through to the implementation part of the flow, as well as signoff. During the implementation process, power can be further optimized through physical design techniques while conforming to timing and area constraints. Highly accurate power signoff is then used to check conformance to power specifications before tape-out.

Design and verification flow for more energy-efficient automotive SoCs

Synopsys delivers a complete end-to-end solution that allows IC architects and designers to drive energy efficiency in automotive designs. This solution spans the entire design flow from architecture to RTL design and verification, to emulation-driven power analysis, to implementation and, ultimately, to power signoff. Automotive IC design teams can now put in place a rigorous methodology that enables intelligent architectural decisions, RTL power analysis with consistent accuracy, power-aware physical design, and foundry-certified power signoff.

The post Maximizing Energy Efficiency For Automotive Chips appeared first on Semiconductor Engineering.

Microarchitecture Vulnerabilities: Uncovering The Root Cause Weaknesses

7 March 2024, 09:05
By Jason Oberg

In early 2018, the tech industry was shocked by the discovery of hardware microarchitecture vulnerabilities that bypassed decades of work put into software and application security. Meltdown and Spectre exploited performance features in modern application processors to leak sensitive information about victim programs to an adversary. This leakage occurs through the hardware itself, meaning that malicious software can extract secret information from users even if software protections are in place, because the leakage happens in hardware, below the view of software. Since these so-called transient execution vulnerabilities were first publicly disclosed, dozens of variants have been identified that all share a set of common root cause weaknesses, but the specifics of that commonality were not broadly understood by the security community.

In early 2020, Intel Corporation, MITRE, Cycuity, and others set out to establish a set of common weaknesses for hardware, enabling a more proactive approach to hardware security and reducing the risk of future hardware vulnerabilities. The initial set of weaknesses, in the form of Common Weakness Enumerations (CWEs), was broad and covered weaknesses beyond just transient execution vulnerabilities like Meltdown and Spectre. While this initial set of CWEs was extremely effective at covering the root causes across the entire hardware vulnerability landscape, precise and specific coverage of transient execution vulnerabilities was still lacking, primarily because of the sheer complexity, volume, and cleverness of these vulnerabilities.

In the fall of 2022, technical leads from AMD, Arm, Intel (special kudos to Intel for initiating and leading the effort), Cycuity, and Riscure came together to dig into the details of publicly disclosed transient execution vulnerabilities, understand their root causes, and distill them into a set of precise yet comprehensive root cause weaknesses expressed as CWEs. The goal was to help the industry not only understand the root causes of these microarchitecture vulnerabilities, but also prevent future, unknown vulnerabilities from being discovered. The recent announcement of the four transient execution weaknesses was the result of this collaborative effort over the last year.

CWEs for microarchitecture vulnerabilities

To come up with these root cause weaknesses, we researched every known publicly disclosed microarchitecture vulnerability (Common Vulnerabilities and Exposures [CVEs]) to understand the exact characteristics of each vulnerability and its root causes. The following common weaknesses emerged, each with a brief summary in layman’s terms from my perspective:

CWE-1421: Exposure of Sensitive Information in Shared Microarchitectural Structures during Transient Execution

  • Potentially leaky microarchitectural resources are shared with an adversary. For example, sharing a CPU cache between victim and attacker programs has been shown to result in timing side channels that can leak secrets about the victim.

CWE-1422: Exposure of Sensitive Information caused by Incorrect Data Forwarding during Transient Execution

  • The forwarding or “flow” of information within the microarchitecture can result in security violations. Often, various events (speculation, page faults, etc.) will cause data to be incorrectly forwarded from one location of the processor to another, often to a leaky microarchitectural resource like the one described in CWE-1421.

CWE-1423: Exposure of Sensitive Information caused by Shared Microarchitectural Predictor State that Influences Transient Execution

  • An attacker is able to affect or “poison” a microarchitectural predictor used within the processor. For example, branch prediction is commonly used to speculatively fetch instructions based on the expected outcome of a branch in a program, increasing performance. If an adversary is able to affect the branch prediction itself, they can cause the victim to execute code in branches of their choosing.

CWE-1420: Exposure of Sensitive Information during Transient Execution

  • A general transient execution weakness, for cases where none of the other weaknesses above quite fits.

Within each of the CWEs listed above, you can find details about observed examples, or vulnerabilities, which result from these weaknesses. Some vulnerabilities, such as Spectre-V1, require the presence of CWE-1421, CWE-1422, and CWE-1423, while others, like Meltdown, require only CWE-1422 and CWE-1423.

Since detecting these weaknesses can be a daunting task, each of the CWEs outlines a set of detection methods. One detection method highlighted in each CWE entry is the use of information flow to track how data moves through the microarchitecture and ensure it is handled securely (a toy sketch of the idea follows the list below). Information flow can be used for each of the CWEs as follows:

  • CWE-1421: information flow analysis can be used to ensure that secrets never end up in a shared microarchitectural resource.
  • CWE-1422: ensure that secret information is never improperly forwarded within the microarchitecture.
  • CWE-1423: ensure that an attacker can never affect or modify the predictor state in a way that is observable by the victim. In other words, information from the attacker should not flow to the predictor if that information can affect the integrity of the predictor for the victim.
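
The toy Python model below shows the flavor of such information flow (taint) tracking under the simplest possible rules: every value carries a secret label, labels propagate through operations, and a check fires when a labeled value reaches a shared structure, the CWE-1421 pattern. This is a sketch of the concept only; tools like Radix perform this analysis on actual RTL, and every name here is invented for illustration.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Tainted:
        value: int
        secret: bool = False

        def __add__(self, other):
            # Labels propagate: any result derived from a secret is secret.
            return Tainted(self.value + other.value, self.secret or other.secret)

    class SharedCache:
        def fill(self, line: Tainted):
            # CWE-1421 pattern: secret data reaching a shared resource.
            if line.secret:
                raise RuntimeError("info-flow violation: secret reached shared cache")

    key, base = Tainted(0x41, secret=True), Tainted(0x1000)
    cache = SharedCache()
    cache.fill(base + base)        # fine: nothing secret involved
    try:
        cache.fill(base + key)     # flagged: cache index derived from the secret
    except RuntimeError as e:
        print(e)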

Our Radix products use information flow at their core, and we have already demonstrated Radix’s ability to detect Meltdown and Spectre. We look forward to continuing to work with the industry and our customers and partners to further advance the state of hardware security and reduce the risk of vulnerabilities being discovered in the future.

The post Microarchitecture Vulnerabilities: Uncovering The Root Cause Weaknesses appeared first on Semiconductor Engineering.

V2X Path To Deployment Still Murky

7 March 2024, 09:05
By Ann Mutschler

Experts at the Table: Semiconductor Engineering sat down to discuss Vehicle-To-Everything (V2X) technology and the path to deployment, with Shawn Carpenter, program director for 5G and space at Ansys; Lang Lin, principal product manager at Ansys; Daniel Dalpiaz, senior manager product marketing, Americas, green industrial power division at Infineon; David Fritz, vice president of virtual and hybrid systems at Siemens EDA; and Ron DiGiuseppe, senior marketing manager, automotive IP segment at Synopsys. What follows are excerpts from that conversation.

L-R: Ansys’ Carpenter; Ansys’ Lin; Infineon’s Dalpiaz; Siemens EDA’s Fritz; Synopsys’ DiGiuseppe.

SE: What is the potential of vehicle-to-everything technology, and what role will the semiconductor ecosystem play in making this a reality?

DiGiuseppe: V2X is a technology that’s not just years, but decades, in the making. It initially started as a dedicated short-range communications (DSRC) type of technology, and has globally transitioned into a cellular technology, although many of those V2X applications are not just cellular. There are other spectrum allocations V2X can run on, including WiFi or other general-use technology. So it’s not limited to cellular. Also, it’s not just a technology. It’s an application, an outcome, and there are a lot of valuable uses, many of which are safety-related, but there are others, such as efficiency of traffic management notifications. V2X has a wide number of uses. The deployment will be done in stages, and there’s a lot of activity even though it’s taken a long time.

Lin: When I see the keyword V2X, it reminds me of everything about how the car can communicate with anything in the world. It’s a very exciting moment that we’re here today to be able to make the kind of technology that enables great communication between vehicles and people, network infrastructure, and other cars. Today, some of this is already implemented. For instance, in car network systems we can already connect our phone to the car, but we’re still in the first mile. We’ve started on the journey, but we have a long way to go as far as how to connect car-to-car, how to connect the car to the entire infrastructure of networks, and to the internet. There are a lot of unknowns on the road while we start driving on this journey, and safety and security are definitely the biggest concerns. What if my network is being jeopardized?

Dalpiaz: V2X is part of a much bigger smart grid ecosystem. This will certainly play a very important role, especially as the grid becomes smart and decentralized. This is what will enable the future energy ecosystem, having renewable energies, energy storage systems all connected. And as we see more EVs being used as mobile battery storage, this is something that will certainly enable, and is part of, a smart grid ecosystem that everybody’s talking about.

Fritz: The days of independent semiconductor and software development are over. It is the need for OEMs to control their own destiny, driven by growing consumer and competitive demand, that has all but eliminated the ability to sell a one-size-fits-all product. We’ve known for a very long time that software needs to drive semiconductors, and semiconductors need to drive software. This symbiotic relationship, and the tools and methodologies needed to support this paradigm shift, are essential to producing a highly successful, complex, and competitive solution that meets consumer demands.

SE: What are the discrete pieces of V2X that need to be connected?

Dalpiaz: From the semiconductor point of view, especially with the usage of wide bandgap materials, a few companies are seeing that it’s possible to increase efficiency and power density. Being able to not only provide such solutions, but have everything connected in one box, is part of the smart ecosystem. Then, having the electric vehicles, energy storage, solar — everything combined into one box. Twenty years ago, before the iPhone, we used to have a fax machine, a camera for photographs, a computer. The future of this ecosystem is going to have one box sitting in your home, and have all this stuff connected together. So from the semiconductor point of view, especially with silicon carbide, it is something that is possible today, and it can achieve a very high level of efficiency — about 99%, very close to 100%. And of course, we need to make the system smaller to fit in a vehicle.

DiGiuseppe: One of the key stakeholders is the cellular companies. When we look at cellular V2X, one of the main challenges is interoperability. You have different devices in different model-year cars, so for the vehicle-to-vehicle communications, those different devices need to be interoperable. Then, the car will be talking to the infrastructure, so the roadside units need to be interoperable with the cars and devices in the cars. Then, of course, you have vehicle-to-pedestrians, vehicle-to-e-mobility like vehicle-to-bicycles, vehicle-to-motorcycles interoperability between all the devices over the medium. Whether it’s cellular or Wi-Fi or other technologies, it all needs to be interoperable. That will allow deployments in one locality to work in another locality, because even if they’re interoperable in one deployment in one region, we’ve got to make sure they’re also interoperable in other regions. So it’s a large scale interoperability goal.

Lin: Ron, you’re talking about interoperability, and Daniel talked about the ecosystem. From my side, I would also mention some standards are necessary. For EDA, to help build such an ecosystem and chips, we need some rules to give to engineers as to what’s to be followed. There are two important standards in my mind. One is the vehicle safety standard ISO 26262, which regulates a couple of safety standards for on-road vehicle chip design. Another is the cyber security standard, ISO 21434. If I make a tool, I probably will follow those standards, and then think about how the tool could help users decide a pass/failure criteria regarding their design, making sure to meet the security and safety target from the standard.

DiGiuseppe: In addition to standards, last October the U.S. Department of Transportation released its national V2X deployment plan. That plan, which is still in draft feedback stage, lays out — at least in the U.S. — the whole timeline for deployments. That kind of oversight plan overlays onto the standards that Lang was just talking about. That deployment plan outlines the different contributions from all the different stakeholders, from the automakers/OEMs to the software developers for the applications. So overlaid on top of standards is a deployment plan, and a government deployment plan outlines that. Plus, there are a lot of government stakeholders, like the FCC allocating spectrum and the Department of Transportation overseeing all these deployments, and that’s in addition to the technology providers.

Fritz: It would take days to adequately answer those questions, but at the core, the root design components are connectivity, power, performance, and acceleration. Connectivity with the proper protocols allows computational tasks to be distributed. This is particularly important in automotive, where the physical distance between sensing, actuating and computing nodes is critical for predictable performance. In the case of V2X, connectivity enables the normalization of external data, whether it involves smart city infrastructure or another vehicle. It’s important to note that the form of the shared data grows exponentially with the capacity to describe the environment, and therefore the compute requirements to process and understand it. For example, a data form that can describe signage in the U.S. is relatively small, but a universal one that recognizes variations is much larger and more ambiguous. This drives design parameters that directly impact manufacturing, development, and service cost functions. Further, the normalization of the data has an impact on the overall design and design component interactions. In the case of power, it goes without saying that high compute requirements, and the associated necessary cooling, can have a significant impact on EV range and manufacturing costs. Performance can take many forms, but as software loads increase with hypervisors, specialized operating systems, and protocol stacks, not to mention very complex application software, all must meet stringent mission critical requirements. Finally, acceleration is of growing importance because it allows workloads to be handed off to specialized hardware that is better equipped to handle that load. For example, running AI inferencing on a CPU is typically far slower and more power-hungry than on an NPU, while a GPU could be idle and available to do the same task. On the other hand, a small CNN can be handled quite easily on a CPU with a few simple instructions. It is at the intersection of these major design components where an OEM will find its differentiation. Therefore, having a system capable of exploring this complex hardware and software space quickly, and with a small team, is critical for an OEM to demand of its suppliers what is required for the success of its platforms. Again, controlling your own destiny is essential to survival.

SE: With all of this interoperability, what happens when parts of the ‘everything’ — whether it’s the car, the infrastructure, or pedestrians — are not updated with the latest technologies or lack what needs to be there for connectivity?

DiGiuseppe: Adding to that challenge is backward compatibility for automotive. For someone buying a car in 2025, you would expect any V2X technology to work in 2040. But in the meantime, all those standards that we’re talking about are continuing to evolve, so they need to be backward compatible.

Carpenter: This highlights the need for a digital twin capability for modeling this infrastructure to be able to understand that when we get two years down the road, some devices may not be reprogrammable. We may not be able to flash a particular device. We need to be able to look at that, and be able to simulate that in advance to understand what will happen. What will this do? We’re seeing this show a little bit, even giving a nod to what Ron was talking about earlier with interoperability. We have customers who want to be able to validate real hardware stuff that they’re developing on the lab bench, but they want to do it with the fidelity of a real system operating on a car, in a virtual city, with the live interaction of the channel with a gNodeB 5G base station mounted up on a building someplace, and they want to know how this will work in the context of the situation that it’s supposed to serve. And if something goes wrong in that scene, can we introduce something into this device and run our real silicon development platform against it to understand what happens here. If we go into a deep shadow, a deep fade area, and I’m not getting updates, yet I’m hurtling down the road at a certain speed, how long can I do this before I receive corrective information? What if someone’s software deck out there doesn’t get reprogrammed or doesn’t get the latest version of the standard safety protocols or something like that? We’re going to need this ability to carry models of stuff that was built two or three years ago in today’s infrastructure, model that, and understand in advance what’s going to happen with it so that we have an approach to do this. This is what the Department of Defense is doing today with their digital thread enablement, to have a way to capture that with legacy models of what they built years ago, but apply it in modern missions and understand, ‘Does it work? Does it fit? Does it not fit? What do we need to do to the existing system to make sure that we’re safe here?’ That is an approach we clearly see the automakers beginning to look at as a way to future-proof some of these systems and make sure that they’ve got a way to test them as they go forward.

Fritz: It’s become very clear from several popularized incidents that simply stopping and waiting for tech support to find you and get you going again is not going to be a successful strategy. In the end, the vehicle must make decisions at least as thoughtful as an average human would make. This is entirely possible, but not if too much emphasis is placed in the design phase on the dependencies between communicating (or non-communicating) actors. For this reason, we will always require sophisticated decision-making in-vehicle to be widely accepted.

SE: How does the design team stay up to date with everything?

DiGiuseppe: On the vehicle side, they’re going to be relying on over-the-air (OTA) software updates, which is relatively new in the automotive industry. But clearly, once we identify a software update, we’re going to need to roll out that software update, and OTA is obviously going to be used hand-in-hand with the updates to V2X as it moves forward.

SE: From a developer standpoint, they have to design to these all these regulations. What are the issues here?

Lin: As a software developer, if you think about a vehicle 10 years ago, you mainly just replaced hardware. You replaced your brakes, you replaced your engine, added some fluid. That was the old style. Right now, if you have the V2X network, you’d expect probably daily updates because software is evolving daily, and your whole communication system infrastructure is under the whole internet evolution, so you’re going to have to keep pace with it. That’s a lot of work for developers.

Carpenter: There could be implications for edge processing. The telecommunications providers are going to need to put a lot more compute closer to the radio head, and clearly they’re already exploring the possibilities of getting not just central processing cores, CPU cores, but there will be GPU cores and Tensor Processing Units, and we don’t know what all yet for AI, that will be a part of this safety infrastructure and information/infotainment delivery. There’s a lot more compute that’s going to have to happen with a much shorter latency. Augmented reality with heads-up displays — imagine the possibilities coming in safety systems with heads-up displays in cars. Then imagine the amount of processing that it’s going to take. So the telecom providers will need to be a major part of that, together with most of the local government regulatory groups that are going to foster that safety system. Each municipality probably has to decide what they adopt, what level of standard they will use and deliver. Who invests in that? The future is really exciting, but there are a few things yet to be sorted out in terms of the investment needed to really deliver that promise.

Dalpiaz: I’m more in the infrastructure side, and one of the questions we always have is, ‘With all this focus on renewables and decentralization of the grid, can the grid handle such expectations or such projects?’ Having more people connecting and feeding energy back into the grid, and managing all of this, that’s always the question that you have to go through and consider.

Fritz: The fact is that keeping up to date is not practical. However, that doesn’t mean that a methodology cannot be employed to accept changes into the development system, and therefore be folded into the development process. CI/CD systems with digital twin golden models already are being developed, with nightly regressions run against complex (and possibly changing) requirements. In this way, requirement changes are automatically addressed as they occur, and solutions can be rolled into an Agile methodology through nightly regressions. This is an important benefit of a modern development methodology that has been used in other industries for years, but it’s just now finding purchase in progressive automotive companies.

Related Reading
Growing Challenges For Increasingly Connected Vehicles
OEMs have high expectations for connected vehicles and global growth opportunities, but it’s not that simple.
Software-Defined Vehicles Ready To Roll
New approach could have big effects on cost, safety, security, and time to market.

The post V2X Path To Deployment Still Murky appeared first on Semiconductor Engineering.

Accelerate Complex Algorithms With Adaptable Signal Processing Solutions

7 March 2024, 09:03
By Jayson Bethurem

Technology is continuously advancing, exponentially increasing the amount of data produced. Data comes from a multitude of sources and formats, requiring systems to process different algorithms. Each of these algorithms presents its own challenges, including low-latency and deterministic processing to keep up with incoming data rates and rapid response times. Considering that many of these semiconductors are designed years in advance of evolving technology, this poses a challenging problem for IC designers. Take video systems, for example: the resolution and color depth of imaging sensors are doubling every few years, affecting how the generated data gets processed. AI algorithms, meanwhile, update nearly every year. In both cases, not only do the data paths need to increase in width and throughput, but so does the memory for weights and activations. It’s nearly unimaginable to build an IC today that doesn’t have some level of adaptability.

Fig. 1: Common adaptive noise filter design.

This is not a new concept. For years, many engineers have utilized FPGAs for this type of processing, which spans from I/Q data from radio comms to video streams from image sensors to BLDC motor control algorithms to AI models. FPGAs are perfect for handling complex algorithms that benefit from parallel and pipelined processing. Additionally, FPGA architectures are loaded with embedded memory, which can be tightly coupled to increase determinism and performance of algorithms, whereas processors can be bogged down with memory fetching, cache misses and low-level interrupts.
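
As a concrete example of the kind of kernel shown in figure 1, here is a minimal LMS adaptive noise canceller in Python/NumPy. This is a sketch of the algorithm only, not a Flex Logix implementation; on an FPGA the per-tap MACs would run in parallel in DSP blocks, with the weights and delay line held in tightly coupled embedded memory.

    import numpy as np

    def lms_cancel(noisy, noise_ref, taps=8, mu=0.01):
        """Subtract an adaptive estimate of the noise from the input."""
        w = np.zeros(taps)
        out = np.zeros_like(noisy)
        for n in range(taps, len(noisy)):
            x = noise_ref[n - taps:n][::-1]   # tap delay line
            out[n] = noisy[n] - w @ x         # error = cleaned sample
            w += 2 * mu * out[n] * x          # LMS weight update
        return out

    t = np.arange(4000) / 4000.0
    signal = np.sin(2 * np.pi * 5 * t)
    noise = 0.5 * np.sin(2 * np.pi * 50 * t)      # correlated interference
    ref = noise + 0.01 * np.random.randn(t.size)  # what a second sensor sees
    cleaned = lms_cancel(signal + noise, ref)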

As data evolves and becomes more complex, FPGAs have followed suit, greatly increasing their processing capability with hardened signal processing blocks that have themselves grown more capable over time. Many SoCs and ASICs have adjacent FPGAs to solve these processing challenges and cover this capability. However, discrete FPGA implementations have a few drawbacks: price and power, as well as limited data transactions between the FPGA and external components like processors. With Flex Logix eFPGA IP, any device can adopt this level of capability while reducing the cost and power overhead of a discrete FPGA by nearly 90%.

Like traditional FPGAs, Flex Logix EFLX IP includes 6-input LUT programmable logic, embedded memory, and DSP blocks featuring 22×22 multipliers with 48-bit accumulators.

Fig. 2: EFLX eFPGA IP.

Unlike traditional FPGAs, these IP blocks can be scaled to fit your specific application. Depending on your application, you can select higher or lower DSP-to-logic and memory-to-logic ratios. Thus, algorithms needing more memory and multipliers than logic can utilize higher ratios of DSP cores.

Flex Logix IP has evolved with signal processing demands and recently introduced InferX IP, which can dramatically increase performance and lower power consumption. InferX is effectively a scalable one-dimensional tensor processor (vector & matrix) controlled by the eFPGA fabric, which allows this IP to adapt to any signal processing algorithm implementation, including AI models. InferX has roughly 10 times the DSP performance of the aforementioned DSP IP and uses only one-quarter of the area. And while many associate TPUs with AI applications, this IP is ideal for any vector/matrix computation.

Fig. 3: InferX IP scalable from 1/8th of a tile to > 8 tiles.

InferX achieves up to dozens of TeraMACs/second at TSMC’s 5nm node. It is ideal for applications including FFT, FIR, IIR, beamforming, matrix/vector operations, matrix inversions, Kalman functions, and more. It can handle real or complex INT16×16 operations with accumulation at INT40 for accuracy. Multiple DSP operations can be pipelined in streaming mode or packet mode. See below for more benchmarks for common algorithms running on TSMC’s 5nm node.
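
A quick back-of-the-envelope check of the INT16×16/INT40 figures (our arithmetic, not a statement about the InferX microarchitecture): an int16×int16 product needs up to 32 bits, so a 40-bit accumulator leaves 8 bits of growth, enough to sum 256 worst-case products without overflow.

    import numpy as np

    a = np.random.randint(-2**15, 2**15, size=256, dtype=np.int64)
    b = np.random.randint(-2**15, 2**15, size=256, dtype=np.int64)
    acc = int(np.sum(a * b))        # modeled at 64 bits here, 40 in hardware
    # Worst case |acc| <= 256 * 2^30 = 2^38, which fits in an INT40.
    assert -(2**39) <= acc < 2**39
    print(f"accumulator magnitude needs {acc.bit_length() + 1} bits (<= 40)")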

InferX DSP solutions are easily programmed via common tools like MATLAB Simulink. Flex Logix has built a ready-to-use standard Simulink block set that provides simplified configuration and bit-accurate modeling with flexible precision.

Fig. 4: Simulink design flow.

Fig. 5: Cycle-accurate simulation of InferX soft logic driving InferX TPUs ensures functionality.

InferX IP works seamlessly with Flex Logix EFLX eFPGA IP and can be reconfigured in microseconds, enabling ICs to adapt to any data stream and the appropriate algorithm in near real-time. For ASIC manufacturers to accomplish this, they would have to multiplex between several hardened algorithms, forfeiting support for future algorithms and new data streams. Flex Logix IP is the perfect adaptable accelerator for all semiconductors and is available for many nodes, including advanced nodes like TSMC 5nm and 3nm, as well as planned support for Intel 18A.

Want to learn more about Flex Logix EFLX IP and signal processing solutions?
Contact us at [email protected] to learn more or visit our website https://flex-logix.com.

The post Accelerate Complex Algorithms With Adaptable Signal Processing Solutions appeared first on Semiconductor Engineering.
