Keeping Up With New ADAS And IVI SoC Trends

By: Hezi Saar
August 1, 2024, 09:10

In the automotive industry, AI-enabled automotive devices and systems are dramatically transforming the way SoCs are designed, making high-quality and reliable die-to-die and chip-to-chip connectivity non-negotiable. This article explains how interface IP for die-to-die connectivity, display, and storage can support new developments in automotive SoCs for the most advanced innovations such as centralized zonal architecture and integrated ADAS and IVI applications.

AI-integrated ADAS SoCs

The automotive industry is adopting a new electronic/electric (EE) architecture where a centralized compute module executes multiple applications such as ADAS and in-vehicle infotainment (IVI). With the advent of EVs and more advanced features in the car, the new centralized zonal architecture will help minimize complexity, maximize scalability, and facilitate faster decision-making. This new architecture demands a new set of SoCs on advanced process technologies with very high performance. More traditional monolithic SoCs for single functions like ADAS are giving way to multi-die designs where various dies are connected in a single package and placed in a system to perform a function in the car. While such multi-die designs are gaining adoption, semiconductor companies must remain cost-conscious, as these ADAS SoCs will be manufactured in high volumes for a myriad of safety levels.

One example is the automated driving central compute system. The system can include modules for the sensor interface, safety management, memory control and interfaces, and dies for CPU, GPU, and AI accelerator, which are then connected via a die-to-die interface such as the Universal Chiplet Interconnect Express (UCIe).

Figure 1 illustrates how semiconductor companies can develop SoCs for such systems using multi-die designs. For a base ADAS or IVI SoC, the requirement might just be the CPU die for level 2 driving automation. A GPU die can be added to the base CPU die for a base ADAS or premium IVI function at level 2+ driving automation. To allow more compute power for AI workloads, an NPU die can be added to the base CPU or the base CPU and GPU dies for level 3/3+ driving automation. None of these scalable scenarios are possible without a solution for die-to-die connectivity.

Fig. 1: A simplified view of automotive systems using multi-die designs.
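For readers who think in code, here is a minimal sketch of that scalability; the die names, link labels, and level assignments are illustrative assumptions derived from the description above, not a reference design.

```python
# Illustrative sketch only: models the scalable multi-die configurations described
# above (base CPU die, CPU+GPU, CPU+GPU+NPU), with each added die attached over a
# UCIe die-to-die link. Names and level mappings are assumptions for illustration.

from dataclasses import dataclass, field

@dataclass
class MultiDieConfig:
    name: str
    dies: list                 # dies placed in the package
    target_level: str          # driving-automation level the configuration targets
    d2d_links: list = field(default_factory=list)  # UCIe connections between dies

    def add_die(self, die: str) -> "MultiDieConfig":
        """Return a new configuration with one more die, linked via UCIe."""
        return MultiDieConfig(
            name=f"{self.name}+{die}",
            dies=self.dies + [die],
            target_level=self.target_level,
            d2d_links=self.d2d_links + [(self.dies[-1], die, "UCIe")],
        )

base = MultiDieConfig("Base IVI/ADAS", ["CPU"], "Level 2")
premium = base.add_die("GPU")            # base ADAS / premium IVI
premium.target_level = "Level 2+"
ai_ready = premium.add_die("NPU")        # extra AI compute for higher automation
ai_ready.target_level = "Level 3/3+"

for cfg in (base, premium, ai_ready):
    print(cfg.name, cfg.dies, cfg.target_level, cfg.d2d_links)
```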

The adoption of UCIe for automotive SoCs

The industry has come together to define, develop, and deploy the UCIe standard, a universal interconnect at the package level. In a recent news release, the UCIe Consortium announced “updates to the standard with additional enhancements for automotive usages – such as predictive failure analysis and health monitoring – and enabling lower-cost packaging implementations.” Figure 2 shows three use cases for UCIe. The first use case is for low latency and coherency, where two Network-on-Chip (NoC) blocks are connected via UCIe. This use case is mainly for applications requiring ADAS computing power. The second automotive use case is when memory and IO are split into two separate dies and are then connected to the compute die via CXL and UCIe streaming protocols. The third automotive use case is very similar to what is seen in HPC applications, where a companion AI accelerator die is connected to the main CPU die via UCIe.

Fig. 2: Examples of common and new use cases for UCIe in automotive applications.

To enable such automotive use cases, UCIe offers several advantages, all of which are supported by the Synopsys UCIe IP:

  • Latency-optimized architecture: The Flit-Aware Die-to-Die Interface (FDI) or Raw Die-to-Die Interface (RDI) operates with a local 2GHz system clock. Transmitter and receiver FIFOs accommodate phase mismatch between clock domains. There is no clock domain crossing (CDC) between the PHY and Adapter layers, for minimum latency. The reference clock has the same frequency for the two dies.
  • Power-optimized architecture: The transmitter provides a CMOS driver without source termination. It offers programmable drive strength without a Feed-Forward Equalizer (FFE). The receiver provides a continuous-time linear equalizer (CTLE) without a VGA or decision feedback equalizer (DFE), clock forwarding without Clock and Data Recovery (CDR), and optional receiver termination.
  • Reliability and test: Signal integrity monitors track the performance of the interconnect through the chip’s lifecycle. This can monitor inaccessible paths in the multi-die package, test and repair the PHY, and execute real time reporting for preventative maintenance.

Synopsys UCIe IP is integrated with Synopsys 3DIC Compiler, a unified exploration-to-signoff platform. The combination eases package design and provides a complete set of IP deliverables, automated UCIe routing for better quality of results, and reference interposer design for faster integration.

Fig. 3: Synopsys 3DIC Compiler.

New automotive SoC design trends for IVI applications

OEMs are attracting consumers by providing the utmost in cockpit experience with high-resolution, 4K, pillar-to-pillar displays. Multi-Stream Transport (MST) enables a daisy-chained display topology in which a single port, consisting of a single GPU, one DP TX controller, and a PHY, drives images on multiple screens in the car. This daisy-chained setup simplifies the display wiring in the car. Figure 4 illustrates how connectivity in the SoC can enable multi-display environments in the car. Row 1: Multiple image sources from the application processor are fed into the daisy-chained display setup via the DisplayPort (DP) MST interface. Row 2: Multiple image sources from the application processor are fed to the daisy-chained display setup and also to the left or right mirrors, all via the DP MST interface. Row 3: The same setup as in row 2 can be implemented via the MIPI DSI or embedded DP MST interfaces, depending on display size and power requirements.
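To give a feel for why several panels can share one daisy-chained port, here is a rough bandwidth budget check. It is a minimal sketch with illustrative resolutions and an assumed 4-lane HBR3 DisplayPort link; actual link rates, blanking overheads, and compression ratios depend on the implementation.

```python
# Rough MST bandwidth budget: the sum of the uncompressed pixel streams for the
# daisy-chained displays must fit within the usable DisplayPort link bandwidth.
# Display resolutions and the link configuration below are illustrative assumptions.

def stream_gbps(h, v, fps, bpp):
    """Raw pixel-stream bandwidth in Gbit/s (ignores blanking and packet overhead)."""
    return h * v * fps * bpp / 1e9

# Usable payload of a 4-lane HBR3 DisplayPort link (32.4 Gbps raw, 8b/10b coded).
usable_link_gbps = 32.4 * 0.8

displays = [
    ("center 4K", stream_gbps(3840, 2160, 60, 24)),
    ("cluster",   stream_gbps(1920,  720, 60, 24)),
    ("passenger", stream_gbps(2560, 1080, 60, 24)),
]

total = sum(bw for _, bw in displays)
for name, bw in displays:
    print(f"{name:12s} {bw:5.2f} Gbps")
print(f"total {total:.2f} Gbps vs. usable link {usable_link_gbps:.2f} Gbps "
      f"-> {'fits' if total <= usable_link_gbps else 'needs DSC or a lower refresh rate'}")
```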

An alternate use case is USB/DP. A single USB port can be used for silicon lifecycle management, sentry mode, test, debug, and firmware download. USB can be used to avoid the need for very large numbers of test pins, speed up test by exceeding GPIO test pin data rates, repeat manufacturing tests in-system and in-field, access PVT monitors, and debug.

Fig. 4: Examples of display connectivity in software-defined vehicles.

ISO/SAE 21434 automotive cybersecurity

ISO/SAE 21434 Automotive Cybersecurity is being adopted by industry leaders as mandated by the UNECE R155 regulation. Starting in July 2024, automotive OEMs must comply with the UNECE R155 automotive cybersecurity regulation for all new vehicles in Europe, Japan, and Korea.

Automotive suppliers must develop processes that meet the automotive cybersecurity requirements of ISO/SAE 21434, addressing the cybersecurity perspective in the engineering of electrical and electronic (E/E) systems. Adopting this methodology involves embracing a cybersecurity culture which includes developing security plans, setting security goals, conducting engineering reviews and implementing mitigation strategies.

The industry is expected to move towards enabling cybersecurity risk-managed products to mitigate the risks associated with advancement in connectivity for software-defined vehicles. As a result, automotive IP needs to be ready to support these advancements.

Synopsys ARC HS4xFS Processor IP has achieved ISO/SAE 21434 cybersecurity certification by SGS-TÜV Saar, meeting stringent automotive regulatory requirements designed to protect connected vehicles from malicious cyberattacks. In addition, Synopsys has achieved certification of its IP development process to the ISO/SAE 21434 standard to help ensure its IP products are developed with a security-first mindset through every phase of the product development lifecycle.

Conclusion

The transformation to software-defined vehicles marks a significant shift in the automotive industry, bringing together highly integrated systems and AI to create safer and more efficient vehicles while addressing sophisticated user needs and vendor serviceability. New trends in the automotive industry are presenting opportunities for innovations in ADAS and IVI SoC designs. Centralized zonal architecture, multi-die design, daisy-chained displays, and integration of ADAS/IVI functions in a single SoC are among the key trends the automotive industry is tracking. Synopsys is at the forefront of automotive SoC innovations with a portfolio of silicon-proven automotive IP for the highest levels of functional safety, security, quality, and reliability. The IP portfolio is developed and assessed specifically for ISO 26262 random hardware faults and systematic faults up to ASIL D. To minimize cybersecurity risks, Synopsys is developing IP products per the ISO/SAE 21434 standard to provide automotive SoC developers a safe, reliable, and future-proof solution.

The post Keeping Up With New ADAS And IVI SoC Trends appeared first on Semiconductor Engineering.

Overcoming Chiplet Integration Challenges With Adaptability

May 9, 2024, 09:06

Chiplets are exploding in popularity due to key benefits such as lower cost, lower power, higher performance and greater flexibility to meet specific market requirements. More importantly, chiplets can reduce time-to-market, thus decreasing time-to-revenue! Heterogeneous and modular SoC design can accelerate innovation and adaptation for many companies. What’s not to like about chiplets? Well, as chiplets come to fruition, we are starting to realize many of the complications of chiplet designs.

Fig. 1: Expanded chiplet system view.

The interface challenge

The primary concept of chiplets is the integration of ICs (integrated circuits) from multiple companies. However, many of these ICs were not designed with interoperation with other ICs fully in mind, partly because of the lack of interconnect standards around chiplets. Moreover, ICs have their own computational and bandwidth requirements. This is further complicated as competing interface standards vie for adoption, as shown in Table 1.

Standard | Throughput | Density | Max Delay
Advanced Interface Bus (Intel, AIB) | 2 Gbps | 504 Gbps/mm | 5 ns
Bandwidth Engine | 10.3 Gbps | N/A | 2.4 ns
BoW (Bunch of Wires) | 16 Gbps | 1280 Gbps/mm | 5 ns
HBM3 (JEDEC) | 4.8 Gbps | N/A | N/A
Infinity Fabric (AMD) | 10.6 Gbps | N/A | 9 ns
Lipincon (TSMC) | 2.8 Gbps | 536 Gbps/mm | 14 ns
Multi-Die I/O (Intel) | 5.4 Gbps | 1600 Gbps/mm | N/A
XSR/USR (Rambus) | 112 Gbps | N/A | N/A
UCIe | 32 Gbps | 1350 Gbps/mm | 2 ns

Table 1: Chiplet interconnect options.
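One practical way to read the density column is as "beachfront" bandwidth: how much aggregate throughput can cross a given length of die edge. The sketch below runs that calculation for a few entries from Table 1, assuming an illustrative 2 mm of available die edge; real designs also budget for bump pitch, lane redundancy, and routing constraints.

```python
# Aggregate die-to-die bandwidth = edge density (Gbps per mm of die edge)
# multiplied by the die-edge length dedicated to the interface.
# Density figures are taken from Table 1; the 2 mm edge length is an assumption.

densities_gbps_per_mm = {
    "AIB (Intel)":           504,
    "BoW":                  1280,
    "UCIe":                 1350,
    "Multi-Die I/O (Intel)": 1600,
}

edge_length_mm = 2.0  # assumed beachfront available for the interface

for name, density in densities_gbps_per_mm.items():
    total_gbps = density * edge_length_mm
    print(f"{name:22s} {total_gbps:7.0f} Gbps "
          f"(~{total_gbps / 8 / 1000:.2f} TB/s) over {edge_length_mm} mm of edge")
```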

Most chiplet interconnects are dominated by UCIe (Universal Chiplet Interconnect Express) and the unimaginatively named BoW (Bunch of Wires). UCIe introduced the 1.0 spec, and as with any first edition of a specification, it is inevitable that updates will follow. UCIe 1.1 fixes several holes and gaps in 1.0, addressing gray areas, missing definitions, ECNs, and more. It is very likely not the last update, as UCIe’s vision is to grow up the stack, adding additional protocol layers on top of the system layers.

Because of the newness and expected evolution of the UCIe and BoW protocols, designing them in is risky. Additionally, there will always be a place for multiple die-to-die interfaces beyond UCIe. Specific use cases and designs will inherently be matched to different metrics, leading many designs to fall back to proprietary interfaces.

As you can see, there are many choices, and many of them involve tradeoffs. Integrating a variety of these protocols into a chiplet would greatly benefit from adaptability via data/protocol adaptation, which can easily be enabled with embedded programmable logic, or eFPGA. A lightweight protocol shim implemented in eFPGA IP can not only reformat data but also buffer data to maximize internal processing. Finally, consider that data between ICs in a chiplet can be globally asynchronous, another task easily resolved with FIFO synchronizers implemented in eFPGA IP.
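As a conceptual illustration of such a shim, the behavioral Python sketch below reformats wide words from one interface into narrower words for another and buffers them in an elastic FIFO. The widths and framing are assumptions, and this is not RTL; it only shows the kind of width conversion and buffering an eFPGA-based shim would provide.

```python
# Behavioral sketch of a lightweight protocol shim: width conversion plus elastic
# buffering between two interfaces. This models the data path only; in an eFPGA it
# would be implemented as RTL with properly CDC-safe FIFOs.

from collections import deque

class WidthConvertingShim:
    def __init__(self, in_width_bits=128, out_width_bits=32, depth=64):
        assert in_width_bits % out_width_bits == 0
        self.ratio = in_width_bits // out_width_bits
        self.out_width = out_width_bits
        self.fifo = deque(maxlen=depth)   # elastic buffer between the two sides

    def push(self, word: int) -> bool:
        """Split one wide input word into narrow words and buffer them."""
        if len(self.fifo) + self.ratio > self.fifo.maxlen:
            return False                  # backpressure: no room this cycle
        mask = (1 << self.out_width) - 1
        for i in range(self.ratio):
            self.fifo.append((word >> (i * self.out_width)) & mask)
        return True

    def pop(self):
        """Consumer side drains one narrow word per call, if available."""
        return self.fifo.popleft() if self.fifo else None

shim = WidthConvertingShim()
shim.push(0x0123456789ABCDEF_FEDCBA9876543210)
print([hex(w) for w in (shim.pop(), shim.pop(), shim.pop(), shim.pop())])
```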

The security challenge

Beyond the interfaces, security is another emerging challenge. A few factors of chiplets must be cautiously considered:

  • Varying ICs from unknown and possibly disreputable manufacturers
  • ICs can contain internal IP from additional third-party sources
  • Each IC may receive and introduce external data into the system

Naturally, this begs for attestation and provenance to ensure vendor confidence. As such, root of trust generally starts with the supply chain and auditing all vendors. However, it only takes one failed component, the least secure component, to jeopardize the entire system.

Root of trust suddenly becomes an issue, and it raises another question: which IC, or ICs, in the chiplet manage the root of trust? As we’ve seen time and time again, security threats evolve at an alarming rate. But chiplets have an opportunity here. Again, embedded FPGAs have the flexible nature to adapt, thus thwarting these evolving security threats. eFPGA IP can also physically disable unused interfaces, minimizing the attack surface.

Adaptable cryptography cores can perform a variety of tasks with high performance in eFPGA IP. These tasks include authentication/digital signing, key generation, encapsulation/decapsulation, random number generation, and much more. Further, post-quantum security cores that run very efficiently on eFPGA are becoming available. Figure 2 shows an ML-KEM (Kyber) key encapsulation module from Xiphera that fits into only four Flex Logix EFLX tiles, efficiently packed at 98% utilization with a throughput of over 2 Gbps.

Fig. 2: ML-KEM IP core from Xiphera implemented on Flex Logix EFLX eFPGA IP.
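To show where such a core sits in a system flow, here is a toy sketch of the key-encapsulation call pattern (key generation, encapsulation, decapsulation). It is deliberately not real cryptography; the stubs below only mimic the data shapes so the flow is visible. A real design would delegate these calls to a hardware ML-KEM core such as the one described above.

```python
# Toy illustration of the KEM call flow that an eFPGA-hosted ML-KEM core would
# accelerate. WARNING: the "crypto" below is a placeholder built on random bytes,
# purely to show the interface shape and data sizes; it provides no security.

import os

def kem_keygen():
    secret_key = os.urandom(32)          # placeholder for the real private key
    public_key = os.urandom(32)          # placeholder for the real public key
    return public_key, secret_key

def kem_encapsulate(public_key):
    shared_secret = os.urandom(32)       # a real KEM derives this using public_key
    ciphertext = shared_secret           # placeholder: a real ciphertext hides the secret
    return ciphertext, shared_secret

def kem_decapsulate(ciphertext, secret_key):
    return ciphertext                    # a real KEM recovers the secret using secret_key

# Typical chiplet use: die A generates keys, die B encapsulates a session secret,
# die A decapsulates it; both dies then share a key for authenticated, encrypted traffic.
pk, sk = kem_keygen()
ct, secret_b = kem_encapsulate(pk)
secret_a = kem_decapsulate(ct, sk)
assert secret_a == secret_b
print("shared session secret established:", secret_a.hex()[:16], "...")
```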

Managing all data communication within a chiplet seems daunting; however, it is feasible. Designers have the choice of implementing eFPGA on every IC in the chiplet for adaptable data signing, or standalone on the interposer, where system designers can define a secure enclave in which all data is authenticated and encrypted by an independent IC with eFPGA. eFPGA can also process streaming data at a very high rate, and in most cases can keep up with line rate, as seen with programmable data planes in SmartNICs.

eFPGA can add another critical security benefit. Every instance of eFPGA in the chiplet offers the ability to obfuscate critical algorithms, cryptography and protocols. This enables manufacturers to protect design secrets by not only programming these features in a controlled environment, but also adapting these as threats evolve.

The validation problem

Again, the absence of fully defined industry standards presents integration challenges. Conventional methods of qualification, testing, and validation become increasingly more complex. Yet this becomes another opportunity for eFPGA IP. It can be configured as an in-system diagnostic tool that provides testing, debugging and observability. Not only during IC bring up, but also during run time – eliminating finger pointing between independent companies.

The reconfigurability solution

While we’ve discussed a few different chiplet issues and solutions with adaptable eFPGA, it is important to realize that a single instance of this IP can perform all these functions in a chiplet, because eFPGA IP is completely reconfigurable. It can be time-sliced and configured differently during specific operational phases of the chiplet. As mentioned in the examples above, during IC bring-up it can provide insightful debug visibility into the system. During boot, it can enable secure boot and attested firmware updates to all ICs in the chiplet. During run time, it can perform cryptographic functions as well as independently manage a secure enclave environment. eFPGA is also perfect for any other software acceleration your applications need, as its heavily parallel and pipelined nature suits complex signal processing tasks. Lastly, during an RMA process it can also investigate and determine system failures. This is just a short list of the features eFPGA IP can enable in a chiplet.

Customizable for the perfect solution

Flex Logix EFLX IP delivers excellent PPA (Power, Performance and Area) and is available on the most advanced nodes, including Intel 18A and TSMC 7nm and 5nm. Furthermore, Flex Logix eFPGA IP is scalable – enabling you to choose the best balance of programmable logic, embedded memory and signal processing resources.

Fig. 3: Scalable Flex Logix eFPGA IP.

Want to learn more about Flex Logix IP? Contact us at [email protected] or visit our website https://flex-logix.com.

The post Overcoming Chiplet Integration Challenges With Adaptability appeared first on Semiconductor Engineering.

UCIe And Automotive Electronics: Pioneering The Chiplet Revolution

May 9, 2024, 09:02

The automotive industry stands at the brink of a profound transformation fueled by the relentless march of technological innovation. Gone are the days of the traditional, one-size-fits-all system-on-chip (SoC) design framework. Today, we are witnessing a paradigm shift towards a more modular approach that utilizes diverse chiplets, each optimized for specific functionalities. This evolution promises to enhance the automotive system’s flexibility and efficiency and revolutionize how vehicles are designed, built, and operated.

At the heart of this transformation lies the Universal Chiplet Interconnect Express (UCIe), a groundbreaking standard introduced in March 2022. UCIe is designed to drastically simplify the integration process across different chiplets from various manufacturers by standardizing die-to-die connections. This initiative caters to a critical need within the industry for a modular and scalable semiconductor architecture, thus setting the stage for unparalleled innovation in automotive electronics.

Understanding the core benefit of UCIe

UCIe isn’t merely about facilitating smoother communication between chiplets; it’s a visionary standard that ensures interoperability, reduces design complexity and costs, and, crucially, supports the seamless incorporation of chiplets into cohesive packages. Developed through the collaborative efforts of the UCIe consortium, this standard enables the use of chiplets from different vendors, thereby fostering a highly customizable and scalable solution. This particularly benefits the automotive sector, where the demand for high performance, reliability, and efficiency is paramount.

The pivotal role of UCIe in automotive electronics

Implementing UCIe within automotive electronics unlocks a plethora of advantages. Foremost among these is the ability to design more compact, powerful, and energy-efficient electronic systems. UCIe offers savings in energy per bit compared to other serial or parallel interfaces.
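A quick back-of-the-envelope calculation shows why energy per bit matters at these bandwidths. The pJ/bit figures below are illustrative assumptions chosen only for comparison; they are not values from the UCIe specification or the article.

```python
# Link power = energy per bit * aggregate bandwidth.
# The energy-per-bit values are assumed, order-of-magnitude figures used only to
# illustrate the comparison; consult vendor and specification data for real numbers.

bandwidth_gbps = 1000  # assume 1 Tbps of aggregate die-to-die traffic

interfaces_pj_per_bit = {
    "short-reach parallel D2D, UCIe-style (assumed)": 0.5,
    "long-reach SerDes (assumed)":                    5.0,
}

for name, pj_per_bit in interfaces_pj_per_bit.items():
    power_w = pj_per_bit * 1e-12 * bandwidth_gbps * 1e9
    print(f"{name:48s} ~{power_w:.2f} W at {bandwidth_gbps} Gbps")
```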

Given the increasing complexity and functionality of modern vehicles — which can be likened to data centers on wheels — the modular design principle of UCIe is invaluable. It facilitates the addition of new functionalities and ensures that automotive electronics can seamlessly adapt to and incorporate emerging technologies and standards.

Secondly, the adoption of UCIe marks a significant stride toward sustainable electronic design within the automotive industry. By promoting the reuse and integration of chiplets across different platforms and vehicle models, UCIe significantly mitigates electronic waste and streamlines the lifecycle management of electronic components. This benefits manufacturers in terms of cost and efficiency and aligns with broader industry trends focused on sustainability and environmental stewardship.

Offerings and solutions

Cadence offers a range of intellectual property (IP), electronic design automation (EDA) tools, and 3D integrated circuit (3D-IC) solutions tailored for chiplets.

Since the introduction of UCIe, Cadence has engaged with more than 100 customers and supports automotive solutions through UCIe and its working groups. The automotive implementation features protect the mainband, and Cadence also adds protection for the sideband, which is particularly important for the automotive industry. UCIe is being developed across multiple foundries, process nodes, and both standard and advanced packaging types.

Cadence has a history of chiplet experience with an interface IP portfolio that includes die-to-die interconnects such as the UCIe IP and the proprietary 40G UltraLink D2D PHY, enabling SoC providers to deliver more customized solutions that offer higher performance and yields while also shortening development cycles and reducing costs through greater IP reuse. Cadence’s UCIe PHY solutions facilitate high-speed communication and interoperability among chiplets. Additionally, the Cadence Simulation VIP for UCIe ensures that systems using UCIe are reliable and performant, addressing one of the industry’s key concerns.

The Cadence UCIe PHY is a high-bandwidth, low-power, and low-latency die-to-die solution that enables multi-die system-in-package integration for high-performance compute, AI/ML, 5G, automotive, and networking applications. The UCIe physical layer includes link initialization, training, power management states, lane mapping, lane reversal, and scrambling. The UCIe controller includes the die-to-die adapter layer and the protocol layer. The adapter layer ensures reliable transfer through link state management and negotiation of protocol and flit-format parameters. The UCIe architecture supports multiple standard protocols such as PCIe, CXL, and streaming raw mode.

For interface IP, we usually create various test chips using different process technologies and measure the silicon in the lab to prove that our controller and PHY IPs fully comply with the standard. Hence, we designed a test chip with seven chiplets connected via UCIe over different interconnect distances.

The Cadence Verification IP (VIP) for Universal Chiplet Interconnect Express (UCIe) is designed for easy integration in test benches at the IP, system-on-chip (SoC), and system level. The VIP for UCIe runs on all simulators and supports SystemVerilog and the widely adopted Universal Verification Methodology (UVM). This enables verification teams to reduce the time spent on environment development and redirect it to cover a larger verification space, accelerate verification closure, and ensure end-product quality. With a layered architecture and powerful callback mechanism, verification engineers can verify UCIe features at each functional layer (PHY, D2D, Protocol) and create highly targeted designs while using the latest design methodologies for random testing to cover a larger verification space. The VIP for UCIe can be used as a standalone stack or layered with PCIe VIP.

Challenges and future prospects

The transition to chiplet-based designs and the widespread adoption of the UCIe standard are not without their challenges. Critical concerns include functional safety, reliability, quality, security, thermal management, and mechanical stress. Hence, ensuring dependable high-speed interconnects between chiplets necessitates ongoing research and development. Successful implementation also hinges on industry-wide collaboration and establishing a robust chiplet ecosystem that supports the UCIe standard.

UCIe stands as a pioneering innovation poised to redefine automotive electronics. By offering a scalable and flexible framework, UCIe promises to enhance in-vehicle electronic systems’ performance, efficiency, and versatility and marks a significant milestone in the automotive industry’s evolution. As we look to the future, the impact of UCIe in the automotive sector is poised to be profound, turning vehicles into dynamic platforms that continuously adapt to technological advancements and user needs. The realization of modular, customizable designs underscored by efficiency, performance, and innovation heralds the dawn of a new era in semiconductor development, one where the possibilities are as boundless as the technological horizon.

The post UCIe And Automotive Electronics: Pioneering The Chiplet Revolution appeared first on Semiconductor Engineering.

Accellera Preps New Standard For Clock-Domain Crossing

February 29, 2024, 09:06

Part of the hierarchical development flow is about to get a lot simpler, thanks to a new standard being created by Accellera. What is less clear is how long it will take before users see any benefit.

At the register transfer level (RTL), when a data signal passes between two flip-flops, it initially is assumed that clocks are perfect. After clock-tree synthesis and place-and-route are performed, there can be considerable timing skew between the clock edges arriving at those adjacent flops. That makes timing sign-off difficult, but at least the clocks are still synchronous.

But if the clocks come from different sources, are at different frequencies, or a design boundary exists between the flip flops — which would happen with the integration of IP blocks — it’s impossible to guarantee that no clock edges will arrive when the data is unstable. That can cause the output to become unknown for a period of time. This phenomenon, known as metastability, cannot be eliminated, and the verification of those boundaries is known as clock-domain crossing (CDC) analysis.
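The severity of metastability is usually quantified as a synchronizer's mean time between failures (MTBF), which grows exponentially with the settling time allowed before the next capturing edge. A minimal sketch of that standard calculation follows; the device parameters are assumed, illustrative values rather than figures for any particular process.

```python
# Standard synchronizer MTBF model: MTBF = exp(t_settle / tau) / (Tw * f_clk * f_data)
# tau (metastability time constant) and Tw (metastability window) are flip-flop
# technology parameters; the values below are illustrative assumptions.

import math

def synchronizer_mtbf_seconds(t_settle_s, tau_s, tw_s, f_clk_hz, f_data_hz):
    return math.exp(t_settle_s / tau_s) / (tw_s * f_clk_hz * f_data_hz)

tau = 20e-12        # assumed metastability time constant
tw = 50e-12         # assumed metastability window
f_clk = 1e9         # 1 GHz destination clock
f_data = 100e6      # 100 MHz average toggle rate on the crossing signal

for stages, settle in (("single flop", 0.5e-9), ("two-flop synchronizer", 1.5e-9)):
    mtbf = synchronizer_mtbf_seconds(settle, tau, tw, f_clk, f_data)
    print(f"{stages:22s} MTBF ~ {mtbf:.3e} s (~{mtbf / 3.15e7:.3e} years)")
```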

Special care is required on those boundaries. “You have to compensate for metastability by ensuring that the CDC crossings follow a specific set of logic design principles,” says Prakash Narain, president and CEO of Real Intent. “The general process in use today follows a hierarchical approach and requires that the clock-domain crossing internal to an IP is protected and safe. At the interface of the IP, where the system connects with the IP, two different teams share the problem. An IP provider may recommend an integration methodology, which often is captured in an abstraction model. That abstraction model enables the integration boundary to be verified while the internals of it will not be checked for CDC. That has already been verified.”

In the past, those abstract models differentiated the CDC solutions from various vendors. That’s no longer the case. Every IP and tool vendor has a different format, making it costly for everyone. “I don’t know that there’s really anything new or differentiating coming down the pipe for hierarchical modeling,” says Kevin Campbell, technical product manager at Siemens Digital Industries Software. “The creation of the standard will basically deliver much faster results with no loss of quality. I don’t know how much more you can differentiate in that space other than just with performance increases.”

While this has been a problem for the whole industry for quite some time, Intel decided it was time for a solution. The company pushed Accellera to take up the issue, and helped facilitate the creation of the standard by chairing the committee. “I’m going to describe three methods of building a product,” says Iredamola “Dammy” Olopade, chair of the Accellera working group, and a principal engineer at Intel. “Method number one is where you build everything in a monolithic fashion. You own every line of code, you know the architecture, you use the tool of your choice. That is a thing of the past. The second method uses some IP. It leverages reuse and enables the quick turnaround of new SoCs. There used to be a time when all IPs came from the same source, and those were integrating into a product. You could agree upon the tools. We are quickly moving to a world where I need to source IPs wherever I can get them. They don’t use the same tools as I do. In that world, common standards are critical to integrating quickly.”

In some cases, there is a hierarchy of IP. “Clock-domain crossings are a central part of our business,” says Frank Schirrmeister, vice president of solutions and business development at Arteris. “A network-on-chip (NoC) can be considered as ‘CDC central’ because most blocks connected to the NoC have different clocks. Also, our SoC integration tools see all of the blocks to be integrated, and those touch various clock domains and therefore need to deal with the CDC code that is inserted.”

This whole thing can become very messy. “While every solution supports hierarchical modeling, every tool has its own model solution and its own model representation,” says Siemens’ Campbell. “Vendors, or users, are stuck with a CDC solution, because the models were created within a certain solution. There’s no real transportability between any of the hierarchical modeling solutions unless they want to go regenerate models for another solution.”

That creates a lot of extra work. “Today, when dealing with customer CDC issues, we have to consider the customer’s specific environment, and for CDC, a potential mix of in-house flows and commercial tools from various vendors,” says Arteris’ Schirrmeister. “The compatibility matrix becomes very complex, very fast. If adopted, the new Accellera CDC standard bears the potential to make it easier for IP vendors, like us, to ensure compatibility and reduce the effort required to validate IP across multiple customer toolsets. The intent, as specified in the requirements is that ‘every IP provider can run its tool of choice to verify and produce collateral and generate the standard format for SoCs that use a different tool.'”

Everyone benefits. “IP providers will not need to provide extra documentation of clock domains for the SoC integrator to use in their CDC analysis,” says Ahmed Nasr, digital design manager at Mixel. “The standard CDC attributes generated by the EDA tool will be self-contained.”

The use model is relatively simple. “An IP developer signs off on CDC and then exports the abstract model,” says Real Intent’s Narain. “It is likely they will write this out in both the Accellera format and the native format to provide backward compatibility. At the next level of hierarchy, you read in the abstract model instead of reading in the full view of the design. They have various views of the IP, including the CDC view of the IP, which today is on the basis of whatever tool they use for CDC sign-off.”

The potential is significant. “If done right and adopted, the industry may arrive at a common language to describe CDC aspects that can streamline the validation process across various tools and environments used by different users,” says Schirrmeister. “As a result, companies will be able to integrate and validate IP more efficiently than before, accelerating development cycles and reducing the complexity associated with SoC integration.”

The standard
Intel’s Olopade describes the approach that was taken during the creation of the standard. “You take the most complex situations you are likely to find, you box them, and you co-design them in order to reduce the risk of bugs,” he said. “The boundaries you create are supposed to be simple boundaries. We took that concept, and we brought it into our definition to say the following: ‘We will look at all kinds of crossings, we will figure out the simple common uses, and we will cover that first.’ That is expected to cover 95% to 98% of the community. We are not trying to handle 700 different exceptions. It is common. It is simple. It is what guarantees production quality, not just from a CDC standpoint, but just from a divide-and-conquer standpoint.”

That was the starting point. “Then we added elements to our design document that says, ‘This is how we will evaluate complexity, and this is how we’ll determine what we cover first,'” he says. “We broke things down into three steps. Step one is clock-domain crossing. Everyone suffers from this problem. Step two is reset-domain crossing (RDC). As low power is getting into more designs, there are a lot more reset domains, and there is risk between these reset domains. Some companies care, but many companies don’t because they are not in a power-aware environment. It became a secondary consideration. Beyond the basic CDC in phase one, and RDC in phase two, all other interesting, small usage complexities will be handled in phase three as extensions to the standard. We are not going to get bogged down supporting everything under the sun.”

Within the standards group there are two sub-groups — a mapping team and a format team. Common standards, such as AMBA, UCIe, and PCIe have been looked at to make sure that these are fully covered by the standard. That means that the concepts should be useful for future markets.

“The concepts contained in the standard are extensible to hardened chiplets,” says Mixel’s Nasr. “By providing an accurate standard CDC view for the chiplet, it will enable integration with other chiplets.”

Some of those issues have yet to be fully explored. “The standard’s current documentation primarily focuses on clock-domain crossing within an SoC itself,” says Schirrmeister. “Its direct applicability to the area of chiplets would depend on further developments. The interfaces between fully hardened IP blocks on chiplets would communicate through standard interfaces like UCIe, BoW, or XSR, so the synchronization issues between chiplets on substrates would appear to be elevated to the protocol levels.”

Reset-domain crossings have yet to appear in the standard. “The genesis of CDC is asynchronous clocks,” says Narain. “But the genesis for reset-domain crossing is asynchronous resets. While the destination is due to the clock, the source of the problem is somewhere else. And as a result, the nature of the problem, the methodology that people use to manage that problem, are very different. The kind of information that you need to retain, and the kind of information that you can throw away, is different for every problem. Hence, abstractions are actually very customized for the application.”

Does the standard cover enough ground? That is part of the purpose of the review period that was used to collect information. “I can see some room for future improvement — for example, making some attributes mandatory like logic, associated_clocks, clock_period for clock ports,” says Nasr. “Another proposed improvement is adding reconvergence information, to be able to detect reconverging outputs of parallel synchronizers.”
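Since the format is still under review, its exact syntax is not settled, but an entry using the attribute names Nasr mentions (logic, associated_clocks, clock_period, plus reconvergence information) might conceptually look like the sketch below. This is a purely hypothetical illustration of the kind of collateral involved, not the Accellera format.

```python
# Hypothetical sketch of the per-port CDC collateral an abstract model could carry.
# Attribute names follow those discussed in the article; the structure itself is
# invented for illustration and is NOT the Accellera format.

abstract_cdc_model = {
    "ip": "sensor_bridge",
    "clock_ports": {
        "clk_core": {"clock_period": "1.250ns", "associated_clocks": ["clk_core"]},
        "clk_io":   {"clock_period": "3.333ns", "associated_clocks": ["clk_io"]},
    },
    "data_ports": {
        "rx_valid": {
            "associated_clocks": ["clk_io"],
            "logic": "2ff_synchronizer",     # synchronization scheme on the boundary
        },
        "rx_data": {
            "associated_clocks": ["clk_io"],
            "logic": "gray_coded_fifo",
            "reconvergence": ["rx_valid"],   # outputs that must not reconverge unsafely
        },
    },
}

# An SoC-level CDC tool would read entries like these instead of the IP's full RTL.
for port, attrs in abstract_cdc_model["data_ports"].items():
    print(port, "->", attrs)
```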

The impact of all of this, if realized, is enormous. “If you truly run a collaborative, inclusive, development cycle, two things will happen,” says Olopade. “One, you are going to be able to find multiple ways to solve each problem. You need to understand the pros and cons against the real problems you are trying to solve and agree on the best way we should do it together. For each of those, we record the options, the pros and cons, and the reason one was selected. In a public review, those that couldn’t be part of that discussion get to weigh in. We weigh it against what they are suggesting versus why did we choose it. In the cases where it is part of what we addressed, and we justified it, we just respond, and we do not make a change. If you’re truly inclusive, you do allow that feedback to cause you to change your mind. We received feedback on about three items that we had debated, where the feedback challenged the decisions and got us to rehash things.”

The big challenge
Still, the creation of a standard is just the first step. Unless a standard is fully adopted, its value becomes diminished. “It’s a commendable objective and a worthy endeavor,” says Schirrmeister. “It will make interoperability easier and eventually allow us, and the whole industry, to reduce the compatibility matrix we maintain to deal with vendor tools individually. It all will depend on adoption by the vendors, though.”

It is off to a good start. “As with any standard, good intentions sometimes get severed by reality,” says Campbell. “There has been significant collaboration and agreements on how the standard is being pushed forward. We did not see self-submarining, or some parties playing nice just to see what’s going on but not really supporting it. This does seem like good collaboration and good decision making across the board.”

Implementation is another hurdle. “Will it actually provide the benefit that it is supposed to provide?” asks Narain. “That will depend upon how completely and how quickly EDA tool vendors provide support for the standard. From our perception, the engineering challenge for implementing this is not that large. When this is standardized, we will provide support for it as soon as we can.”

Even then, adoption isn’t a slam dunk. “There are short- and long-term problems,” warns Campbell. “IP vendors already have to support multiple formats, but now you have to add Accellera on top of that. There’s going to be some pain both for the IP vendors and for EDA vendors. We are going to have to be backward-compatible and some programs go on for decades. There’s a chance that some of these models will be around for a very long time. That’s the short-term pain. But the biggest hurdle to overcome for a third-party IP vendor, and EDA vendor, is quality assurance. The whole point of a hierarchical development methodology is faster CDC closure with no loss in quality. The QA load here is going to be big, because no customer is going to want to take the risk if they’ve got a solution that is already working well.”

Some of those issues and fears are expected to be addressed at the upcoming DVCon conference. “We will be providing a tutorial on CDC,” says Olopade. “The first 30 minutes covers the basics of CDC for those who haven’t been doing this for the last 10 years. The next hour will talk about the Accellera solution. It will concentrate on those topics which were hotly debated, and we need to help people understand, or carry people along with what we recommend. Then it may become more acceptable and more adoptive.”

Related Reading
Design And Verification Methodologies Breaking Down
As chips become more complex, existing tools and methodologies are stretched to the breaking point.

The post Accellera Preps New Standard For Clock-Domain Crossing appeared first on Semiconductor Engineering.

UCIe Goes Back To The Drawing Board

Od: Gregory Haley
22. Únor 2024 v 09:08

The integration of multiple dies within a single package increasingly is viewed as the next evolution for extending Moore’s Law, but it also presents myriad challenges — particularly in achieving a universally accepted standard integrating plug-and-play chiplets from different vendors.

“In some respects, people are already doing this,” says Debendra Das Sharma, Intel senior fellow and chair of the UCIe Consortium. “They’re putting multiple dies on the same package, and we’ve been doing it for decades back to what was multi-chip modules (MCMs). And if you look in our mainstream CPUs today, they’re all multiple chips on the same package.”

Combining more than one chip in a package becomes a lot more complicated, however, when those chips have different functions or come from different vendors or foundries. That’s where a standard like UCIe becomes necessary.

“For most of the multi-chip products that are in the market, the same company is designing and providing the multiple dies, so they know exactly how they talk to each other and how to divide or partition the chip,” says Vik Chaudhry, senior director of product marketing and business development at Amkor. “That makes it a little easier to understand how one part talks to the other. What UCIe is trying to do is standardize that interconnect between multiple vendors.”

While other protocols like Bunch of Wires (BoW) have made significant strides in recent years and are still being developed, UCIe stands out for its backing by many of the largest chip manufacturers and its support for all major packaging technologies, including organic substrates, silicon interposers, and RDL fan-outs.

Fig. 1: Chiplet diagram with UCIe interconnect highlighted. Source: Keysight

But the move toward UCIe compatibility necessitates more than a mere afterthought in the chip creation process. It requires a foundational shift back to the drawing board, where compatibility must be conceived as an integral component of the chip, not retrofitted as an expedient solution. As this standard evolves, it has become increasingly apparent that for chiplets to truly embrace UCIe, the blueprint for their design must be reimagined from the ground up.

“UCIe is a layout,” says Chaudhry. “It’s designed. But keep in mind, these chiplets can be from different fab nodes. One could be 5nm, another could be 3nm, and a third could be 14nm. Somehow you have to connect these dies together. You need to be compatible in terms of how much space you have to run the routes, and that’s what UCIe is addressing.”

The transition to UCIe is not merely about different vendors adapting to a new standard. It requires a willingness among manufacturers throughout the industry to align their design and production processes with a common protocol that is still, in many respects, a work in progress.

While it is commonly assumed that chiplets plus advanced packaging represent the next evolution for extending Moore’s Law, the lack of a fully defined standard, coupled with the uncertainty surrounding the integration with existing technologies, means investing in new designs for UCIe is currently limited to the largest players in the market.

“Anytime you put multiple dies on a substrate or interposer, it’s challenging,” adds Chaudhry. “As we are seeing AI come into the picture, we are seeing a lot of vendors putting multiple dies on a chip, and not just 3 or 4, but 8, 10, or 12 dies. The complexity exponentially grows as you have more and more dies on the same interposer or substrate. You also have to test everything in between, and that increases the complexity and cost. That’s a huge challenge for anybody, and right now only a few companies in the world are capable of committing those kinds of resources and those kinds of expenses to put a line together.”

Moreover, the adoption of UCIe still must overcome significant hurdles in terms of scalability, compatibility with existing systems, and ensuring that the cost implications do not outweigh the benefits.

The chiplet evolution
Large chipmakers have been constrained by the size of the reticle field for at least the last several process nodes, which sharply limited the number of features that could be crammed onto a planar SoC. Today, with node shrinks becoming more costly and challenging, the best solution available is to decompose the SoC into individual blocks, or chiplets.

“Once the dies become really big, you’re up against the reticle limit,” says Intel’s Das Sharma. “That’s where you will see a lot of people deploying chiplets. You’re basically having multiple sets of chips being packaged together to deliver a certain set of functionality.”

Take, for instance, the leap to 50 Tb per second switches that are challenging the limits of reticle size. There’s a growing need to dissect and distribute the functionality of these chips across multiple components. Whether it’s the I/O, memory, or SRAM, the key lies in strategically breaking down the SoC into smaller units. This not only makes the manufacturing process more feasible, but also opens doors to more innovative and efficient design architectures.

It also provides some immediate benefits. Smaller dies yield better than larger ones, which is why in 2012 Xilinx split its 28nm FPGA into four different dies, connected through an interposer. It also provides room to grow, because the individual chiplets are still well below the reticle limit.
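The yield advantage can be made concrete with the standard Poisson defect-density model, which is not from this article but is the usual back-of-the-envelope tool. A minimal sketch, with an assumed, illustrative defect density:

```python
import math

def poisson_yield(die_area_cm2: float, defect_density_per_cm2: float) -> float:
    """Classic Poisson yield model: Y = exp(-A * D0)."""
    return math.exp(-die_area_cm2 * defect_density_per_cm2)

D0 = 0.2            # assumed defect density, defects/cm^2 (illustrative)
big_die = 6.0       # one near-reticle-size die, cm^2
chiplet = 1.5       # the same logic split into four chiplets, cm^2 each

print(poisson_yield(big_die, D0))   # ~0.30 -> ~30% of the large dies are good
print(poisson_yield(chiplet, D0))   # ~0.74 -> ~74% of each chiplet is good
# Because chiplets are tested and only known-good dies are assembled,
# far less silicon is scrapped per defect than with the monolithic die.
```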

But all of the early implementations were homogeneous. They were all developed by the same vendor using the same process technology. A big benefit of advanced packaging is the ability to combine heterogeneous chiplets in the same package, allowing analog circuits and less-critical features to be developed at whatever process node makes sense. This is the challenge facing large chipmakers, foundries, and OSATs today, and it’s one that has not yet been fully solved.

Nevertheless, the chip industry agrees on one thing. There needs to be a common way to connect all of these chiplets together, and this is where UCIe fits into the picture.

The UCIe standard
Achieving a consensus on the electrical characteristics that underpin the UCIe is akin to orchestrating a symphony with a multitude of instruments, each with its own acoustic signature. Ensuring that chiplets from different corners of the industry can connect and communicate efficiently necessitates bridging gaps in voltage levels, signal timing, and power distribution.

In March 2022, the UCIe Consortium released UCIe 1.0, which included specifications for a standardized physical die-to-die interface designed to facilitate seamless communication between chiplets, regardless of where they were manufactured or by whom. The specifications encompassed key aspects, such as electrical properties, physical dimensions, and protocols necessary for ensuring compatibility and efficient data transfer between diverse chip components.

“On advanced packages at 45 microns, the numbers are pretty stellar,” says Das Sharma. “You have 188 gigabytes per second per square millimeter as a starting point, up to 1.35 terabytes per second per square millimeter. People will have a hard time even absorbing that kind of bandwidth and processing it.”
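Those density figures translate directly into floorplan budget. A rough calculation, assuming bandwidth scales linearly with the quoted per-square-millimeter figures (a simplification, not a spec guarantee):

```python
def phy_area_mm2(target_gb_per_s: float, density_gb_per_s_per_mm2: float) -> float:
    """Die area needed for a target aggregate die-to-die bandwidth,
    assuming linear scaling with the quoted density figures."""
    return target_gb_per_s / density_gb_per_s_per_mm2

# 1 TB/s of die-to-die bandwidth at the two quoted end points:
print(phy_area_mm2(1000, 188))    # ~5.3 mm^2 at 188 GB/s per mm^2
print(phy_area_mm2(1000, 1350))   # ~0.74 mm^2 at 1.35 TB/s per mm^2
```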

UCIe 1.0 uses a layered protocol approach. The physical layer underpins the protocol stack, dedicated to defining and managing electrical signaling, such as clock synchronization and link training, while also incorporating sideband communication channels essential for non-data interactions between chiplets.

At the heart of UCIe’s mechanics is the Die-to-Die (D2D) adapter. This interface acts as the gatekeeper, managing link state and negotiating parameters between chiplets, which is crucial for establishing reliable chiplet communication. It optionally safeguards data integrity through mechanisms such as cyclic redundancy check (CRC) and link-level retry. This not only ensures accuracy in high-speed data transfer, but also aligns different chiplet protocols by providing an arbitration system that enables multiple chips to interact efficiently.
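To make the adapter’s role more tangible, here is a toy sketch of CRC protection plus link-level retry. The flit format, CRC polynomial, and buffer depth are placeholders and do not follow the actual UCIe flit definitions:

```python
import zlib
from collections import deque

class D2DAdapterSketch:
    """Toy die-to-die adapter: CRC-protect each flit and replay from a
    retry buffer on a NAK. Conceptual only; real UCIe flit formats,
    CRC polynomials, and sequencing rules differ."""

    def __init__(self):
        self.retry_buffer = deque(maxlen=64)   # assumed replay depth
        self.seq = 0

    def send_flit(self, payload: bytes):
        flit = (self.seq, payload, zlib.crc32(payload))
        self.retry_buffer.append(flit)
        self.seq += 1
        return flit

    @staticmethod
    def check_flit(flit):
        seq, payload, crc = flit
        # Receiver recomputes the CRC; a mismatch triggers a replay request.
        return ("ACK", seq) if zlib.crc32(payload) == crc else ("NAK", seq)

    def replay_from(self, seq: int):
        return [f for f in self.retry_buffer if f[0] >= seq]

tx = D2DAdapterSketch()
flit = tx.send_flit(b"example payload")
print(tx.check_flit(flit))   # ('ACK', 0) when the CRC matches
```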

“UCIe is pretty flexible in that way,” says Chaudhry. “It supports your PCIe protocols, CXL protocol, or streaming, so you can decide which protocol you want to support. And it has different data rates that it supports. It’s the lowest common denominator that everybody will support. If you’re on a 3nm process, you can support a much higher data rate, but if the other chiplet is at a different process node, then both the parts will support the basic lowest common denominator of the spec, and then you can talk on that.”
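The “lowest common denominator” behavior Chaudhry describes is essentially a capability intersection during link bring-up. A minimal sketch; the rate values are illustrative, not a normative UCIe capability list:

```python
def negotiate_rate(rates_a: set, rates_b: set, base_rate: int = 4) -> int:
    """Pick the fastest per-lane data rate (GT/s) both chiplets advertise,
    falling back to a mandatory base rate. Values are illustrative."""
    common = rates_a & rates_b
    return max(common) if common else base_rate

# A 3nm die may advertise the full range; an older-node die, less:
print(negotiate_rate({4, 8, 16, 32}, {4, 8, 16}))   # -> 16
```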

UCIe also incorporates strategies to mitigate interconnect defects, such as stuck-at faults and signal discontinuities. Stipulations within UCIe include the implementation of auxiliary pathways, furnishing a means to maintain connectivity if the primary lanes fail. This redundancy helps sustain system functionality by providing avenues for fault tolerance and repair.
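Conceptually, the repair mechanism amounts to remapping logical lanes onto whichever physical lanes remain healthy, with spares absorbing the failures. A sketch under those assumptions; the lane counts and repair policy here are illustrative, not the spec’s:

```python
def remap_lanes(num_logical: int, physical_ok: list) -> dict:
    """Map logical lanes onto the first healthy physical lanes, mimicking
    spare-lane repair. Purely conceptual; the real repair flow is defined
    by the UCIe PHY, not by software like this."""
    healthy = [i for i, ok in enumerate(physical_ok) if ok]
    if len(healthy) < num_logical:
        raise RuntimeError("not enough healthy lanes to repair the link")
    return dict(zip(range(num_logical), healthy))

# 16 logical lanes over 18 physical lanes, with physical lane 5 stuck-at:
status = [True] * 18
status[5] = False
print(remap_lanes(16, status))   # lane 5 is skipped; a spare takes over
```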

UCIe also embraces existing standards such as PCI Express (PCIe) and Compute Express Link (CXL) natively, ensuring a broad resonance across the industry by capitalizing on these well-established protocols. The layered approach of UCIe also encompasses comprehensive usage models.

In August 2023, the consortium published UCIe version 1.1, extending reliability mechanisms to more protocols and supporting additional usage models. These enhancements are not merely incremental. They are geared towards pivotal segments such as automotive, which is gravitating toward chiplets.

One key area where the evolution from UCIe 1.0 to 1.1 becomes evident is in the standard’s preventive monitoring features. UCIe 1.1 expands the protocol with new registers designed to capture detailed eye margin information, covering both width and height, which provides standardized reporting formats and proactive link health monitoring. Rather than reinventing the wheel, UCIe 1.1 leverages the existing periodic parity Flit injection mechanism from version 1.0, enhancing error detection and reporting capabilities through a new error log register. That, in turn, allows for improved assessment of link repair necessities. UCIe 1.1 also offers enhancements for compliance testing.
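As a rough illustration of how such monitoring might be consumed, the policy below retrains or repairs a link when margins shrink or logged errors accumulate. The register values and thresholds are hypothetical, not taken from the UCIe 1.1 specification:

```python
def assess_link_health(eye_width_ui: float, eye_height_mv: float,
                       parity_errors: int,
                       min_width_ui: float = 0.4,
                       min_height_mv: float = 40.0,
                       max_errors: int = 3) -> str:
    """Toy health policy driven by eye-margin readouts and an error log.
    Thresholds are assumptions for illustration only."""
    if eye_width_ui < min_width_ui or eye_height_mv < min_height_mv:
        return "schedule_lane_repair"
    if parity_errors > max_errors:
        return "retrain_link"
    return "healthy"

print(assess_link_health(0.55, 65.0, 1))   # -> 'healthy'
print(assess_link_health(0.30, 65.0, 1))   # -> 'schedule_lane_repair'
```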

Another notable aspect is the advent of new and emerging usages, particularly with streaming protocols. Whereas UCIe 1.0’s support for such protocols was restricted to Raw Mode, UCIe 1.1 extends the utility of the die-to-die (D2D) adapter on the FDI interface to streaming protocols. This extension enables a blend of CRC, retry, and power management features, and facilitates the coexistence of multiple protocols.

UCIe 1.1 also considers cost optimization for advanced packaging solutions in anticipation of shrinking bump pitches and the advent of 3D integration. The introduction of additional column arrangements in UCIe 1.1 creates broader opportunities for mix-and-match dies.

“In a chiplet environment, the dies are very close to each other and your shoreline is very limited,” says Chaudhry. “You have limited space to connect the dies, and how the number of pins are connected, facing each other, that becomes critical. That is one thing that UCIe is addressing. What should be the pin location? Whether it’s 6-, 8-, or 16-column, how do you arrange it so that when one vendor has an 8-column configuration, they can talk to one with a 12-column configuration and connect to it physically, not just in terms of pins, but also connectivity and shoreline compatibility?”

Designing for interoperability
There are a number of technical hurdles that still stand in the way of the widespread adoption of UCIe. These include a need for precise electrical conformity, predictable signaling realms, and systematic physical interconnects catering to a variety of nodes and manufacturing processes.

“You can also have HBM in there, which can be very tall compared to a single ASIC,” says Amkor’s Chaudhry. “How do you address those height differences? A lot of different issues come into play when you’re putting different dies and different chiplets together.”

Thermal management is also a key element for high-density packaging. Disparate process nodes inevitably present distinct power profiles and heat dissipation characteristics. Bridging these gaps necessitates innovative heat distribution methodologies and sophisticated warpage control to ensure structural integrity and reliable function in complex modules.

“There are a lot of challenges in thermal,” adds Chaudhry. “When you have two dies from different process nodes, how do you make sure that you have a way to dissipate the power equally? Those are some of the challenges as we go along and there’s no general solution to that yet. Those are kind of things that the consortium is looking at right now.”

Continued evolution
Another goal of the UCIe consortium is to ensure that anyone developing a chiplet today will still be able to use that design five years from now, despite progress in the standard during that time.

“It will absolutely evolve,” adds Chaudhry. “PCI did the same thing. They are on Gen 5 or Gen 6 now. USB is the same way with USB 4.0 coming soon. CXL is at 3.1. We expect the same thing to happen to UCIe. It will continuously improve and come up with new and more flexible solutions that our members can adopt.”

“The more people get involved, the more they’re going to start tweaking things,” adds Das Sharma. “Some of them are not going to work out, and some of them are going to work out really well. This is a multi-decade journey, and the key is to learn and adapt and keep moving on.”

Conclusion
The UCIe initiative aims to revolutionize chip package interconnectivity by emulating the success of Peripheral Component Interconnect Express (PCIe) at the PCB level. By facilitating direct inter-die connections within the chip package, UCIe endeavors to drastically cut power usage, enhance bandwidth efficiency, and, ultimately, reduce production costs.

“The good thing about UCIe is that it’s an open standard,” says Chaudhry. “In all, there are about 120 members, and all of them are working together. There are six different working groups that range from mechanical to electrical to security to software and marketing, where they are bringing up new things as they are developing their chiplet-based designs. A lot of things that have happened between UCIe 1.0 and 1.1 are basically due to their input.”

—Ed Sperling contributed to this report.

The post UCIe Goes Back To The Drawing Board appeared first on Semiconductor Engineering.

Why Chiplets Are So Critical In Automotive

Od: John Koon
20. Únor 2024 v 09:10

Chiplets are gaining renewed attention in the automotive market, where increasing electrification and intense competition are forcing companies to accelerate their design and production schedules.

Electrification has lit a fire under some of the biggest and best-known carmakers, which are struggling to remain competitive in the face of very short market windows and constantly changing requirements. Unlike in the past, when carmakers typically ran on five- to seven-year design cycles, the latest technology in vehicles today may well be considered dated within several years. And if they cannot keep up, there is a whole new crop of startups producing cheap vehicles with the ability to update or change out features as quickly as a software update.

But software has speed, security, and reliability limitations, and being able to customize the hardware is where many automakers are now putting their efforts. This is where chiplets fit in, and the focus now is on how to build enough interoperability across large ecosystems to make this a plug-and-play market. The key factors to enable automotive chiplet interoperability include standardization, interconnect technologies, communication protocols, power and thermal management, security, testing, and ecosystem collaboration.

Similar to non-automotive applications at the board level, many design efforts are focusing on a die-to-die approach, which is driving a number of novel design considerations and tradeoffs. At the chip level, the interconnects between various processors, chips, memory, and I/O are becoming more complex due to increased design performance requirements, spurring a flurry of standards activities. Different interconnect and interface types have been proposed to serve varying purposes, while emerging chiplet technologies for dedicated functions — processors, memories, and I/Os, to name a few — are changing the approach to chip design.

“There is a realization by automotive OEMs that to control their own destiny, they’re going to have to control their own SoCs,” said David Fritz, vice president of virtual and hybrid systems at Siemens EDA. “However, they don’t understand how far along EDA has come since they were in college in 1982. Also, they believe they need to go to the latest process node, where a mask set is going to cost $100 million. They can’t afford that. They also don’t have access to talent because the talent pool is fairly small. With all that together comes the realization by the OEMs that to control their destiny, they need a technology that’s developed by others, but which can be combined however needed to have a unique differentiated product they are confident is future-proof for at least a few model years. Then it becomes economically viable. The only thing that fits the bill is chiplets.”

Chiplets can be optimized for specific functions, which can help automakers meet reliability, safety, and security requirements with technology that has been proven across multiple vehicle designs. In addition, they can shorten time to market and ultimately reduce the cost of different features and functions.

Demand for chips has been on the rise for the past decade. According to Allied Market Research, global automotive chip demand will grow from $49.8 billion in 2021 to $121.3 billion by 2031. That growth will attract even more automotive chip innovation and investment, and chiplets are expected to be a big beneficiary.

But the marketplace for chiplets will take time to mature, and it will likely roll out in phases.  Initially, a vendor will provide different flavors of proprietary dies. Then, partners will work together to supply chiplets to support each other, as has already happened with some vendors. The final stage will be universally interoperable chiplets, as supported by UCIe or some other interconnect scheme.

Getting to the final stage will be the hardest, and it will require significant changes. To ensure interoperability, large enough portions of the automotive ecosystem and supply chain must come together, including hardware and software developers, foundries, OSATs, and material and equipment suppliers.

Momentum is building
On the plus side, not all of this is starting from scratch. At the board level, modules and sub-systems always have used onboard chip-to-chip interfaces, and they will continue to do so. Various chip and IP providers, including Cadence, Diode, Microchip, NXP, Renesas, Rambus, Infineon, Arm, and Synopsys, provide off-the-shelf interface chips or IP to create the interface silicon.

The Universal Chiplet Interconnect Express (UCIe) Consortium is the driving force behind the die-to-die, open interconnect standard. The group released its latest UCIe 1.1 specification in August 2023. Board members include Alibaba, AMD, Arm, ASE, Google Cloud, Intel, Meta, Microsoft, NVIDIA, Qualcomm, Samsung, and others. Industry partners are showing widespread support. AIB and Bunch of Wires (BoW) also have been proposed. In addition, Arm just released its own Chiplet System Architecture, along with an updated AMBA spec to standardize protocols for chiplets.

“Chiplets are already here, driven by necessity,” said Arif Khan, senior product marketing group director for design IP at Cadence. “The growing processor and SoC sizes are hitting the reticle limit and the diseconomies of scale. Incremental gains from process technology advances are lower than rising cost per transistor and design. The advances in packaging technology (2.5D/3D) and interface standardization at a die-to-die level, such as UCIe, will facilitate chiplet development.”

Nearly all of the chiplets used today are developed in-house by big chipmakers such as Intel, AMD, and Marvell, because they can tightly control the characteristics and behavior of those chiplets. But there is work underway at every level to open this market to more players. When that happens, smaller companies can begin capitalizing on what the high-profile trailblazers have accomplished so far, and innovating around those developments.

“Many of us believe the dream of having an off-the-shelf, interoperable chiplet portfolio will likely take years before becoming a reality,” said Guillaume Boillet, senior director strategic marketing at Arteris, adding that interoperability will emerge from groups of partners who are addressing the risk of incomplete specifications.

This also is raising the attractiveness of FPGAs and eFPGAs, which can provide a level of customization and updates for hardware in the field. “Chiplets are a real thing,” said Geoff Tate, CEO of Flex Logix. “Right now, a company building two or more chiplets can operate much more economically than a company building near-reticle-size die with almost no yield. Chiplet standardization still appears to be far away. Even UCIe is not a fixed standard yet. Not all agree on UCIe, bare die testing, and who owns the problem when the integrated package doesn’t work, etc. We do have some customers who use or are evaluating eFPGA for interfaces where standards are in flux like UCIe. They can implement silicon now and use the eFPGA to conform to standards changes later.”

There are other efforts supporting chiplets, as well, although for somewhat different reasons — notably, the rising cost of device scaling and the need to incorporate more features into chips, which are reticle-constrained at the most advanced nodes. But those efforts also pave the way for chiplets in automotive, and there is strong industry backing to make this all work. For example, under the sponsorship of SEMI, ASME, and three IEEE Societies, the new Heterogeneous Integration Roadmap (HIR) looks at various microelectronics design, materials, and packaging issues to come up with a roadmap for the semiconductor industry. Their current focus includes 2.5D, 3D-ICs, wafer-level packaging, integrated photonics, MEMS and sensors, and system-in-package (SiP), aerospace, automotive, and more.

At the recent Heterogeneous Integration Global Summit 2023, representatives from AMD, Applied Materials, ASE, Lam Research, MediaTek, Micron, Onto Innovation, TSMC, and others demonstrated strong support for chiplets. Another group that supports chiplets is the Chiplet Design Exchange (CDX) working group, which is part of the Open Domain Specific Architecture (ODSA) and the Open Compute Project Foundation (OCP). The CDX charter focuses on the various characteristics of chiplets and chiplet integration, including electrical, mechanical, and thermal design exchange standards for 2.5D stacked and 3D integrated circuits (3D-ICs). Its representatives include Ansys, Applied Materials, Arm, Ayar Labs, Broadcom, Cadence, Intel, Macom, Marvell, Microsemi, NXP, Siemens EDA, Synopsys, and others.

“The things that automotive companies want in terms of what each chiplet does in terms of functionality is still in an upheaval mode,” Siemens’ Fritz noted. “One extreme has these problems, the other extreme has those problems. This is the sweet spot. This is what’s needed. And these are the types of companies that can go off and do that sort of work, and then you could put them together. Then this interoperability thing is not a big deal. The OEM can make it too complex by saying, ‘I have to handle that whole spectrum of possibilities.’ The alternative is that they could say, ‘It’s just like a high speed PCIe. If I want to communicate from one to the other, I already know how to do that. I’ve got drivers that are running my operating system. That would solve an awful lot of problems, and that’s where I believe it’s going to end up.”

One path to universal chiplet development?
Moving forward, chiplets are a focal point for both the automotive and chip industries, and that will involve everything from chiplet IP to memory interconnects and customization options and limitations.

For example, Renesas Electronics announced plans in November 2023 for its next-generation SoCs and MCUs. The company is targeting all major applications across the automotive digital domain, and it shared advance information about its fifth-generation R-Car SoC for high-performance applications, which uses advanced in-package chiplet integration technology meant to give automotive engineers greater flexibility to customize their designs.

Renesas noted that if more AI performance is required in Advanced Driver Assistance Systems (ADAS), engineers will have the capability to integrate AI accelerators into a single chip. The company said this roadmap comes after years of collaboration and discussions with Tier 1 and OEM customers, which have been clamoring for a way to accelerate development without compromising quality, including designing and verifying the software even before the hardware is available.

“Due to the ever increasing needs to increase compute on demand, and the increasing need for higher levels of autonomy in the cars of tomorrow, we see challenges in monolithic solutions scaling and providing the performance needs of the market in the upcoming years,” said Vasanth Waran, senior director for SoC Business & Strategies at Renesas. “Chiplets allows for the compute solutions to scale above and beyond the needs of the market.”

Renesas announced plans to create a chiplet-based product family specifically targeted at the automotive market starting in 2025.

Standard interfaces allow for SoC customization
It is not entirely clear how much overlap there will be between standard processors, which is where most chiplets are used today, and chiplets developed for automotive applications. But the underlying technologies and developments certainly will build off each other as this technology shifts into new markets.

“Whether it is an AI accelerator or ADAS automotive application, customers need standard interface IP blocks,” noted David Ridgeway, senior product manager, IP accelerated solutions group at Synopsys. “It is important to provide fully verified IP subsystems around their IP customization requirements to support the subsystem components used in the customers’ SoCs. When I say customization, you might not realize how customizable IP has become over the course of the last 10 to 20 years, on the PHY side as well as the controller side. For example, PCI Express has gone from PCIe Gen 3 to Gen 4 to Gen 5 and now Gen 6. The controller can be configured to support multiple bifurcation modes of smaller link widths, including one x16, two x8, or four x4. Our subsystem IP team works with customers to ensure all the customization requirements are met. For AI applications, signal and power integrity is extremely important to meet their performance requirements. Almost all our customers are seeking to push the envelope to achieve the highest memory bandwidth speeds possible so that their TPU can process many more transactions per second. Whenever the applications are cloud computing or artificial intelligence, customers want the fastest response rate possible.”
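The bifurcation modes Ridgeway mentions can be thought of as a small table of legal lane splits that the controller configuration must respect. A simplified sketch; the mode names and validation logic are illustrative, not an actual controller API:

```python
VALID_BIFURCATIONS = {        # legal splits for a 16-lane controller (illustrative)
    (16,): "one x16",
    (8, 8): "two x8",
    (4, 4, 4, 4): "four x4",
}

def configure_bifurcation(lane_split) -> str:
    """Validate a requested lane split against the modes named above.
    Real controllers expose this through straps or config registers;
    this only captures the bookkeeping."""
    key = tuple(lane_split)
    if key not in VALID_BIFURCATIONS or sum(key) != 16:
        raise ValueError(f"unsupported bifurcation: {lane_split}")
    return VALID_BIFURCATIONS[key]

print(configure_bifurcation([8, 8]))   # -> 'two x8'
```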

Fig 1: IP blocks including processor, digital, PHY, and verification help developers implement the entire SoC. Source: Synopsys

Optimizing PPA serves the ultimate goal of increasing efficiency, and this makes chiplets particularly attractive in automotive applications. When UCIe matures, it is expected to improve overall performance dramatically. For example, UCIe can deliver a shoreline bandwidth of 28 to 224 GB/s/mm in a standard package, and 165 to 1,317 GB/s/mm in an advanced package, a performance improvement of 20- to 100-fold. Bringing latency down from 20ns to 2ns represents a 10-fold improvement. Around 10 times greater power efficiency, at 0.5 pJ/b (standard package) and 0.25 pJ/b (advanced package), is another plus. The key is shortening the interface distance whenever possible.
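Plugging the paragraph's numbers into a quick calculation shows what they mean for a concrete design. Assuming 2 mm of available die edge and a 1 GB transfer, both arbitrary choices made here for illustration:

```python
def shoreline_bandwidth_gb_s(gb_per_s_per_mm: float, shoreline_mm: float) -> float:
    """Aggregate bandwidth across a given length of die edge."""
    return gb_per_s_per_mm * shoreline_mm

def transfer_energy_mj(bytes_moved: float, pj_per_bit: float) -> float:
    """Energy to move a payload at a given pJ/bit figure, in millijoules."""
    return bytes_moved * 8 * pj_per_bit * 1e-12 * 1e3

print(shoreline_bandwidth_gb_s(224, 2))     # standard package: 448 GB/s
print(shoreline_bandwidth_gb_s(1317, 2))    # advanced package: 2,634 GB/s
print(transfer_energy_mj(1e9, 0.5))         # 1 GB at 0.5 pJ/b  -> 4.0 mJ
print(transfer_energy_mj(1e9, 0.25))        # 1 GB at 0.25 pJ/b -> 2.0 mJ
```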

To optimize chiplet designs, the UCIe Consortium provides some suggestions:

  • Careful planning of architectural cut-lines (i.e., chiplet boundaries), optimizing for power, latency, silicon area, and IP reuse. For example, customizing one chiplet that needs a leading-edge process node while re-using other chiplets on older nodes may impact cost and time.
  • Thermal and mechanical packaging constraints need to be planned out for package thermal envelopes, hot spots, chiplet placements, and I/O routing and breakouts.
  • Process nodes need to be carefully selected, particularly in the context of the associated power delivery scheme.
  • The test strategy for chiplets and packaged/assembled parts needs to be developed up front to ensure silicon issues are caught at the chiplet-level testing phase rather than after they are assembled into a package.

Conclusion
The idea of standardizing die-to-die interfaces is catching on quickly, but the path to get there will take time, effort, and a lot of collaboration among companies that rarely talk with each other. Building a vehicle takes one determined carmaker. Building a vehicle with chiplets requires an entire ecosystem of developers, foundries, OSATs, and material and equipment suppliers working together.

Automotive OEMs are experts at putting systems together and at finding innovative ways to cut costs. But it remains to be seen how quickly and effectively they can build and leverage an ecosystem of interoperable chiplets to shrink design cycles, improve customization, and adapt to a world in which leading-edge technology may be outdated by the time it is fully designed, tested, and available to consumers.

— Ann Mutschler contributed to this report.

Related Reading
Automotive Relationships Shifting With Chiplets
As the automotive ecosystem balances the best approaches for designing in increasingly advanced features, how companies interact is still evolving.

The post Why Chiplets Are So Critical In Automotive appeared first on Semiconductor Engineering.
