Achieving Zero Defect Manufacturing Part 2: Finding Defect Sources

By: Prasad Bachiraju
August 6, 2024, 09:07

Semiconductor manufacturing creates a wealth of data – from materials, products, factory subsystems and equipment. But how do we best utilize that information to optimize processes and reach the goal of zero defect manufacturing?

This is a topic we first explored in our previous blog, “Achieving Zero Defect Manufacturing Part 1: Detect & Classify.” In it, we examined real-time defect classification at the defect, die and wafer level. In this blog, the second in our three-part series, we will discuss how to use root cause analysis to determine the source of defects. For starters, we will address the software tools needed to properly conduct root cause analysis for a faster understanding of visual, non-visual and latent defect sources.

About software

The software platform fabs choose impacts how well users can integrate data, conduct database analytics and perform server-side and real-time analytics. Manufacturers need a platform that can scale across data volumes, data types and multiple sites. In addition, all of this data – whether it comes from metrology, inspection or testing – must be normalized before fabs can apply predictive modeling and machine learning-based analytics to find the root cause of defects and failures. This search, however, goes beyond a simple examination of process steps and tools; manufacturers also need a clear understanding of each device’s genealogy. In addition, fabs should employ an AI-based yield optimizer capable of running multiple models and offering potential optimization measures that can be taken in the factory to improve the process.
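
As a simple illustration of that normalization step, the sketch below (not any particular vendor’s platform; the wafer IDs, sources and values are hypothetical) z-scores measurements within each originating source so metrology, inspection and test data can feed a common model:

```python
# Minimal sketch: normalize measurements per data source so values from
# different tools become comparable before ML-based analytics.
import pandas as pd

measurements = pd.DataFrame({
    "wafer_id": ["W01", "W01", "W02", "W02", "W03", "W03"],
    "source":   ["metrology", "test", "metrology", "test", "metrology", "test"],
    "value":    [101.2, 0.87, 99.8, 0.91, 105.5, 0.62],
})

# Z-score each metric within its own source (metrology vs. test here).
measurements["value_norm"] = (
    measurements.groupby("source")["value"]
    .transform(lambda s: (s - s.mean()) / s.std(ddof=0))
)
print(measurements)
```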

Now that we have discussed software needs, we will turn our attention to two use cases to further our examination of root cause analysis in zero defect manufacturing.

Root Cause Case No. 1

The first root cause use case we would like to discuss involves the integration of wafer probe, photoluminescence and epitaxial (epi) data. Previously, integrating these three kinds of data was not possible because the wafer and lot identifiers used before and after the epi step were generally not linked. Wafers and lots were often identified by entirely different names before and after the epi step. This was a major hindrance to zero defect manufacturing, because the impact of the epi process on yield was not detected in a timely manner, resulting in higher defectivity and yield loss.

But the challenge is not as simple as identification and naming practices. Typical wafer ID trackers are not applied prior to the post-epi step because of technical and logistical constraints. The solution is for fabs to employ defect and yield analytics software that builds a genealogy linking data from the pre-epi and epi processes to the post-epi processes. The real innovation occurs when that genealogical information is normalized and combined with electrical test data. Once integrated, this data offers users a more complete understanding of where yield-limiting events are occurring.

Fig. 1: Photoluminescence map (left) and electrical test performance by epi tool (right).

For example, consider the following scenario: figure 1 (left) shows a group of dies on the upper left edge of the wafer that negatively affect performance. With more traditional measures, this pocket of defectivity might have gone unnoticed, allowing bad die to move forward in the process. But by combining the integrated data, genealogical information and electrical test data, this trouble area was traced to a specific epi tool and chamber (figure 1, right), and the defective material was prevented from moving forward in the process. Just as significant, with the right software platform this approach allows root cause analysis to be completed in minutes, not days.
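
A minimal sketch of how such a genealogy link can be expressed, assuming hypothetical wafer names and a simple tabular join rather than any specific yield-analytics product:

```python
# Minimal sketch: link pre-epi tool history to post-epi electrical test data
# through a genealogy table, then summarize yield by epi tool and chamber.
import pandas as pd

# Pre-epi context: which epi tool/chamber processed each pre-epi wafer.
epi_history = pd.DataFrame({
    "pre_epi_wafer": ["A01", "A02", "A03"],
    "epi_tool":      ["EPI-1", "EPI-1", "EPI-2"],
    "epi_chamber":   ["CH-A", "CH-B", "CH-A"],
})

# Genealogy table tying pre-epi names to post-epi names.
genealogy = pd.DataFrame({
    "pre_epi_wafer":  ["A01", "A02", "A03"],
    "post_epi_wafer": ["B11", "B12", "B13"],
})

# Electrical test results keyed by the post-epi wafer name.
etest = pd.DataFrame({
    "post_epi_wafer": ["B11", "B12", "B13"],
    "yield_pct":      [92.1, 74.3, 91.5],
})

linked = (epi_history.merge(genealogy, on="pre_epi_wafer")
                     .merge(etest, on="post_epi_wafer"))
print(linked.groupby(["epi_tool", "epi_chamber"])["yield_pct"].mean())
```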

Now, onto the second use case in which we look at how to problem solve within the supply chain.

Root Cause Case No. 2

During final test and measurement, chips sometimes fail. In many cases, the failing chips were previously judged to be good and moved forward in the process, where they were combined with other chips from different products, lots, or wafers. The important thing here is to understand why this happens.

When a genealogy model is part of the yield software platform, fabs can trace bad chips back to the lots and wafers they came from and then run that information through pattern analysis software. In one scenario (figure 2), pattern analysis revealed that all of the defective die traced back to a spin coater issue, in this case a leak that degraded the underbump metallization area after routine preventive maintenance.

To compensate for this, the team used integrated analytics to create a fault detection and classification (FDC) model to identify similar excursions going forward. In this case, the FDC model monitors the suction power of the spin coater. If suction power stays above the set limit for more than 10 consecutive samples, an alarm is triggered and the appropriate Out of Control Action Plan (OCAP) is executed, including notification to the tool owner.
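
A minimal sketch of that FDC rule, with a hypothetical control limit and a placeholder OCAP notification standing in for a real fab FDC system:

```python
# Minimal sketch: alarm when suction power exceeds its limit for more than
# 10 consecutive samples, then kick off the OCAP flow.
from typing import Iterable

SUCTION_LIMIT = 85.0          # hypothetical control limit, arbitrary units
CONSECUTIVE_THRESHOLD = 10    # samples above the limit before alarming

def trigger_ocap(sample_index: int, value: float) -> None:
    # Placeholder for the Out of Control Action Plan: in production this would
    # raise the alarm and notify the tool owner through the fab's FDC system.
    print(f"OCAP triggered at sample {sample_index}: suction power {value:.1f}")

def monitor_suction(samples: Iterable[float]) -> None:
    consecutive = 0
    for i, value in enumerate(samples):
        consecutive = consecutive + 1 if value > SUCTION_LIMIT else 0
        if consecutive > CONSECUTIVE_THRESHOLD:
            trigger_ocap(sample_index=i, value=value)
            consecutive = 0   # reset after the alarm so the rule can re-arm

# Example trace with a sustained excursion between samples 20 and 35.
monitor_suction([80 + (6 if 20 <= i <= 35 else 0) + (i % 3) for i in range(50)])
```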

Fig. 2: Proactive zero defect manufacturing at-a-glance.

The above shows how fabs can turn reactive root cause analytics into proactive monitoring. With such an approach, manufacturers can watch for this and other issues and prevent future defective die from advancing. Furthermore, 40 or more distinct defect signatures can be monitored inline. And if these defects are missed at the process level, they can still be caught at inspection or post-inspection, avoiding hundreds of issues further along in the process.

Conclusion

Zero defect manufacturing is not so much a goal as a commitment to root out defects before they happen. To accomplish this, fabs need a wealth of data from the entire process to get a clear picture of what is going wrong, where it is going wrong and why. In this blog, we offered specific scenarios where root cause analysis was used to find defects across wafers and dies. These are just a few examples of how software can be used to uncover hard-to-find defects. It can be applied in many areas across the entire process, with each application further strengthening a fab’s zero defect manufacturing approach, increasing yield and meeting the stringent requirements of some of the industry’s most advanced customers.

In our next blog, we will discuss how to detect dormant defects, use feedback and feedforward measures, and monitor the health of process control equipment. We hope you join us as we continue to explore methods for achieving zero defect manufacturing.

The post Achieving Zero Defect Manufacturing Part 2: Finding Defect Sources appeared first on Semiconductor Engineering.


Metrology And Inspection For The Chiplet Era

By: Gregory Haley
August 6, 2024, 09:03

New developments and innovations in metrology and inspection will enable chipmakers to identify and address defects faster and with greater accuracy than ever before, all of which will be required at future process nodes and in densely-packed assemblies of chiplets.

These advances will affect both front-end and back-end processes, providing increased precision and efficiency, combined with artificial intelligence/machine learning and big data analytics. These kinds of improvements will be crucial for meeting the industry’s changing needs, enabling deeper insights and more accurate measurements at rates suitable for high-volume manufacturing. But gaps still need to be filled, and new ones are likely to show up as new nodes and processes are rolled out.

“As semiconductor devices become more complex, the demand for high-resolution, high-accuracy metrology tools increases,” says Brad Perkins, product line manager at Nordson Test & Inspection. “We need new tools and techniques that can keep up with shrinking geometries and more intricate designs.”

The shift to high-NA EUV lithography (0.55 NA EUV) at the 2nm node and beyond is expected to exacerbate stochastic variability, demanding more robust metrology solutions on the front end. Traditional critical dimension (CD) measurements alone are insufficient for the level of analysis required. Comprehensive metrics, including line-edge roughness (LER), line-width roughness (LWR), local edge-placement error (LEPE), and local CD uniformity (LCDU), alongside CD measurements, are necessary for ensuring the integrity and performance of advanced semiconductor devices. These metrics require sophisticated tools that can capture and analyze tiny variations at the nanometer scale, where even slight discrepancies can significantly impact device functionality and yield.
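
For illustration, the sketch below computes LER and LWR from synthetic edge-position data using the common 3-sigma convention; production metrology recipes add detrending, frequency filtering and SEM-noise correction that are omitted here:

```python
# Minimal sketch: line-edge roughness (LER) and line-width roughness (LWR)
# from measured left/right edge positions (nm) of a single line.
import numpy as np

rng = np.random.default_rng(0)
y = np.arange(200)                                   # positions along the line
left_edge  = 0.0  + rng.normal(0.0, 0.6, y.size)     # synthetic edges, nm
right_edge = 15.0 + rng.normal(0.0, 0.6, y.size)     # nominal 15nm linewidth

def ler(edge: np.ndarray, positions: np.ndarray) -> float:
    """LER as 3 sigma of the edge residuals around a linear fit."""
    fit = np.polyval(np.polyfit(positions, edge, 1), positions)
    return 3.0 * np.std(edge - fit)

linewidth = right_edge - left_edge
lwr = 3.0 * np.std(linewidth - linewidth.mean())      # line-width roughness
cd  = linewidth.mean()                                # mean critical dimension

print(f"CD = {cd:.2f} nm, LER(left) = {ler(left_edge, y):.2f} nm, LWR = {lwr:.2f} nm")
```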

“Metrology is now at the forefront of yield, especially considering the current demands for DRAM and HBM,” says Hamed Sadeghian, president and CEO of Nearfield Instruments. “The next generations of HBMs are approaching a stage where hybrid bonding will be essential due to the increasing stack thickness. Hybrid bonding requires high resolutions in vertical directions to ensure all pads, and the surface height versus the dielectric, remain within nanometer-scale process windows. Consequently, the tools used must be one order of magnitude more precise.”

To address these challenges, companies are developing hybrid metrology systems that combine various measurement techniques for a comprehensive data set. Integrating scatterometry, electron microscopy, and/or atomic force microscopy allows for more thorough analysis of critical features. Moreover, AI and ML algorithms enhance the predictive capabilities of these tools, enabling process adjustments.

“Our customers who are pushing into more advanced technology nodes are desperate to understand what’s driving their yield,” says Ronald Chaffee, senior director of applications engineering at NI/Emerson Test & Measurement. “They may not know what all the issues are, but they are gathering all possible data — metrology, AEOI, and any measurable parameters — and seeking correlations.”

Traditional methods for defect detection, pattern recognition, and quality control typically used spatial pattern-recognition modules and wafer image-based algorithms to address wafer-level issues. “However, we need to advance beyond these techniques,” says Prasad Bachiraju, senior director of business development at Onto Innovation. “Our observations show that about 20% of wafers have systematic issues that can limit yield, with nearly 4% being new additions. There is a pressing need for advanced metrology for in-line monitoring to achieve zero-defect manufacturing.”

Several companies recently announced metrology innovations to provide more precise inspections, particularly for difficult-to-see areas, edge effects, and highly reflective surfaces.

Nordson unveiled its AMI SpinSAM acoustic rotary scan system. The system represents a significant departure from traditional raster scan methods, utilizing a rotational scanning approach. Rather than moving the wafer in an x,y pattern relative to a stationary lens, the wafer spins, similar to a record player. This reduces motion over the wafer and increases inspection speed, negating the need for image stitching and improving image quality.

“For years, we’d been trying to figure out this technique, and it’s gratifying to finally achieve it. It’s something we’ve always thought would be incredibly beneficial,” says Perkins. “The SpinSAM is designed primarily to enhance inspection speed and efficiency, addressing the common industry demand for more product throughput and better edge inspection capabilities.”

Meanwhile, Nearfield Instruments introduced a multi-head atomic force microscopy (AFM) system called QUADRA. It is a high-throughput, non-destructive metrology tool for HVM that features a novel multi-miniaturized AFM head architecture. Nearfield claims the parallel independent multi-head scanner can deliver a 100-fold throughput advantage versus conventional single-probe AFM tools. This architecture allows for precise measurements of high-aspect-ratio structures and complex 3D features, critical for advanced memory (3D NAND, DRAM, HBM) and logic processes.


Fig. 1: Image capture comparison of standard AFM and multi-head AFM. Source: Nearfield Instruments

In April, Onto Innovation debuted an advancement in subsurface defect inspection technology with the release of its Dragonfly G3 inspection system. The new system allows for 100% wafer inspection, targeting subsurface defects that can cause yield losses, such as micro-cracks and other hidden flaws that may lead to entire wafers breaking during subsequent processing steps. The Dragonfly G3 utilizes novel infrared (IR) technology combined with specially designed algorithms to detect these defects, which previously were undetectable in a production environment. This new capability supports HBM, advanced logic, and various specialty segments, and aims to improve final yield and cost savings by reducing scrapped wafers and die stacks.

More recently, researchers at the Paul Scherrer Institute announced a high-performance X-ray tomography technique using burst ptychography. This new method provides non-destructive, detailed views of nanostructures as small as 4nm in materials like silicon and metals, at a fast acquisition rate of 14,000 resolution elements per second. The tomographic back-propagation reconstruction allows imaging of samples up to ten times larger than the conventional depth of field.

There are other technologies and techniques for improving metrology in semiconductor manufacturing, as well, including wafer-level ultrasonic inspection, which involves flipping the wafer to inspect from the other side. New acoustic microscopy techniques, such as scanning acoustic microscopy (SAM) and time-of-flight acoustic microscopy (TOF-AM), enable the detection and characterization of very small defects, such as voids, delaminations, and cracks within thin films and interfaces.

“We used to look at 80 to 100 micron resist films, but with 3D integrated packaging, we’re now dealing with films that are 160 to 240 microns—very thick resist films,” says Christopher Claypool, senior application scientist at Bruker OCD. “In TSVs and microbumps, the dominant technique today is white light interferometry, which provides profile information. While it has some advantages, its throughput is slow, and it’s a focus-based technique. This limitation makes it difficult to measure TSV structures smaller than four or five microns in diameter.”

Acoustic metrology tools equipped with the newest generation of focal length transducers (FLTs) can focus acoustic waves with precision down to a few nanometers, allowing for non-destructive detailed inspection of edge defects and critical stress points. This capability is particularly useful for identifying small-scale defects that might be missed by other inspection methods.

The development and integration of smart sensors in metrology equipment is instrumental in collecting the vast amounts of data needed for precise measurement and quality control. These sensors are highly sensitive and capable of operating under various environmental conditions, ensuring consistent performance. One significant advantage of smart sensors is their ability to facilitate predictive maintenance. By continuously monitoring the health and performance of metrology equipment, these sensors can predict potential failures and schedule maintenance before significant downtime occurs. This capability enhances the reliability of the equipment, reduces maintenance costs, and improves overall operational efficiency.
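
A minimal sketch of that predictive-maintenance idea, using a hypothetical vibration health signal from a metrology tool and an arbitrary rolling-mean threshold:

```python
# Minimal sketch: flag a tool for maintenance when a rolling mean of a
# health signal drifts past a limit, before a hard failure occurs.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
# Synthetic health signal: stable baseline, then a slow upward drift.
vibration = pd.Series(np.r_[rng.normal(1.0, 0.05, 300),
                            rng.normal(1.0, 0.05, 100) + np.linspace(0, 0.4, 100)])

ROLLING_WINDOW = 25
MAINTENANCE_LIMIT = 1.15     # hypothetical threshold for the rolling mean

rolling_mean = vibration.rolling(ROLLING_WINDOW).mean()
flagged = rolling_mean[rolling_mean > MAINTENANCE_LIMIT]

if not flagged.empty:
    print(f"Schedule maintenance: drift detected at sample {flagged.index[0]}")
```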

Smart sensors also are being developed to integrate seamlessly with metrology systems, offering real-time data collection and analysis. These sensors can monitor various parameters throughout the manufacturing process, providing continuous feedback and enabling quick adjustments to prevent defects. Smart sensors, combined with big data platforms and advanced data analytics, allow for more efficient and accurate defect detection and classification.

Critical stress points

A persistent challenge in semiconductor metrology is the identification and inspection of defects at critical stress points, particularly at the silicon edges. For bonded wafers, it’s at the outer ring of the wafer. For chip-on-wafer packaging, it’s at the edge of the chips. These edge defects are particularly problematic because they occur at the points of highest stress, farthest from the neutral axis, making them more prone to failure. As semiconductor devices continue to involve more intricate packaging techniques, such as chip-on-wafer and wafer-level packaging, the focus on edge inspection becomes even more critical.

“When defects happen in a factory, you need imaging that can detect and classify them,” says Onto’s Bachiraju. “Then you need to find the root causes of where they’re coming from, and for that you need the entire data integration and a big data platform to help with faster analysis.”

Another significant challenge in semiconductor metrology is ensuring the reliability of known good die (KGD), especially as advanced packaging techniques and chiplets become more prevalent. Ensuring that every chip/chiplet in a stacked die configuration is of high quality is essential for maintaining yield and performance, but the speed of metrology processes is a constant concern. This leads to a balancing act between thoroughness and efficiency. The industry continuously seeks to develop faster machines that can handle the increasing volume and complexity of inspections without compromising accuracy. In this race, innovations in data processing and analysis are key to achieving quicker results.

“Customers would like, generally, 100% inspection for a lot of those processes because of the known good die, but it’s cost-prohibitive because the machines just can’t run fast enough,” says Nordson’s Perkins.

Metrology and Industry 4.0

Industry 4.0 — a term introduced in Germany in 2011 for the fourth industrial revolution, and called smart manufacturing in the U.S. — emphasizes the integration of digital technologies such as the Internet of Things, artificial intelligence, and big data analytics into manufacturing processes. Unlike past revolutions driven by mechanization, electrification, and computerization, Industry 4.0 focuses on connectivity, data, and automation to enhance manufacturing capabilities and efficiency.

“The better the data integration is, the more efficient the yield ramp,” says Dieter Rathei, CEO of DR Yield. “It’s essential to integrate all available data into the system for effective monitoring and analysis.”

In semiconductor manufacturing, this shift toward Industry 4.0 is particularly transformative, driven by the increasing complexity of semiconductor devices and the demand for higher precision and yield. Traditional metrology methods, heavily reliant on manual processes and limited automation, are evolving into highly interconnected systems that enable real-time data sharing and decision-making across the entire production chain.

“There haven’t been many tools to consolidate different data types into a single platform,” says NI’s Chaffee. “Historically, yield management systems focused on testing, while FDC or process systems concentrated on the process itself, without correlating the two. As manufacturers push into the 5, 3, and 2nm spaces, they’re discovering that defect density alone isn’t the sole governing factor. Process control is also crucial. By integrating all data, even the most complex correlations that a human might miss can be identified by AI and ML. The goal is to use machine learning to detect patterns or connections that could help control and optimize the manufacturing process.”

IoT forms the backbone of Industry 4.0 by connecting various devices, sensors, and systems within the manufacturing environment. In semiconductor manufacturing, IoT enables seamless communication between metrology tools, production equipment, and factory management systems. This interconnected network facilitates real-time monitoring and control of manufacturing processes, allowing for immediate adjustments and optimization.

“You need to integrate information from various sources, including sensors, metrology tools, and test structures, to build predictive models that enhance process control and yield improvement,” says Michael Yu, vice president of advanced solutions at PDF Solutions. “This holistic approach allows you to identify patterns and correlations that were previously undetectable.”

AI and ML are pivotal in processing and analyzing the vast amounts of data generated in a smart factory. These technologies can identify patterns, predict equipment failures, and optimize process parameters with a level of precision and speed unattainable by human operators alone. In semiconductor manufacturing, AI-driven analytics enhance process control, improve yield rates, and reduce downtime. “One of the major trends we see is the integration of artificial intelligence and machine learning into metrology tools,” says Perkins. “This helps in making sense of the vast amounts of data generated and enables more accurate and efficient measurements.”

AI’s role extends further as it assists in discovering anomalies within the production process that might have gone unnoticed with traditional methods. AI algorithms integrated into metrology systems can dynamically adjust processes in real-time, ensuring that deviations are corrected before they affect the end yield. This incorporation of AI minimizes defect rates and enhances overall production quality.

“Our experience has shown that in the past 20 years, machine learning and AI algorithms have been critical for automatic data classification and die classification,” says Bachiraju. “This has significantly improved the efficiency and accuracy of our metrology tools.”

Big data analytics complements AI/ML by providing the infrastructure necessary to handle and interpret massive datasets. In semiconductor manufacturing, big data analytics enables the extraction of actionable insights from data generated by IoT devices and production systems. This capability is crucial for predictive maintenance, quality control, and continuous process improvement.

“With big data, we can identify patterns and correlations that were previously impossible to detect, leading to better process control and yield improvement,” says Perkins.

Big data analytics also helps in understanding the lifecycle of semiconductor devices from production to field deployment. By analyzing product performance data over time, manufacturers can predict potential failures and enhance product designs, increasing reliability and lifecycle management.

“In the next decade, we see a lot of opportunities for AI,” says DR Yield’s Rathei. “The foundation for these advancements is the availability of comprehensive data. AI models need extensive data for training. Once all the data is available, we can experiment with different models and ideas. The ingenuity of engineers, combined with new tools, will drive exponential progress in this field.”

Metrology gaps remain

Despite recent advancements in metrology, analytics, and AI/ML, several gaps remain, particularly in the context of high-volume manufacturing (HVM) and next-generation devices. The U.S. Commerce Department’s CHIPS R&D Metrology Program, along with industry stakeholders, has highlighted seven “grand challenges,” areas where current metrology capabilities fall short:

Metrology for materials purity and properties: There is a critical need for new measurements and standards to ensure the purity and physical properties of materials used in semiconductor manufacturing. Current techniques lack the sensitivity and throughput required to detect particles and contaminants throughout the supply chain.

Advanced metrology for future manufacturing: Next-generation semiconductor devices, such as gate-all-around (GAA) FETs and complementary FETs (CFETs), require breakthroughs in both physical and computational metrology. Existing tools are not yet capable of providing the resolution, sensitivity, and accuracy needed to characterize the intricate features and complex structures of these devices. This includes non-destructive techniques for characterizing defects and impurities at the nanoscale.

“There is a secondary challenge with some of the equipment in metrology, which often involves sampling data from single points on a wafer, much like heat test data that only covers specific sites,” says Chaffee. “To be meaningful, we need to move beyond sampling methods and find creative ways to gather information from every wafer, integrating it into a model. This involves building a knowledge base that can help in detecting patterns and correlations, which humans alone might miss. The key is to leverage AI and machine learning to identify these correlations and make sense of them, especially as we push into the 5, 3, and 2nm spaces. This process is iterative and requires a holistic approach, encompassing various data points and correlating them to understand the physical boundaries and the impact on the final product.”

Metrology for advanced packaging: The integration of sophisticated components and novel materials in advanced packaging technologies presents significant metrology challenges. There is a need for rapid, in-situ measurements to verify interfaces, subsurface interconnects, and internal 3D structures. Current methods do not adequately address issues such as warpage, voids, substrate yield, and adhesion, which are critical for the reliability and performance of advanced packages.

Modeling and simulating semiconductor materials, designs, and components: Modeling and simulating semiconductor materials, designs, and components requires advanced computational models and data analysis tools. Current capabilities are limited in their ability to seamlessly integrate the entire semiconductor value chain, from materials inputs to system assembly.

“Predictive analytics is particularly important,” says Chaffee. “They aim to determine the probability of any given die on a wafer being the best yielding or presenting issues. By integrating various data points and running different scenarios, they can identify and understand how specific equipment combinations, sequences and processes enhance yields.”

Modeling and simulating semiconductor processes: There is a need for standards and validation tools to support digital twins and other advanced simulation techniques that can optimize process development and control.

“Part of the problem comes from the back-end packaging and assembly process, but another part of the problem can originate from the quality of the wafer itself, which is determined during the front-end process,” says PDF’s Yu. “An effective ML model needs to incorporate both front-end and back-end information, including data from equipment sensors, metrology, and structured test information, to make accurate predictions and take proactive actions to correct the process.”

Standardizing new materials and processes: The development of future information and communication technologies hinges on the creation of new standards and validation methods. Current reference materials and calibration services do not meet the requirements for next-generation materials and processes, such as those used in advanced packaging and heterogeneous integration. This gap hampers the industry’s ability to innovate and maintain competitive production capabilities.

Metrology to enhance security and provenance of components and products: With the increasing complexity of the semiconductor supply chain, there is a need for metrology solutions that can ensure the security and provenance of components and products. This involves developing methods to trace materials and processes throughout the manufacturing lifecycle to prevent counterfeiting and ensure compliance with regulatory standards.

“The focus on security and sharing changes the supplier relationship into more of a partnership and less of a confrontation,” says Chaffee. “Historically, there’s always been a concern of data flowing across that boundary. People are very protective about their process, and other people are very protective about their product. But once you start pushing into the deep sub-micron space, those barriers have to come down. The die are too expensive for them not to communicate, but they can still do so while protecting their IP. Companies are starting to realize that by sharing parametric test information securely, they can achieve better yield management and process optimization without compromising their intellectual property.”

Conclusion

Advancements in metrology and testing are pivotal for the semiconductor industry’s continued growth and innovation. The integration of AI/ML, IoT, and big data analytics is transforming how manufacturers approach process control and yield improvement. As adoption of Industry 4.0 grows, the role of metrology will become even more critical in ensuring the efficiency, quality, and reliability of semiconductor devices. And by leveraging these advanced technologies, semiconductor manufacturers can achieve higher yields, reduce costs, and maintain the precision required in this competitive industry.

With continuous improvements and the integration of smart technologies, the semiconductor industry will keep pushing the boundaries of innovation, leading to more robust and capable electronic devices that define the future of technology. The journey toward a fully realized Industry 4.0 is ongoing, and its impact on semiconductor manufacturing undoubtedly will shape the future of the industry, ensuring it stays at the forefront of global technological advancements.

“Anytime you have new packaging technologies and process technologies that are evolving, you have a need for metrology,” says Perkins. “When you are ramping up new processes and need to make continuous improvements for yield, that is when you see the biggest need for new metrology solutions.”

The post Metrology And Inspection For The Chiplet Era appeared first on Semiconductor Engineering.


Driving Cost Lower and Power Higher With GaN

By: Anne Meixner
August 6, 2024, 09:02

Gallium nitride is starting to make broader inroads in the lower-end of the high-voltage, wide-bandgap power FET market, where silicon carbide has been the technology of choice. This shift is driven by lower costs and processes that are more compatible with bulk silicon.

Efficiency, power density (size), and cost are the three major concerns in power electronics, and GaN can meet all three criteria. However, to satisfy all of those criteria consistently, the semiconductor ecosystem needs to develop best practices for test, inspection, and metrology, determining what works best for which applications and under varying conditions.

Power ICs play an essential role in stepping up and down voltage levels from one power source to another. GaN is used extensively today in smart phone and laptop adapters, but market opportunities are beginning to widen for this technology. GaN likely will play a significant role in both data centers and automotive applications [1]. Data centers are expanding rapidly due to the focus on AI and a build-out at the edge. And automotive is keen to use GaN power ICs for inverter modules because they will be cheaper than SiC, as well as for onboard battery chargers (OBCs) and various DC-DC conversions from the battery to different applications in the vehicle.


Fig. 1: Current and future fields of interest for GaN and SiC power devices. Source A. Meixner/Semiconductor Engineering

But to enter new markets, GaN device manufacturers need to ramp up new processes and their associated products more quickly. Because GaN for power transistors is a developing process technology, measurement data is critical for qualifying both the manufacturing process and the reliability of the new semiconductor technology and the resulting products.

Much of GaN’s success will depend on metrology and inspection solutions that offer high throughput, as well as non-destructive testing methods such as optical and X-ray. Electron microscopy is useful for drilling down into key device parameters and defect mechanisms. And electrical tests provide complementary data that assists with product/process validation, reliability and qualification, system-level validation, as well as being used for production screening.

Silicon carbide (SiC) remains the material of choice for very high-voltage applications. It offers better performance and higher efficiency than silicon. But SiC is expensive. It requires different equipment than silicon, it’s difficult to grow SiC ingots, and today there is limited wafer capacity.

In contrast, GaN offers some of the same desirable characteristics as SiC and can operate at even higher switching speeds. GaN wafer production is cheaper because the devices can be built on a silicon substrate using standard silicon processing equipment, apart from the GaN epitaxial deposition tool. That enables a fab or foundry with a silicon CMOS process to ramp a GaN process with an engineering team experienced in GaN.

The cost comparison isn’t entirely apples-to-apples, of course. The highest-voltage GaN on the market today uses silicon on sapphire (SoS) or other engineered substrates, which are more expensive. But below those voltages, GaN typically has a cost advantage, and that has sparked renewed interest in this technology.

“GaN-based products increase the performance envelopes relative to the incumbent and mature silicon-based technologies,” said Vineet Pancholi, senior director of test technology at Amkor. “Switching speeds with GaN enable the application in ways never possible with silicon. But as the GaN production volumes ramp, these products have extreme economic pressures. The production test list includes static attributes. However, the transient and dynamic attributes are the primary benefit of GaN in the end application.”

Others agree. “The world needs cheaper material, and GaN is easy to build,” said Frank Heidemann, vice president and technology leader of SET at NI/Emerson Test & Measurement. “Gallium nitride has had huge success in the lower voltage ranges — anything up to 500V. This is where the GaN process is very well under control. The problem now is that building for higher voltages is a challenge. In the near future there will be products at even higher voltage levels.”

Those higher-voltage applications require new process recipes, new power IC designs, and subsequently product/process validation and qualification.

GaN HEMT properties
Improving the processes needed to create GaN high-electron-mobility transistors (HEMTs) requires a deep understanding of the material properties and the manufacturing consequences of layering these materials.

The underlying physics and structure of wide-bandgap devices differ significantly from those of silicon high-voltage transistors. Silicon transistors rely on doping of p and n materials. When voltage is applied at the gate, it creates a channel for current to flow from source to drain. In contrast, wide-bandgap transistors are built by layering thin films of different materials that differ in their bandgap energy. [2] Applying a voltage to the gate enables an electron exchange between the two materials, driving those electrons along the channel between source and drain.


Fig. 2. Cross-sectional animation of e-mode GaN HEMT device. Source: Zeiss Microscopy

“GaN devices rely on two-dimensional electron gas (2DEG) created at the GaN and AlGaN interface to conduct current at high speed,” said Jiangtao Hu, senior director of product marketing at Onto Innovation. “To enable high electron mobility, the epitaxy process creating complex multi-layer crystalline films must be carefully monitored and controlled, ensuring critical film properties such as thickness, composition, and interface roughness are within a tight spec. The ongoing trend of expanding wafer sizes further requires the measurement to be on-product and non-destructive for uniformity control.”


Fig. 3: SEM cross-section of enhancement-mode GaN HEMT built on silicon which requires a superlattice. Source: Zeiss Microscopy

Furthermore, each layer’s electrical properties need to be understood. “It is of utmost importance to determine, as early as possible in the manufacturing process, the electrical characteristics of the structures, the sheet resistance of the 2DEG, the carrier concentration, and the mobility of carriers in the channel, preferably at the wafer level in a non-destructive assessment,” said Christophe Maleville, CTO and senior executive vice president of innovation at Soitec.

Developing process recipes for GaN HEMT devices at higher operating ranges requires measurements taken during wafer manufacturing and device testing, both for qualification of a process or product and for production manufacturing. Inspection, metrology, and electrical tests focus on process anomalies and defects that impact device performance.

“Crystal defects such as dislocations and stacking faults, which can form during deposition and subsequently be grown over and buried, can create long-term reliability concerns even if the devices pass initial testing,” said David Taraci, business development manager of electronics strategic accounts at ZEISS Research Microscopy Solutions. “Gate oxides can pinch off during deposition, creating voids which may not manifest as an issue immediately.”

The quality of the buffer layer is critical because it affects the breakdown voltage. “The maximum breakdown voltage of the devices will be ultimately limited by the breakdown of the buffer layer grown in between the Si substrate and the GaN channel,” said Soitec’s Maleville. “An electrical assessment (IV at high voltage) requires destructive measurements as well as device isolation. This is performed on a sample basis only.”

One way to raise the voltage limit of a GaN device is to add a ‘gate driver’ that keeps it reliable at higher voltages. But to further expand GaN technology’s performance envelope to higher-voltage operation, engineers need to understand the reliability properties of these new GaN devices.

“We are supporting GaN lifetime validation, which is the prediction of a mission characteristic of lifetime for gallium nitride power devices,” said Emerson’s Heidemann. “Engineers build physics-based failure models of these devices. Next, they investigate the acceleration factors. How can we really make tests and verification properly so that we can assess lifetime health?”

The qualification procedures require life-stress testing that duplicates the predicted mission-profile usage, as well as electrical testing after each life-stress period. That allows engineers to identify shifts in transistor characteristics and outright failures. For example, life-stress periods could start at 4,000 hours and increase in 1,000-hour increments to 12,000 hours, during which time the device is switched on and off with specific ‘on’ durations.
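
A minimal sketch of that readout flow, using the stress checkpoints mentioned above and a hypothetical 20% drift limit on a hypothetical dynamic-RDSon readout:

```python
# Minimal sketch: after each life-stress interval, re-measure a key parameter
# and flag devices whose shift from time-zero exceeds a drift limit.
STRESS_CHECKPOINTS_H = [4000] + list(range(5000, 13000, 1000))  # 4,000..12,000 h
MAX_DRIFT_FRACTION = 0.20                                       # hypothetical criterion

def check_drift(rdson_t0_mohm: float, readouts_mohm: dict[int, float]) -> None:
    for hours in STRESS_CHECKPOINTS_H:
        measured = readouts_mohm.get(hours)
        if measured is None:
            continue  # device failed earlier or readout not yet taken
        drift = (measured - rdson_t0_mohm) / rdson_t0_mohm
        status = "FAIL" if abs(drift) > MAX_DRIFT_FRACTION else "ok"
        print(f"{hours:>6} h: RDSon {measured:.1f} mOhm, drift {drift:+.1%} [{status}]")

# Example device: gradual RDSon increase that crosses the limit late in life.
check_drift(50.0, {4000: 52.0, 5000: 53.5, 6000: 55.0, 8000: 58.0, 12000: 62.5})
```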

“Reliability predictions are based upon application mission profiles,” said Stephanie Watts Butler, independent consultant and vice president of industry and standards in the IEEE Power Electronics Society. “In some cases, GaN is going into a new application, or being used differently than silicon, and the mission profile needs to be elucidated. This is one area that the industry is focused upon together.”

As an example of this effort, Butler pointed to JEDEC JEP186 spec [3], which provides guidelines for specifying the breakdown voltage for GaN HEMT devices. “JEDEC and IEC both are issuing guideline documents for methods for test and characterization of wide-bandgap devices, as well as reliability and qualification procedures, and datasheet parameters to enable wide bandgap devices, including GaN, to ramp faster with higher quality in the marketplace,” she said.

Electrical tests remain essential for screening out both time-zero and reliability-related defects (e.g., infant mortality and reduced lifetime). This holds true for screening wafers, singulated die, and packaged devices. Test content includes tests specific to GaN HEMT power device performance specifications, as well as tests directed more at defect detection.

Due to inherent device differences, the GaN test list varies in some significant ways from that of Si and SiC power ICs. Assessing GaN health for qualification and manufacturing purposes requires both static and dynamic (DC and AC) tests. A partial list includes zero gate voltage drain leakage current, rise time, fall time, dynamic RDSon, and dielectric integrity tests.

“These are very time-intensive measurement techniques for GaN devices,” said Tom Tran, product manager for power discrete test products at Teradyne. “On top of the static measurement techniques is the concern about trapped charge — both for functionality and efficiency — revealed through dynamic RDSon testing.”

Transient tests are necessary for qualification and production purposes due to the high electron mobility, which is what gives GaN HEMT its high switching speed. “From a test standpoint, static test failures indicate basic processing failures, while transient switching failures indicate marginal or process excursions,” said Amkor’s Vineet Pancholi. “Both tests continue to be important to our customers until process maturity is achieved. With the extended range of voltage, current, and switching operations, mainstream test equipment suppliers have been adding complementary instrumentation capabilities.”
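
A minimal sketch of that static-versus-dynamic screening logic, with hypothetical test names and limit values:

```python
# Minimal sketch: static (DC) failures point to basic processing problems,
# while dynamic-only failures suggest marginal devices or process excursions
# such as trapped charge. All limits and measurement names are hypothetical.
STATIC_LIMITS = {
    "zero_gate_drain_leakage_uA": 1.0,    # upper limit
    "rdson_static_mohm": 60.0,            # upper limit
}
DYNAMIC_LIMITS = {
    "rdson_dynamic_mohm": 75.0,           # upper limit, sensitive to trapped charge
    "rise_time_ns": 8.0,                  # upper limit
    "fall_time_ns": 8.0,                  # upper limit
}

def classify_device(meas: dict[str, float]) -> str:
    static_fail = any(meas[k] > lim for k, lim in STATIC_LIMITS.items())
    dynamic_fail = any(meas[k] > lim for k, lim in DYNAMIC_LIMITS.items())
    if static_fail:
        return "FAIL (static): basic processing failure"
    if dynamic_fail:
        return "FAIL (dynamic): marginal device or process excursion"
    return "PASS"

print(classify_device({
    "zero_gate_drain_leakage_uA": 0.2, "rdson_static_mohm": 48.0,
    "rdson_dynamic_mohm": 81.0, "rise_time_ns": 6.5, "fall_time_ns": 7.0,
}))
```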

And ATE suppliers look to reduce test time, which reduces cost. “Both static and dynamic test requirements drive very high test times,” said Teradyne’s Tran. “But the GaN of today is very different than GaN from a decade ago. We’re able to accelerate this testing just due to the core nature of our ATE architecture. We think there is the possibility of further reducing the cost of test for our customers.”

Tools for process control and quality management
GaN HEMT devices’ reliance on thin-film processes highlights the need to understand the material properties and the nature of the interfaces between each layer. That requires tools for process control, yield management, and failure analysis.

“GaN device performance is highly reflective of the film characteristics used in its manufacture,” said Mike McIntyre, director of software product management at Onto Innovation. “The smallest process variations when it comes to film thickness, film stress, line width or even crystalline make-up, can have a dramatic impact on how the device performs, or even if it is usable in its target market. This lack of tolerance to any variation places a greater burden on engineers to understand the factors that correlate to device performance and its profitability.”

Non-destructive inspection methods vary in throughput and in the level of detail they provide for engineers to make decisions. While optical methods are fast and provide full wafer coverage, they cannot accurately classify chemical or structural defects for engineers and technicians to review. In contrast, destructive methods provide the information needed to truly understand the nature of the defects. For example, conductive atomic force microscopy (AFM) probing remains slow, but it can identify the electrical nature of a defect. And to truly comprehend crystallographic defects and the chemical nature of impurities, engineers can turn to electron microscopy-based methods.

One way to assess thin films is with X-rays. “High resolution X-ray measurements are useful to provide production control of the wafer crystalline quality and defects in the buffer,” said Soitec’s Maleville. “Minor changes in composition of the buffer, barrier, or capping layer, as well as their layer thickness, can result in significant deviations in device performance. Thickness of the layers, in particular the top cap, barrier, and spacer layers, is typically measured by XRD. However, the throughput of XRD systems is low. Alternatively, ellipsometry offers a reasonably good throughput measurement with more data points for both development and production mode scenarios.”

Optical techniques have been the standard for thin-film assessment in the semiconductor industry. Inspection equipment providers have long been on a path of continuous improvement in accuracy, precision and throughput. Better metrology tools help device makers with process control and yield management.

“Recently, we successfully developed a non-destructive on product measurement capability for GaN epi process monitoring,” said Onto’s Hu. “It takes advantage of our advanced optical film experience and our modeling software to simultaneously measure multi-layer epi film thickness, composition, and interface roughness on product wafers.”


Fig. 4: Metrology measurements on GaN for roughness and for Al concentration. Source: Onto Innovation

Assessing the electrical characteristics (2DEG sheet resistance, channel carrier mobility, and carrier concentration) is required for controlling the manufacturing process. A non-destructive assessment would be an improvement over the destructive techniques used today (e.g., SEM). The solutions used for other power ICs do not work for GaN HEMTs, and as of today no commercial solution exists.

Inspection looks for yield impacting defects, as well as defects that affect wafer acceptance in the case of companies that provide engineered substrates.

“Defect inspection for incoming silicon wafers looks for particles, scratches, and other anomalies that might seed imperfections in the subsequent buffer and crystal growth,” said Antonio Mani, business development manager at Thermo Fisher Scientific. “After the growth of the buffer and termination layers, followed by the growth of the doped GaN layers, another set of inspections is carried out. In this case, it is more focused on the detection of cracks, other macroscopic defects (micropipes, carrots), and looking for micro-pits, which are associated to threading dislocations that have survived the buffer layer and are surfacing at the top GaN surface.”

Mani noted that follow-up inspection methods for Si and GaN devices are similar. The difference is the importance of connecting observations back to post-epi results.

More accurate defect libraries would shorten inspection time. “The lack of standardization of surface defect analysis impedes progress,” said Soitec’s Maleville. “Different tools are available on the market, while defect libraries are still being developed essentially by different users. This lack of a globally accepted method and standard defect library for surface defect analysis is slowing down the GaN surface qualification process.”

Whether it involves a manufacturing test failure or a field return, the necessary steps for determining root cause on a problematic packaged part begins with fault isolation. “Given the direct nature of the bandgap of GaN and its operating window in terms of voltage/frequency/power density, classical methods of fault isolation (e.g. optical emission spectroscopy) are forced to focus on different wavelengths and different ranges of excitation of the typical electrical defects,” said Thermo Fisher’s Mani. “Hot carrier pairs are just one example, which highlights the radical difference between GaN and silicon devices.”

In addition to fault isolation there are challenges in creating a device cross-section with focused-ion beam milling methods.

“Several challenges exist in FA for GaN power ICs,” said Zeiss’ Taraci. “In any completed device, in particular, there are numerous materials and layers present for stress mitigation/relaxation and thermal management, depending on whether we are talking enhancement- or depletion-mode devices. Length-scale can be difficult to manage as you are working with these samples, because they have structures of varying dimension present in close proximity. Many of the structures are quite unique to power GaN and can pose challenges themselves in cross-section and analyses. Beam-milling approaches have to be tailored to prevent heavy re-deposition and masking, and are dependent on material, lattice orientation, current, geometry, etc.”

Conclusion
To be successful in bringing new GaN power ICs to new application spaces, engineers and their equipment suppliers need faster process development and lower overall costs. For HEMT devices, that means understanding the deposited layers and their material properties. This requires a host of metrology, inspection, test, and failure analysis steps to comprehend the issues, and to feed data from experiments and qualifications back into process and design improvements.

References

[1] M. Buffolo et al., “Review and Outlook on GaN and SiC Power Devices: Industrial State-of-the-Art, Applications, and Perspectives,” in IEEE Transactions on Electron Devices, March 2024, open access, https://ieeexplore.ieee.org/document/10388225

[2] High electron mobility transistor (HEMT) https://en.wikipedia.org/wiki/High-electron-mobility_transistor

[3] Guideline to specify a transient off-state withstand voltage robustness indicated in datasheets for lateral GaN power conversion devices, JEP186, version 1.0, December 2021. https://www.jedec.org/standards-documents/docs/jep186

Related Stories

Ramping Up Power Electronics For EVs

SiC Growth For EVs Is Stressing Manufacturing

GaN ICs Wanted For Power, EV Markets

Architecting Chips For High-Performance Computing

Power Semiconductors: 2023

The post Driving Cost Lower and Power Higher With GaN appeared first on Semiconductor Engineering.


Why Chiplets Are So Critical In Automotive

By: John Koon
February 20, 2024, 09:10

Chiplets are gaining renewed attention in the automotive market, where increasing electrification and intense competition are forcing companies to accelerate their design and production schedules.

Electrification has lit a fire under some of the biggest and best-known carmakers, which are struggling to remain competitive in the face of very short market windows and constantly changing requirements. Unlike in the past, when carmakers typically ran on five- to seven-year design cycles, the latest technology in vehicles today may well be considered dated within several years. And if they cannot keep up, there is a whole new crop of startups producing cheap vehicles with the ability to update or change out features as quickly as a software update.

But software has speed, security, and reliability limitations, and being able to customize the hardware is where many automakers are now putting their efforts. This is where chiplets fit in, and the focus now is on how to build enough interoperability across large ecosystems to make this a plug-and-play market. The key factors to enable automotive chiplet interoperability include standardization, interconnect technologies, communication protocols, power and thermal management, security, testing, and ecosystem collaboration.

Similar to non-automotive applications at the board level, many design efforts are focusing on a die-to-die approach, which is driving a number of novel design considerations and tradeoffs. At the chip level, the interconnects between various processors, chips, memory, and I/O are becoming more complex due to increased design performance requirements, spurring a flurry of standards activities. Different interconnect and interface types have been proposed to serve varying purposes, while emerging chiplet technologies for dedicated functions — processors, memories, and I/Os, to name a few — are changing the approach to chip design.

“There is a realization by automotive OEMs that to control their own destiny, they’re going to have to control their own SoCs,” said David Fritz, vice president of virtual and hybrid systems at Siemens EDA. “However, they don’t understand how far along EDA has come since they were in college in 1982. Also, they believe they need to go to the latest process node, where a mask set is going to cost $100 million. They can’t afford that. They also don’t have access to talent because the talent pool is fairly small. With all that together comes the realization by the OEMs that to control their destiny, they need a technology that’s developed by others, but which can be combined however needed to have a unique differentiated product they are confident is future-proof for at least a few model years. Then it becomes economically viable. The only thing that fits the bill is chiplets.”

Chiplets can be optimized for specific functions, which can help automakers meet reliability, safety, and security requirements with technology that has been proven across multiple vehicle designs. In addition, they can shorten time to market and ultimately reduce the cost of different features and functions.

Demand for chips has been on the rise for the past decade. According to Allied Market Research, global automotive chip demand will grow from $49.8 billion in 2021 to $121.3 billion by 2031. That growth will attract even more automotive chip innovation and investment, and chiplets are expected to be a big beneficiary.

But the marketplace for chiplets will take time to mature, and it will likely roll out in phases.  Initially, a vendor will provide different flavors of proprietary dies. Then, partners will work together to supply chiplets to support each other, as has already happened with some vendors. The final stage will be universally interoperable chiplets, as supported by UCIe or some other interconnect scheme.

Getting to the final stage will be the hardest, and it will require significant changes. To ensure interoperability, large enough portions of the automotive ecosystem and supply chain must come together, including hardware and software developers, foundries, OSATs, and material and equipment suppliers.

Momentum is building
On the plus side, not all of this is starting from scratch. At the board level, modules and sub-systems always have used onboard chip-to-chip interfaces, and they will continue to do so. Various chip and IP providers, including Cadence, Diode, Microchip, NXP, Renesas, Rambus, Infineon, Arm, and Synopsys, provide off-the-shelf interface chips or IP to create the interface silicon.

The Universal Chiplet Interconnect Express (UCIe) Consortium is the driving force behind the die-to-die, open interconnect standard. The group released its latest UCIe 1.1 specification in August 2023. Board members include Alibaba, AMD, Arm, ASE, Google Cloud, Intel, Meta, Microsoft, NVIDIA, Qualcomm, Samsung, and others. Industry partners are showing widespread support. AIB and Bunch of Wires (BoW) also have been proposed. In addition, Arm just released its own Chiplet System Architecture, along with an updated AMBA spec to standardize protocols for chiplets.

“Chiplets are already here, driven by necessity,” said Arif Khan, senior product marketing group director for design IP at Cadence. “The growing processor and SoC sizes are hitting the reticle limit and the diseconomies of scale. Incremental gains from process technology advances are lower than rising cost per transistor and design. The advances in packaging technology (2.5D/3D) and interface standardization at a die-to-die level, such as UCIe, will facilitate chiplet development.”

Nearly all of the chiplets used today are developed in-house by big chipmakers such as Intel, AMD, and Marvell, because they can tightly control the characteristics and behavior of those chiplets. But there is work underway at every level to open this market to more players. When that happens, smaller companies can begin capitalizing on what the high-profile trailblazers have accomplished so far, and innovating around those developments.

“Many of us believe the dream of having an off-the-shelf, interoperable chiplet portfolio will likely take years before becoming a reality,” said Guillaume Boillet, senior director strategic marketing at Arteris, adding that interoperability will emerge from groups of partners who are addressing the risk of incomplete specifications.

This also is raising the attractiveness of FPGAs and eFPGAs, which can provide a level of customization and updates for hardware in the field. “Chiplets are a real thing,” said Geoff Tate, CEO of Flex Logix. “Right now, a company building two or more chiplets can operate much more economically than a company building near-reticle-size die with almost no yield. Chiplet standardization still appears to be far away. Even UCIe is not a fixed standard yet. Not all agree on UCIe, bare die testing, and who owns the problem when the integrated package doesn’t work, etc. We do have some customers who use or are evaluating eFPGA for interfaces where standards are in flux like UCIe. They can implement silicon now and use the eFPGA to conform to standards changes later.”

There are other efforts supporting chiplets, as well, although for somewhat different reasons — notably, the rising cost of device scaling and the need to incorporate more features into chips, which are reticle-constrained at the most advanced nodes. But those efforts also pave the way for chiplets in automotive, and there is strong industry backing to make this all work. For example, under the sponsorship of SEMI, ASME, and three IEEE societies, the new Heterogeneous Integration Roadmap (HIR) examines microelectronics design, materials, and packaging issues to develop a roadmap for the semiconductor industry. Its current focus includes 2.5D, 3D-ICs, wafer-level packaging, integrated photonics, MEMS and sensors, and system-in-package (SiP), as well as aerospace, automotive, and other application areas.

At the recent Heterogeneous Integration Global Summit 2023, representatives from AMD, Applied Materials, ASE, Lam Research, MediaTek, Micron, Onto Innovation, TSMC, and others demonstrated strong support for chiplets. Another group that supports chiplets is the Chiplet Design Exchange (CDX) working group, which is part of the Open Domain Specific Architecture (ODSA) and the Open Compute Project Foundation (OCP). The CDX charter focuses on the various characteristics of chiplets and chiplet integration, including electrical, mechanical, and thermal design exchange standards for 2.5D stacked and 3D integrated circuits (3D-ICs). Its representatives include Ansys, Applied Materials, Arm, Ayar Labs, Broadcom, Cadence, Intel, Macom, Marvell, Microsemi, NXP, Siemens EDA, Synopsys, and others.

“The things that automotive companies want in terms of what each chiplet does functionally are still in an upheaval mode,” Siemens’ Fritz noted. “One extreme has these problems, the other extreme has those problems. This is the sweet spot. This is what’s needed. And these are the types of companies that can go off and do that sort of work, and then you could put them together. Then this interoperability thing is not a big deal. The OEM can make it too complex by saying, ‘I have to handle that whole spectrum of possibilities.’ The alternative is that they could say, ‘It’s just like a high-speed PCIe. If I want to communicate from one to the other, I already know how to do that. I’ve got drivers that are running my operating system.’ That would solve an awful lot of problems, and that’s where I believe it’s going to end up.”

One path to universal chiplet development?
Moving forward, chiplets are a focal point for both the automotive and chip industries, and that will involve everything from chiplet IP to memory interconnects and customization options and limitations.

For example, in November 2023 Renesas Electronics announced plans for its next-generation SoCs and MCUs targeting all major applications across the automotive digital domain. That roadmap included advance information about its fifth-generation R-Car SoC for high-performance applications, which uses in-package chiplet integration technology intended to give automotive engineers greater flexibility to customize their designs.

Renesas noted that if more AI performance is required in Advanced Driver Assistance Systems (ADAS), engineers will have the capability to integrate AI accelerators into a single chip. The company said this roadmap comes after years of collaboration and discussions with Tier 1 and OEM customers, which have been clamoring for a way to accelerate development without compromising quality, including designing and verifying the software even before the hardware is available.

“Due to the ever-increasing need for compute on demand, and the increasing need for higher levels of autonomy in the cars of tomorrow, we see challenges in monolithic solutions scaling and providing the performance needs of the market in the upcoming years,” said Vasanth Waran, senior director for SoC Business & Strategies at Renesas. “Chiplets allow the compute solutions to scale above and beyond the needs of the market.”

Renesas announced plans to create a chiplet-based product family specifically targeted at the automotive market starting in 2025.

Standard interfaces allow for SoC customization
It is not entirely clear how much overlap there will be between standard processors, which is where most chiplets are used today, and chiplets developed for automotive applications. But the underlying technologies and developments certainly will build off each other as this technology shifts into new markets.

“Whether it is an AI accelerator or ADAS automotive application, customers need standard interface IP blocks,” noted David Ridgeway, senior product manager, IP accelerated solutions group at Synopsys. “It is important to provide fully verified IP subsystems around their IP customization requirements to support the subsystem components used in the customers’ SoCs. When I say customization, you might not realize how customizable IP has become over the course of the last 10 to 20 years, on the PHY side as well as the controller side. For example, PCI Express has gone from PCIe Gen 3 to Gen 4 to Gen 5 and now Gen 6. The controller can be configured to support multiple bifurcation modes of smaller link widths, including one x16, two x8, or four x4. Our subsystem IP team works with customers to ensure all the customization requirements are met. For AI applications, signal and power integrity is extremely important to meet their performance requirements. Almost all our customers are seeking to push the envelope to achieve the highest memory bandwidth speeds possible so that their TPU can process many more transactions per second. Whenever the applications are cloud computing or artificial intelligence, customers want the fastest response rate possible.”
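The bifurcation Ridgeway describes is essentially a question of how a controller's fixed lane budget is carved into independent links. The short Python sketch below is a purely hypothetical illustration of that idea (the class and names are invented for this example and are not Synopsys IP interfaces): a 16-lane controller can be configured as one x16, two x8, or four x4 links, and a mode is only valid if its link widths add up to the lane budget.

```python
# Hypothetical sketch of PCIe link-width bifurcation (not a vendor API):
# a 16-lane controller split into one x16, two x8, or four x4 links.

from dataclasses import dataclass

TOTAL_LANES = 16  # lane budget of the controller in this example

@dataclass
class BifurcationMode:
    name: str
    link_widths: tuple  # lane width of each independent link

    def is_valid(self) -> bool:
        # A mode is valid only if the link widths consume the full lane budget.
        return sum(self.link_widths) == TOTAL_LANES

MODES = [
    BifurcationMode("1x16", (16,)),
    BifurcationMode("2x8", (8, 8)),
    BifurcationMode("4x4", (4, 4, 4, 4)),
]

for mode in MODES:
    print(mode.name, "valid:", mode.is_valid())
```

The same bookkeeping applies to wider or narrower controllers; only the lane budget and the allowed width combinations change.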

Fig 1: IP blocks including processor, digital, PHY, and verification help developers implement the entire SoC. Source: Synopsys

Optimizing PPA serves the ultimate goal of increasing efficiency, and this makes chiplets particularly attractive in automotive applications. As UCIe matures, it is expected to improve interface performance dramatically. For example, UCIe can deliver a shoreline bandwidth of 28 to 224 GB/s/mm in a standard package, and 165 to 1317 GB/s/mm in an advanced package, a 20- to 100-fold improvement. Bringing latency down from 20ns to 2ns is a 10-fold improvement, and power efficiency of roughly 0.5 pJ/b (standard package) and 0.25 pJ/b (advanced package) is about 10 times better. The key is shortening the interface distance whenever possible.
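As a back-of-envelope check on those figures, interface power per millimeter of die shoreline is simply bandwidth multiplied by energy per bit. The short Python sketch below runs that arithmetic with the peak numbers quoted above (assuming GB/s means gigabytes per second); it is a rough illustration, not a characterization of any specific UCIe implementation.

```python
# Back-of-envelope: shoreline power (W/mm) = shoreline bandwidth x energy per bit,
# using the peak UCIe figures quoted in the article.

def shoreline_power_w_per_mm(bandwidth_gb_per_s_mm: float, energy_pj_per_bit: float) -> float:
    bits_per_s = bandwidth_gb_per_s_mm * 1e9 * 8   # GB/s/mm -> bits/s/mm (bytes assumed)
    joules_per_bit = energy_pj_per_bit * 1e-12     # pJ/b -> J/b
    return bits_per_s * joules_per_bit             # watts per mm of shoreline

# Standard package: up to 224 GB/s/mm at 0.5 pJ/b -> about 0.9 W/mm at peak
print(round(shoreline_power_w_per_mm(224, 0.5), 2), "W/mm (standard package, peak)")

# Advanced package: up to 1317 GB/s/mm at 0.25 pJ/b -> about 2.6 W/mm at peak
print(round(shoreline_power_w_per_mm(1317, 0.25), 2), "W/mm (advanced package, peak)")
```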

To optimize chiplet designs, the UCIe Consortium provides some suggestions:

  • Careful planning of architectural cut-lines (i.e., chiplet boundaries), optimizing for power, latency, silicon area, and IP reuse. For example, confining the leading-edge process node to the one chiplet that needs it while re-using other chiplets on older nodes can reduce cost and development time (see the sketch after this list).
  • Thermal and mechanical packaging constraints need to be planned out for package thermal envelopes, hot spots, chiplet placements, and I/O routing and breakouts.
  • Process nodes need to be carefully selected, particularly in the context of the associated power delivery scheme.
  • Test strategies for chiplets and packaged/assembled parts need to be developed up front to ensure silicon issues are caught at the chiplet-level testing phase rather than after assembly into a package.
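As a minimal illustration of the first point, the Python sketch below models a hypothetical three-chiplet partition in which only the compute chiplet sits on a leading-edge node while the I/O and analog chiplets are re-used from older nodes. All names, node values, and the 7nm threshold are assumptions chosen for the example, not recommendations from the UCIe Consortium.

```python
# Hypothetical cut-line sketch: flag which chiplets in a partition carry
# leading-edge mask costs, and which are re-used from prior designs.

from dataclasses import dataclass

@dataclass
class Chiplet:
    name: str
    process_node_nm: int
    reused_from_prior_design: bool

def leading_edge_chiplets(chiplets, threshold_nm=7):
    # Only chiplets at or below the threshold node are treated as leading-edge here.
    return [c.name for c in chiplets if c.process_node_nm <= threshold_nm]

partition = [
    Chiplet("compute", 5, reused_from_prior_design=False),
    Chiplet("io_serdes", 16, reused_from_prior_design=True),
    Chiplet("analog_power", 28, reused_from_prior_design=True),
]

print("Leading-edge node required for:", leading_edge_chiplets(partition))
print("Re-used chiplets:", [c.name for c in partition if c.reused_from_prior_design])
```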

Conclusion
The idea of standardizing die-to-die interfaces is catching on quickly, but the path to get there will take time, effort, and a lot of collaboration among companies that rarely talk with each other. Building a vehicle takes one determined carmaker. Building a vehicle with chiplets requires an entire ecosystem of developers, foundries, OSATs, and material and equipment suppliers working together.

Automotive OEMs are experts at putting systems together and at finding innovative ways to cut costs. But it remains to be seen how quickly and effectively they can build and leverage an ecosystem of interoperable chiplets to shrink design cycles, improve customization, and adapt to a world in which leading-edge technology may be outdated by the time it is fully designed, tested, and available to consumers.

— Ann Mutschler contributed to this report.

Related Reading
Automotive Relationships Shifting With Chiplets
As the automotive ecosystem balances the best approaches for designing in increasingly advanced features, how companies interact is still evolving.

The post Why Chiplets Are So Critical In Automotive appeared first on Semiconductor Engineering.
