
Achieving Zero Defect Manufacturing Part 2: Finding Defect Sources

By Prasad Bachiraju
6 August 2024, 09:07

Semiconductor manufacturing creates a wealth of data – from materials, products, factory subsystems and equipment. But how do we best utilize that information to optimize processes and reach the goal of zero defect manufacturing?

This is a topic we first explored in our previous blog, “Achieving Zero Defect Manufacturing Part 1: Detect & Classify.” In it, we examined real-time defect classification at the defect, die and wafer level. In this blog, the second in our three-part series, we will discuss how to use root cause analysis to determine the source of defects. For starters, we will address the software tools needed to properly conduct root cause analysis for a faster understanding of visual, non-visual and latent defect sources.

About software

The software platform fabs choose impacts how well users are able to integrate data, conduct database analytics, and perform server-side and real-time analytics. Manufacturers need a platform that can scale with data volume, data type, and multi-site integration. In addition, all of this data – whether it comes from metrology, inspection or testing – must be normalized before fabs can apply predictive modeling and machine learning-based analytics to find the root cause of defects and failures. This search, however, goes beyond a simple examination of process steps and tools; manufacturers also need a clear understanding of each device’s genealogy. Fabs should also employ an AI-based yield optimizer capable of running multiple models and suggesting optimization measures that can be taken in the factory to improve the process.
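To make the normalization step concrete, here is a minimal sketch in Python (pandas) that merges wafer-level records from three hypothetical sources and scales the numeric features to comparable units. The source names, columns, and values are invented for illustration, not taken from any particular vendor platform.

```python
import pandas as pd

# Hypothetical wafer-level exports from three sources; the column
# names and values are invented, not from any vendor platform.
metrology  = pd.DataFrame({"wafer_id": ["W01", "W02"], "cd_nm": [14.2, 14.9]})
inspection = pd.DataFrame({"WaferID": ["W01", "W02"], "defect_count": [3, 41]})
etest      = pd.DataFrame({"wafer": ["W01", "W02"], "yield_pct": [97.1, 88.4]})

# Step 1: normalize the join keys so the sources can be merged at all.
inspection = inspection.rename(columns={"WaferID": "wafer_id"})
etest = etest.rename(columns={"wafer": "wafer_id"})
merged = metrology.merge(inspection, on="wafer_id").merge(etest, on="wafer_id")

# Step 2: scale numeric features to zero mean / unit variance so a model
# is not dominated by whichever source has the largest raw units.
features = merged[["cd_nm", "defect_count"]]
scaled = (features - features.mean()) / features.std()
merged[["cd_nm_z", "defects_z"]] = scaled.to_numpy()
print(merged)
```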

Now that we have discussed software needs, we will turn our attention to two use cases to further our examination of root cause analysis in zero defect manufacturing.

Root Cause Case No. 1

The first root cause value case we would like to discuss involves the integration of wafer probe, photoluminescence and epitaxial (epi) data. Previously, integrating these three kinds of data was not possible because the identifiers for wafers and lots – pre- and post-epi – were generally not linked. Wafers and lots were often identified by entirely different names before and after the epi step. Understandably, this was a huge hindrance to advancing the goal of zero defect manufacturing, because the impact of the epi process on yield was not detected in a timely manner, resulting in higher defectivity and yield loss.

But the challenge is not as simple as identification and naming practices. Typical wafer ID trackers are not applied prior to the post-epi step because of technical and logistical constraints. The solution is for fabs to employ defect and yield analytics software with genealogy support that links data from the epi and pre-epi processes to post-epi processes. The real innovation occurs when the genealogical information is normalized and correlated with electrical test data. Once integrated, this data offers users a more complete understanding of where yield-limiting events are occurring.

Fig. 1: Photoluminescence map (left) and electrical test performance by epi tool (right).

For example, let us consider the following scenario: in figure 1 (left) we show a group of dies on the upper left edge of the wafer that negatively affect performance. Through more traditional measures, this pocket of defectivity may have gone unnoticed, allowing bad die to move forward in the process. But by applying integrated data, genealogical information and electrical test data, this trouble-plagued area was traced to a specific epi tool and chamber (figure 1, right), and the defective material was prevented from going forward in the process. Just as significant, with the right software platform this approach enables root cause analysis to be conducted in minutes, not days.
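A rough sketch of the genealogy join behind this kind of analysis, assuming a simple pre-epi-to-post-epi mapping table; all IDs, tools, and fail rates below are invented. The point is that once the mapping exists, epi history and electrical test results can be grouped by tool and chamber, as in figure 1 (right).

```python
import pandas as pd

# Invented IDs: pre- and post-epi lots were named independently, so a
# genealogy table is the only link between them.
pre_epi = pd.DataFrame({"pre_id": ["A-07", "A-08"],
                        "epi_tool": ["EPI1", "EPI2"],
                        "epi_chamber": ["CH2", "CH1"]})
genealogy = pd.DataFrame({"pre_id": ["A-07", "A-08"],
                          "post_id": ["X-113", "X-114"]})
etest = pd.DataFrame({"post_id": ["X-113", "X-114"],
                      "fail_rate": [0.031, 0.002]})

# Chain the merges: epi history -> genealogy -> electrical test.
linked = pre_epi.merge(genealogy, on="pre_id").merge(etest, on="post_id")

# Grouping by tool/chamber surfaces the kind of signal in figure 1 (right).
print(linked.groupby(["epi_tool", "epi_chamber"])["fail_rate"].mean())
```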

Now, on to the second use case, in which we look at how to solve problems within the supply chain.

Root Cause Case No. 2

During final test and measurement, chips sometimes fail. In many cases, the faulty chips had previously been judged good and were advanced in the process, where they were combined with other chips coming from different products, lots, or wafers. The important thing here is to understand why this happens.

When a yield software platform includes a genealogy model, fabs can trace bad chips back to their source lots and wafers, and then run this information through pattern analysis software. In one particular scenario (figure 2), users applied pattern analysis software to discover that all of the defective die arose from a spin coater issue – in this case, a leak that degraded the underbump metallization area following routine preventive maintenance.

To compensate for this, the team used integrated analytics to create a fault detection and classification (FDC) model to identify similar circumstances going forward. In this case, the FDC model monitors the suction power of the spin coater. If suction power is above the set limit for more than 10 consecutive samples, an alarm is triggered and an appropriate Out of Control Action Plan (OCAP) process is executed, including notification of the tool owner.
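The alarm rule described above reduces to a consecutive-samples counter. A minimal sketch, with an invented limit and suction-power trace; a production FDC system would run this logic on streaming tool data:

```python
def fdc_monitor(samples, limit, max_consecutive=10):
    """Return the sample index at which more than `max_consecutive`
    successive readings exceed `limit`, else None."""
    run = 0
    for i, value in enumerate(samples):
        run = run + 1 if value > limit else 0
        if run > max_consecutive:
            return i
    return None

# Hypothetical suction-power trace with a sustained excursion.
trace = [0.80, 0.90, 0.85] + [1.40] * 15
alarm_at = fdc_monitor(trace, limit=1.00)
if alarm_at is not None:
    print(f"OCAP triggered at sample {alarm_at}: notify the tool owner")
```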

Fig. 2: Proactive zero defect manufacturing at-a-glance.

The above explains how fabs are able to turn reactive root cause analytics into proactive monitoring. With such an approach, manufacturers can monitor for this and other issues and avoid advancing future defective die. Furthermore, 40 or more defect signatures can be monitored inline. And if these defects are missed at the process level, they can be identified at the inspection level or post-inspection, avoiding hundreds of issues further along in the process.

Conclusion

Zero defect manufacturing is not so much a goal as a commitment to root out defects before they happen. To accomplish this, fabs need a wealth of data from the entire process to achieve a clear picture of what is going wrong, where it is going wrong, and why. In this blog, we offered specific scenarios where root cause analysis was used to find defects across wafers and dies. However, these are just a few examples of how software can be used to find elusive defects. It can be beneficial in many different areas across the entire process, with each application further strengthening a fab’s efforts to employ a zero defect manufacturing approach, increasing yield and meeting the stringent requirements of some of the industry’s most advanced customers.

In our next blog, we will discuss how to detect dormant defects, use feedback and feedforward measures, and monitor the health of process control equipment. We hope you join us as we continue to explore methods for achieving zero defect manufacturing.


Are You Ready For HBM4? A Silicon Lifecycle Management (SLM) Perspective

By Faisal Goriawalla

Many factors are driving system-on-chip (SoC) developers to adopt multi-die technology, in which multiple dies are stacked in a three-dimensional (3D) configuration. Multi-die systems may make power and thermal issues more complex, and they have required major innovations in electronic design automation (EDA) implementation and test tools. These challenges are more than offset by the advantages over traditional 2D designs, including:

  • Reducing overall area
  • Achieving much higher pin densities
  • Reusing existing proven dies
  • Mixing heterogeneous die technologies
  • Quickly creating derivative designs for new applications

One of the most common uses of multi-die design is the interconnection of memory stacks and processors such as CPUs and GPUs. High Bandwidth Memory (HBM) is a standard interface specifically for 3D-stacked DRAM dies. It was defined by the JEDEC Solid State Technology Association in 2013, followed by HBM2 in 2016 and HBM3 in 2022. Many multi-die projects have used this standard for caches in advanced CPUs and other system-on-chip (SoC) designs used in high-end applications such as data centers, high-performance computing (HPC), and artificial intelligence (AI) processing.

Fig. 1: Example of a current HBM-based SoC.

JEDEC recently announced that it is nearing completion of HBM4 and published preliminary specifications. HBM4 is being developed to enhance data processing rates while delivering higher bandwidth, lower power consumption, and increased capacity per die/stack. The initial agreement calls for speed bins up to 6.4 Gbps, although this will increase as memory vendors develop new chips and refine the technology. This speed will benefit applications that require efficient handling of large datasets and complex calculations.

HBM4 introduces a doubled channel count per stack over HBM3. The new version of the standard features a 2048-bit memory interface, as compared to 1024 bits in previous versions, as shown in figure 1. The intent is to double the number of bits without increasing the footprint of HBM memory stacks, thus doubling the interconnection density as well.

Different memory configurations will require various interposers to accommodate the differing footprints. HBM4 will specify 24 Gb and 32 Gb layers, with options for 4-high, 8-high, 12-high and 16-high TSV stacks. As an example configuration, a 16-high stack based on 32 Gb layers will offer a capacity of 64 GB, which means that a processor with four memory modules can support 256 GB of memory with a peak bandwidth of 6.56 TB/s using an 8,192-bit interface.
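The quoted capacity and bandwidth figures follow directly from the preliminary numbers above. A quick check of the arithmetic:

```python
# Checking the HBM4 configuration arithmetic quoted above, using the
# preliminary figures (32 Gb layers, 16-high stacks, 6.4 Gbps pins,
# 2048-bit interface per stack, four stacks per processor).
layer_gb   = 32      # Gb per DRAM layer
stack_high = 16      # layers in a 16-high TSV stack
stacks     = 4       # memory modules attached to the processor
pin_gbps   = 6.4     # per-pin data rate, Gb/s
iface_bits = 2048    # interface width per stack

capacity_gb  = layer_gb * stack_high / 8         # 512 Gb -> 64 GB per stack
total_gb     = capacity_gb * stacks              # 256 GB per processor
total_bits   = iface_bits * stacks               # 8,192-bit interface
bandwidth_tb = pin_gbps * total_bits / 8 / 1000  # GB/s -> TB/s

print(f"{capacity_gb:.0f} GB per stack, {total_gb:.0f} GB total")
# 6.4 Gb/s x 8,192 pins / 8 bits = 6,553.6 GB/s, the ~6.56 TB/s quoted above.
print(f"{total_bits}-bit interface, {bandwidth_tb:.2f} TB/s peak")
```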

The move from HBM3 to HBM4 will require further evolution in multi-die support across a wide range of EDA tools. The 2048-bit memory interface requires a significant increase in the number of through-silicon vias (TSVs) routed through a memory stack. This will mean shrinking the external bump pitch as the total number of micro bumps increases significantly. In addition, support for 16-high TSV stacks brings new complexity in wiring up an even larger number of DRAM dies without defects.

Test challenges are likely to be a dominant part of the transition. Any signal integrity issues after assembly and multi-die packaging become more difficult to diagnose and debug since probing is not feasible. Further, some defects may marginally pass production/manufacturing test but subsequently fail in the field. Thus, test of the future HBM4-based subsystem needs to be accomplished not just at production test but also in-system to account for aging-related defects.

Being able to monitor real-time data during mission mode operation in the field is greatly preferable to having to take the system offline for unplanned service. This “predictive maintenance” allows the end user to be proactive rather than reactive. HBM provides capabilities for in-system repair, for example swapping out a bad lane. Even if a defect requires physical hardware repair, detecting it before system failure enables scheduled maintenance rather than unplanned downtime.

As shown in figure 1, HBM systems typically have a base die that includes an HBM controller, a basic/fixed test engine provided by the DRAM vendor, and Direct Access (DA) ports. The new industry trend is for the base die to be manufactured on a standard logic process rather than the DRAM process. The SoC designer should include in the base die a flexible built-in self-test (BIST) engine that allows different algorithms to be used to trade off high coverage versus test time depending on the scenario.

This engine must be programmable to handle different latencies, address ranges, and timing of test operations that vary across DRAM vendors. It may also need to support post-package repair (PPR) for HBM DRAM to delay any “truck roll-out” for in-field service. The diagnostics performed by the BIST engine must be precise, showing the failing bank, row address, column address, etc. if there is a defect detected in the DRAM stack. Figure 2 shows an example.

Fig. 2: Example fault diagnosis for HBM stack.
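To make the idea of a programmable march-style BIST concrete, here is a toy sketch of a simplified MATS+ sequence run against a simulated memory array, reporting the failing bank, row, and column as described above. The array dimensions, address decode, and stuck-at fault location are all invented, and a real engine runs in hardware, not Python.

```python
# Toy march-style memory BIST (a simplified MATS+ sequence) over a
# simulated array. Array dimensions, address decode, and the stuck-at
# fault location are invented for illustration.
BANKS, ROWS, COLS = 4, 8, 16
SIZE = BANKS * ROWS * COLS
STUCK_AT_0 = {137}            # hypothetical defective cell

mem = [0] * SIZE

def write(addr, value):
    # The defective cell ignores writes and always holds 0.
    mem[addr] = 0 if addr in STUCK_AT_0 else value

def decode(addr):
    bank, rest = divmod(addr, ROWS * COLS)
    row, col = divmod(rest, COLS)
    return bank, row, col

failures = set()
for a in range(SIZE):              # M0: ascending, write 0
    write(a, 0)
for a in range(SIZE):              # M1: ascending, read 0, write 1
    if mem[a] != 0:
        failures.add(a)
    write(a, 1)
for a in reversed(range(SIZE)):    # M2: descending, read 1, write 0
    if mem[a] != 1:
        failures.add(a)
    write(a, 0)

for a in sorted(failures):         # precise diagnosis, as the text describes
    bank, row, col = decode(a)
    print(f"fail @ addr {a}: bank={bank} row={row} col={col}")
```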

As an industry leader in multi-die EDA and IP solutions, Synopsys provides all the technology needed for HBM manufacturing yield optimization and in-field silicon health monitoring. Signal Integrity Monitors (SIMs) are embedded in physical layer (PHY) IP blocks for on-demand signal quality measurement for interconnects. This allows users to create 1D eye diagrams for interconnect signals during both production test and in-field operation. SIMs measure timing margins, enable HBM lane test/repair, and mitigate against silent data corruption (SDC), part of an effective silicon lifecycle management (SLM) solution.

Synopsys SMS ext-RAM is a programmable and synthesizable engine that performs test, repair, and diagnostics for memory systems, including HBM. SMS ext-RAM ensures high test coverage and supports power-on self-test (POST) with the flexibility to run custom memory algorithms in-field. As shown in figure 2, it detects a wide range of defects in memory dies, including stuck-at faults, read destructive faults, write destructive faults, deceptive read destructive faults, and row hammering.

A real-world case study of a project using HBM with the Synopsys solutions is available. These solutions are scaling to support the emerging HBM4 standard, ensuring continued success.


Metrology And Inspection For The Chiplet Era

By Gregory Haley
6 August 2024, 09:03

New developments and innovations in metrology and inspection will enable chipmakers to identify and address defects faster and with greater accuracy than ever before, all of which will be required at future process nodes and in densely-packed assemblies of chiplets.

These advances will affect both front-end and back-end processes, providing increased precision and efficiency, combined with artificial intelligence/machine learning and big data analytics. These kinds of improvements will be crucial for meeting the industry’s changing needs, enabling deeper insights and more accurate measurements at rates suitable for high-volume manufacturing. But gaps still need to be filled, and new ones are likely to show up as new nodes and processes are rolled out.

“As semiconductor devices become more complex, the demand for high-resolution, high-accuracy metrology tools increases,” says Brad Perkins, product line manager at Nordson Test & Inspection. “We need new tools and techniques that can keep up with shrinking geometries and more intricate designs.”

The shift to high-NA EUV lithography (0.55 NA EUV) at the 2nm node and beyond is expected to exacerbate stochastic variability, demanding more robust metrology solutions on the front end. Traditional critical dimension (CD) measurements alone are insufficient for the level of analysis required. Comprehensive metrics, including line-edge roughness (LER), line-width roughness (LWR), local edge-placement error (LEPE), and local CD uniformity (LCDU), alongside CD measurements, are necessary for ensuring the integrity and performance of advanced semiconductor devices. These metrics require sophisticated tools that can capture and analyze tiny variations at the nanometer scale, where even slight discrepancies can significantly impact device functionality and yield.
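For intuition, LER and LWR are conventionally reported as three standard deviations of edge-position deviations after removing the best-fit straight edge. A minimal sketch with synthetic edge measurements; the 3-sigma convention is standard, but the data below is invented:

```python
import numpy as np

# Synthetic edge data for one printed line: 200 samples along a
# nominally straight 12 nm line, with 0.4 nm of edge noise.
rng = np.random.default_rng(0)
y = np.linspace(0, 100, 200)                  # position along the line, nm
left  = 0.0  + rng.normal(0, 0.4, y.size)     # left-edge deviation, nm
right = 12.0 + rng.normal(0, 0.4, y.size)     # right edge of a 12 nm line

def ler_3sigma(edge, y):
    """3-sigma roughness of the residuals after removing the best-fit
    straight edge (the common LER convention)."""
    slope, intercept = np.polyfit(y, edge, 1)
    residuals = edge - (slope * y + intercept)
    return 3 * residuals.std()

print(f"LER (left edge): {ler_3sigma(left, y):.2f} nm")
print(f"LWR: {3 * (right - left).std():.2f} nm")  # width roughness
```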

“Metrology is now at the forefront of yield, especially considering the current demands for DRAM and HBM,” says Hamed Sadeghian, president and CEO of Nearfield Instruments. “The next generations of HBMs are approaching a stage where hybrid bonding will be essential due to the increasing stack thickness. Hybrid bonding requires high resolutions in vertical directions to ensure all pads, and the surface height versus the dielectric, remain within nanometer-scale process windows. Consequently, the tools used must be one order of magnitude more precise.”

To address these challenges, companies are developing hybrid metrology systems that combine various measurement techniques for a comprehensive data set. Integrating scatterometry, electron microscopy, and/or atomic force microscopy allows for more thorough analysis of critical features. Moreover, AI and ML algorithms enhance the predictive capabilities of these tools, enabling process adjustments.

“Our customers who are pushing into more advanced technology nodes are desperate to understand what’s driving their yield,” says Ronald Chaffee, senior director of applications engineering at NI/Emerson Test & Measurement. “They may not know what all the issues are, but they are gathering all possible data — metrology, AEOI, and any measurable parameters — and seeking correlations.”

Traditional methods for defect detection, pattern recognition, and quality control typically used spatial pattern-recognition modules and wafer image-based algorithms to address wafer-level issues. “However, we need to advance beyond these techniques,” says Prasad Bachiraju, senior director of business development at Onto Innovation. “Our observations show that about 20% of wafers have systematic issues that can limit yield, with nearly 4% being new additions. There is a pressing need for advanced metrology for in-line monitoring to achieve zero-defect manufacturing.”

Several companies recently announced metrology innovations to provide more precise inspections, particularly for difficult-to-see areas, edge effects, and highly reflective surfaces.

Nordson unveiled its AMI SpinSAM acoustic rotary scan system. The system represents a significant departure from traditional raster scan methods, utilizing a rotational scanning approach. Rather than moving the wafer in an x,y pattern relative to a stationary lens, the wafer spins, similar to a record player. This reduces motion over the wafer and increases inspection speed, negating the need for image stitching and improving image quality.

“For years, we’d been trying to figure out this technique, and it’s gratifying to finally achieve it. It’s something we’ve always thought would be incredibly beneficial,” says Perkins. “The SpinSAM is designed primarily to enhance inspection speed and efficiency, addressing the common industry demand for more product throughput and better edge inspection capabilities.”

Meanwhile, Nearfield Instruments introduced a multi-head atomic force microscopy (AFM) system called QUADRA. It is a high-throughput, non-destructive metrology tool for HVM that features a novel multi-miniaturized AFM head architecture. Nearfield claims the parallel independent multi-head scanner can deliver a 100-fold throughput advantage versus conventional single-probe AFM tools. This architecture allows for precise measurements of high-aspect-ratio structures and complex 3D features, critical for advanced memory (3D NAND, DRAM, HBM) and logic processes.


Fig. 1: Image capture comparison of standard AFM and multi-head AFM. Source: Nearfield Instruments

In April, Onto Innovation debuted an advancement in subsurface defect inspection technology with the release of its Dragonfly G3 inspection system. The new system allows for 100% wafer inspection, targeting subsurface defects that can cause yield losses, such as micro-cracks and other hidden flaws that may lead to entire wafers breaking during subsequent processing steps. The Dragonfly G3 utilizes novel infrared (IR) technology combined with specially designed algorithms to detect these defects, which previously were undetectable in a production environment. This new capability supports HBM, advanced logic, and various specialty segments, and aims to improve final yield and cost savings by reducing scrapped wafers and die stacks.

More recently, researchers at the Paul Scherrer Institute announced a high-performance X-ray tomography technique using burst ptychography. This new method can provide non-destructive, detailed views of nanostructures as small as 4nm in materials like silicon and metals at a fast acquisition rate of 14,000 resolution elements per second. The tomographic back-propagation reconstruction allows imaging of samples up to ten times larger than the conventional depth of field.

There are other technologies and techniques for improving metrology in semiconductor manufacturing, as well, including wafer-level ultrasonic inspection, which involves flipping the wafer to inspect from the other side. New acoustic microscopy techniques, such as scanning acoustic microscopy (SAM) and time-of-flight acoustic microscopy (TOF-AM), enable the detection and characterization of very small defects, such as voids, delaminations, and cracks within thin films and interfaces.

“We used to look at 80 to 100 micron resist films, but with 3D integrated packaging, we’re now dealing with films that are 160 to 240 microns—very thick resist films,” says Christopher Claypool, senior application scientist at Bruker OCD. “In TSVs and microbumps, the dominant technique today is white light interferometry, which provides profile information. While it has some advantages, its throughput is slow, and it’s a focus-based technique. This limitation makes it difficult to measure TSV structures smaller than four or five microns in diameter.”

Acoustic metrology tools equipped with the newest generation of focal length transducers (FLTs) can focus acoustic waves with precision down to a few nanometers, allowing for non-destructive detailed inspection of edge defects and critical stress points. This capability is particularly useful for identifying small-scale defects that might be missed by other inspection methods.

The development and integration of smart sensors in metrology equipment is instrumental in collecting the vast amounts of data needed for precise measurement and quality control. These sensors are highly sensitive and capable of operating under various environmental conditions, ensuring consistent performance. One significant advantage of smart sensors is their ability to facilitate predictive maintenance. By continuously monitoring the health and performance of metrology equipment, these sensors can predict potential failures and schedule maintenance before significant downtime occurs. This capability enhances the reliability of the equipment, reduces maintenance costs, and improves overall operational efficiency.

Smart sensors also are being developed to integrate seamlessly with metrology systems, offering real-time data collection and analysis. These sensors can monitor various parameters throughout the manufacturing process, providing continuous feedback and enabling quick adjustments to prevent defects. Smart sensors, combined with big data platforms and advanced data analytics, allow for more efficient and accurate defect detection and classification.
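As a simple illustration of the predictive-maintenance idea, the sketch below applies an exponentially weighted moving average (EWMA) to a hypothetical tool-sensor signal and flags slow drift before it becomes a hard failure. The sensor, baseline, and tolerance are invented; real systems would use richer models.

```python
# EWMA drift monitor for a hypothetical metrology-tool sensor. The
# baseline, tolerance, and readings are invented for illustration.
def ewma_monitor(readings, baseline, alpha=0.2, tolerance=0.05):
    """Return the sample index at which the smoothed signal drifts
    more than `tolerance` (fractional) from baseline, else None."""
    ewma = baseline
    for i, x in enumerate(readings):
        ewma = alpha * x + (1 - alpha) * ewma
        if abs(ewma - baseline) / baseline > tolerance:
            return i
    return None

# Stable for 10 samples, then a slow upward drift of 1% per sample.
readings = [1.00] * 10 + [1.00 + 0.01 * k for k in range(1, 21)]
flag = ewma_monitor(readings, baseline=1.00)
print(f"maintenance flagged at sample {flag}" if flag is not None
      else "no drift detected")
```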

Critical stress points

A persistent challenge in semiconductor metrology is the identification and inspection of defects at critical stress points, particularly at the silicon edges. For bonded wafers, it’s at the outer ring of the wafer. For chip-on-wafer packaging, it’s at the edge of the chips. These edge defects are particularly problematic because they occur at the highest stress points from the neutral axis, making them more prone to failures. As semiconductor devices continue to involve more intricate packaging techniques, such as chip-on-wafer and wafer-level packaging, the focus on edge inspection becomes even more critical.

“When defects happen in a factory, you need imaging that can detect and classify them,” says Onto’s Bachiraju. “Then you need to find the root causes of where they’re coming from, and for that you need the entire data integration and a big data platform to help with faster analysis.”

Another significant challenge in semiconductor metrology is ensuring the reliability of known good die (KGD), especially as advanced packaging techniques and chiplets become more prevalent. Ensuring that every chip/chiplet in a stacked die configuration is of high quality is essential for maintaining yield and performance, but the speed of metrology processes is a constant concern. This leads to a balancing act between thoroughness and efficiency. The industry continuously seeks to develop faster machines that can handle the increasing volume and complexity of inspections without compromising accuracy. In this race, innovations in data processing and analysis are key to achieving quicker results.

“Customers would like, generally, 100% inspection for a lot of those processes because of the known good die, but it’s cost-prohibitive because the machines just can’t run fast enough,” says Nordson’s Perkins.

Metrology and Industry 4.0

Industry 4.0 — a term introduced in Germany in 2011 for the fourth industrial revolution, and called smart manufacturing in the U.S. — emphasizes the integration of digital technologies such as the Internet of Things, artificial intelligence, and big data analytics into manufacturing processes. Unlike past revolutions driven by mechanization, electrification, and computerization, Industry 4.0 focuses on connectivity, data, and automation to enhance manufacturing capabilities and efficiency.

“The better the data integration is, the more efficient the yield ramp,” says Dieter Rathei, CEO of DR Yield. “It’s essential to integrate all available data into the system for effective monitoring and analysis.”

In semiconductor manufacturing, this shift toward Industry 4.0 is particularly transformative, driven by the increasing complexity of semiconductor devices and the demand for higher precision and yield. Traditional metrology methods, heavily reliant on manual processes and limited automation, are evolving into highly interconnected systems that enable real-time data sharing and decision-making across the entire production chain.

“There haven’t been many tools to consolidate different data types into a single platform,” says NI’s Chaffee. “Historically, yield management systems focused on testing, while FDC or process systems concentrated on the process itself, without correlating the two. As manufacturers push into the 5, 3, and 2nm spaces, they’re discovering that defect density alone isn’t the sole governing factor. Process control is also crucial. By integrating all data, even the most complex correlations that a human might miss can be identified by AI and ML. The goal is to use machine learning to detect patterns or connections that could help control and optimize the manufacturing process.”
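A hedged sketch of that integration idea: combine per-wafer FDC signals and inline metrology into one feature table, then let a model rank which inputs move yield. Everything here is synthetic, and a random forest stands in for whatever ML a fab would actually deploy.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic per-wafer table: two FDC signals plus one inline metrology
# value, with yield secretly driven by CD and chamber temperature.
rng = np.random.default_rng(1)
n = 400
chamber_temp = rng.normal(65, 2, n)        # FDC signal, degrees C
rf_power     = rng.normal(500, 15, n)      # FDC signal, W (irrelevant here)
cd_nm        = rng.normal(14, 0.3, n)      # inline metrology, nm
yield_pct = (95
             - 4.0 * np.abs(cd_nm - 14)
             - 0.5 * np.abs(chamber_temp - 65)
             + rng.normal(0, 0.3, n))

X = np.column_stack([chamber_temp, rf_power, cd_nm])
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, yield_pct)

# The model should rank cd_nm and chamber_temp far above rf_power.
for name, imp in zip(["chamber_temp", "rf_power", "cd_nm"],
                     model.feature_importances_):
    print(f"{name:13s} importance: {imp:.2f}")
```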

IoT forms the backbone of Industry 4.0 by connecting various devices, sensors, and systems within the manufacturing environment. In semiconductor manufacturing, IoT enables seamless communication between metrology tools, production equipment, and factory management systems. This interconnected network facilitates real-time monitoring and control of manufacturing processes, allowing for immediate adjustments and optimization.

“You need to integrate information from various sources, including sensors, metrology tools, and test structures, to build predictive models that enhance process control and yield improvement,” says Michael Yu, vice president of advanced solutions at PDF Solutions. “This holistic approach allows you to identify patterns and correlations that were previously undetectable.”

AI and ML are pivotal in processing and analyzing the vast amounts of data generated in a smart factory. These technologies can identify patterns, predict equipment failures, and optimize process parameters with a level of precision and speed unattainable by human operators alone. In semiconductor manufacturing, AI-driven analytics enhance process control, improve yield rates, and reduce downtime. “One of the major trends we see is the integration of artificial intelligence and machine learning into metrology tools,” says Perkins. “This helps in making sense of the vast amounts of data generated and enables more accurate and efficient measurements.”

AI’s role extends further as it assists in discovering anomalies within the production process that might have gone unnoticed with traditional methods. AI algorithms integrated into metrology systems can dynamically adjust processes in real-time, ensuring that deviations are corrected before they affect the end yield. This incorporation of AI minimizes defect rates and enhances overall production quality.

“Our experience has shown that in the past 20 years, machine learning and AI algorithms have been critical for automatic data classification and die classification,” says Bachiraju. “This has significantly improved the efficiency and accuracy of our metrology tools.”

Big data analytics complements AI/ML by providing the infrastructure necessary to handle and interpret massive datasets. In semiconductor manufacturing, big data analytics enables the extraction of actionable insights from data generated by IoT devices and production systems. This capability is crucial for predictive maintenance, quality control, and continuous process improvement.

“With big data, we can identify patterns and correlations that were previously impossible to detect, leading to better process control and yield improvement,” says Perkins.

Big data analytics also helps in understanding the lifecycle of semiconductor devices from production to field deployment. By analyzing product performance data over time, manufacturers can predict potential failures and enhance product designs, increasing reliability and lifecycle management.

“In the next decade, we see a lot of opportunities for AI,” says DR Yield’s Rathei. “The foundation for these advancements is the availability of comprehensive data. AI models need extensive data for training. Once all the data is available, we can experiment with different models and ideas. The ingenuity of engineers, combined with new tools, will drive exponential progress in this field.”

Metrology gaps remain

Despite recent advancements in metrology, analytics, and AI/ML, several gaps remain, particularly in the context of high-volume manufacturing (HVM) and next-generation devices. The U.S. Commerce Department’s CHIPS R&D Metrology Program, along with industry stakeholders, has highlighted seven “grand challenges,” areas where current metrology capabilities fall short:

Metrology for materials purity and properties: There is a critical need for new measurements and standards to ensure the purity and physical properties of materials used in semiconductor manufacturing. Current techniques lack the sensitivity and throughput required to detect particles and contaminants throughout the supply chain.

Advanced metrology for future manufacturing: Next-generation semiconductor devices, such as gate-all-around (GAA) FETs and complementary FETs (CFETs), require breakthroughs in both physical and computational metrology. Existing tools are not yet capable of providing the resolution, sensitivity, and accuracy needed to characterize the intricate features and complex structures of these devices. This includes non-destructive techniques for characterizing defects and impurities at the nanoscale.

“There is a secondary challenge with some of the equipment in metrology, which often involves sampling data from single points on a wafer, much like heat test data that only covers specific sites,” says Chaffee. “To be meaningful, we need to move beyond sampling methods and find creative ways to gather information from every wafer, integrating it into a model. This involves building a knowledge base that can help in detecting patterns and correlations, which humans alone might miss. The key is to leverage AI and machine learning to identify these correlations and make sense of them, especially as we push into the 5, 3, and 2nm spaces. This process is iterative and requires a holistic approach, encompassing various data points and correlating them to understand the physical boundaries and the impact on the final product.”

Metrology for advanced packaging: The integration of sophisticated components and novel materials in advanced packaging technologies presents significant metrology challenges. There is a need for rapid, in-situ measurements to verify interfaces, subsurface interconnects, and internal 3D structures. Current methods do not adequately address issues such as warpage, voids, substrate yield, and adhesion, which are critical for the reliability and performance of advanced packages.

Modeling and simulating semiconductor materials, designs, and components: Modeling and simulating semiconductor materials, designs, and components require advanced computational models and data analysis tools.

“Predictive analytics is particularly important,” says Chaffee. “They aim to determine the probability of any given die on a wafer being the best yielding or presenting issues. By integrating various data points and running different scenarios, they can identify and understand how specific equipment combinations, sequences and processes enhance yields.”

Modeling and simulating semiconductor processes: Current capabilities are limited in their ability to seamlessly integrate the entire semiconductor value chain, from materials inputs to system assembly. There is a need for standards and validation tools to support digital twins and other advanced simulation techniques that can optimize process development and control.

“Part of the problem comes from the back-end packaging and assembly process, but another part of the problem can originate from the quality of the wafer itself, which is determined during the front-end process,” says PDF’s Yu. “An effective ML model needs to incorporate both front-end and back-end information, including data from equipment sensors, metrology, and structured test information, to make accurate predictions and take proactive actions to correct the process.”

Standardizing new materials and processes: The development of future information and communication technologies hinges on the creation of new standards and validation methods. Current reference materials and calibration services do not meet the requirements for next-generation materials and processes, such as those used in advanced packaging and heterogeneous integration. This gap hampers the industry’s ability to innovate and maintain competitive production capabilities.

Metrology to enhance security and provenance of components and products: With the increasing complexity of the semiconductor supply chain, there is a need for metrology solutions that can ensure the security and provenance of components and products. This involves developing methods to trace materials and processes throughout the manufacturing lifecycle to prevent counterfeiting and ensure compliance with regulatory standards.

“The focus on security and sharing changes the supplier relationship into more of a partnership and less of a confrontation,” says Chaffee. “Historically, there’s always been a concern of data flowing across that boundary. People are very protective about their process, and other people are very protective about their product. But once you start pushing into the deep sub-micron space, those barriers have to come down. The die are too expensive for them not to communicate, but they can still do so while protecting their IP. Companies are starting to realize that by sharing parametric test information securely, they can achieve better yield management and process optimization without compromising their intellectual property.”

Conclusion

Advancements in metrology and testing are pivotal for the semiconductor industry’s continued growth and innovation. The integration of AI/ML, IoT, and big data analytics is transforming how manufacturers approach process control and yield improvement. As adoption of Industry 4.0 grows, the role of metrology will become even more critical in ensuring the efficiency, quality, and reliability of semiconductor devices. And by leveraging these advanced technologies, semiconductor manufacturers can achieve higher yields, reduce costs, and maintain the precision required in this competitive industry.

With continuous improvements and the integration of smart technologies, the semiconductor industry will keep pushing the boundaries of innovation, leading to more robust and capable electronic devices that define the future of technology. The journey toward a fully realized Industry 4.0 is ongoing, and its impact on semiconductor manufacturing undoubtedly will shape the future of the industry, ensuring it stays at the forefront of global technological advancements.

“Anytime you have new packaging technologies and process technologies that are evolving, you have a need for metrology,” says Perkins. “When you are ramping up new processes and need to make continuous improvements for yield, that is when you see the biggest need for new metrology solutions.”


Semiconductor Shifts In Automotive: Impact Of EV And ADAS Trends

By Fisher Zhang
6 August 2024, 09:03

The integration of advanced driver assistance systems (ADAS) and the transition towards electric vehicles (EVs) are significantly transforming the automotive industry.

Modern vehicles, essentially computers on wheels, require substantially more semiconductors. In response, carmakers are forming stronger partnerships with semiconductor vendors – some are taking a page from tech giants like Apple and Samsung by designing their own chips, often following a fabless or outsourced production model.

While a deeper connection with semiconductor design helps automakers maintain design control and supply chain resilience, it also imposes substantial responsibility to understand and meet stringent automotive quality standards.

The crucial role of semiconductor testing

Testing is vital to meet the automotive industry’s demands for quality, cost-efficiency, and timely market entry. As carmakers delve into semiconductor design, they face new challenges. Advanced semiconductors, more complex by nature, require thorough testing to ensure automotive-grade quality.

The industry’s push towards smaller process nodes, like 5nm and below, further amplifies these challenges, necessitating early and continuous engagement with testing resources to maintain high standards without compromising time to market.

Zero defects commitment

The automotive industry’s commitment to zero defects underscores the critical importance of quality. This commitment is based on an analysis of the costs associated with testing versus the potentially catastrophic costs of failures, such as life-threatening malfunctions, costly recalls, and market delays.

These issues can dramatically impact revenue and market position, highlighting the need for rigorous testing. The exceptional quality requirements inherent to automotive standards are set to intensify with the increasing digital complexity of vehicles.

Given that automotive chips must perform reliably over a lifespan of 10 to 20 years, comprehensive testing protocols play an essential role in identifying and rectifying defects early, optimizing both cost and quality. This fundamental aspect of semiconductor manufacturing cements the principle that quality is not just a priority, but the paramount concern.

This commitment transcends the capabilities of even the most skilled engineers, requiring systematic and integrated testing processes to ensure chip reliability and performance under diverse conditions.

Collaboration is key

Collaboration between automakers and semiconductor manufacturers is crucial, fostering an environment where issues can be identified and addressed early in the development cycle.

These partnerships are vital for maintaining momentum in the face of rapid technological advancements and ensuring that the automotive industry can meet the high standards of safety, reliability, and performance expected by consumers.

This collaborative approach helps to optimize testing processes, to maintain stringent quality standards, and to protect time-to-market goals, preventing production delays and ensuring the continuous advancement of automotive technologies.


Driving Cost Lower and Power Higher With GaN

By Anne Meixner
6 August 2024, 09:02

Gallium nitride is starting to make broader inroads in the lower-end of the high-voltage, wide-bandgap power FET market, where silicon carbide has been the technology of choice. This shift is driven by lower costs and processes that are more compatible with bulk silicon.

Efficiency, power density (size), and cost are the three major concerns in power electronics, and GaN can meet all three criteria. However, to satisfy all of those criteria consistently, the semiconductor ecosystem needs to develop best practices for test, inspection, and metrology, determining what works best for which applications and under varying conditions.

Power ICs play an essential role in stepping up and down voltage levels from one power source to another. GaN is used extensively today in smart phone and laptop adapters, but market opportunities are beginning to widen for this technology. GaN likely will play a significant role in both data centers and automotive applications [1]. Data centers are expanding rapidly due to the focus on AI and a build-out at the edge. And automotive is keen to use GaN power ICs for inverter modules because they will be cheaper than SiC, as well as for onboard battery chargers (OBCs) and various DC-DC conversions from the battery to different applications in the vehicle.


Fig. 1: Current and future fields of interest for GaN and SiC power devices. Source: A. Meixner/Semiconductor Engineering

But to enter new markets, GaN device manufacturers need to ramp up new processes and their associated products more quickly. Because GaN for power transistors is a developing process technology, measurement data is critical for qualifying both the manufacturing process and the reliability of the new semiconductor technology and the resulting product.

Much of GaN’s success will depend on metrology and inspection solutions that offer high throughput, as well as non-destructive testing methods such as optical and X-ray. Electron microscopy is useful for drilling down into key device parameters and defect mechanisms. And electrical tests provide complementary data that assists with product/process validation, reliability and qualification, system-level validation, as well as being used for production screening.

Silicon carbide (SiC) remains the material of choice for very high-voltage applications. It offers better performance and higher efficiency than silicon. But SiC is expensive. It requires different equipment than silicon, it’s difficult to grow SiC ingots, and today there is limited wafer capacity.

In contrast, GaN offers some of the same desirable characteristics as SiC and can operate at even higher switching speeds. GaN wafer production is cheaper because it can be created on a silicon substrate utilizing typical silicon processing equipment other than the GaN epitaxial deposition tool. That enables a fab/foundry with a silicon CMOS process to ramp a GaN process with an engineering team experienced in GaN.

The cost comparison isn’t entirely apples-to-apples, of course. The highest-voltage GaN on the market today uses silicon on sapphire (SoS) or other engineered substrates, which are more expensive. But below those voltages, GaN typically has a cost advantage, and that has sparked renewed interest in this technology.

“GaN-based products increase the performance envelopes relative to the incumbent and mature silicon-based technologies,” said Vineet Pancholi, senior director of test technology at Amkor. “Switching speeds with GaN enable the application in ways never possible with silicon. But as the GaN production volumes ramp, these products have extreme economic pressures. The production test list includes static attributes. However, the transient and dynamic attributes are the primary benefit of GaN in the end application.”

Others agree. “The world needs cheaper material, and GaN is easy to build,” said Frank Heidemann, vice president and technology leader of SET at NI/Emerson Test & Measurement. “Gallium nitride has had huge success in the lower voltage ranges — anything up to 500V. This is where the GaN process is very well under control. The challenge now is building in higher voltages. In the near future there will be products at even higher voltage levels.”

Those higher-voltage applications require new process recipes, new power IC designs, and subsequently product/process validation and qualification.

GaN HEMT properties

Improving the processes needed to create GaN high-electron-mobility transistors (HEMTs) requires a deep understanding of the material properties and the manufacturing consequences of layering these materials.

The underlying physics and structure of wide-bandgap devices significantly differs from silicon high-voltage transistors. Silicon transistors rely on doping of p and n materials. When voltage is applied at the gate, it creates a channel for current to flow from source to drain. In contrast, wide-bandgap transistors are built by layering thin films of different materials, which differ in their bandgap energy. [2] Applying a voltage to the gate enables an electron exchange between the two materials, driving those electrons along the channel between source and drain.


Fig. 2. Cross-sectional animation of e-mode GaN HEMT device. Source: Zeiss Microscopy

“GaN devices rely on two-dimensional electron gas (2DEG) created at the GaN and AlGaN interface to conduct current at high speed,” said Jiangtao Hu, senior director of product marketing at Onto Innovation. “To enable high electron mobility, the epitaxy process creating complex multi-layer crystalline films must be carefully monitored and controlled, ensuring critical film properties such as thickness, composition, and interface roughness are within a tight spec. The ongoing trend of expanding wafer sizes further requires the measurement to be on-product and non-destructive for uniformity control.”


Fig. 3: SEM cross-section of enhancement-mode GaN HEMT built on silicon which requires a superlattice. Source: Zeiss Microscopy

Furthermore, each layer’s electrical properties need to be understood. “It is of utmost importance to determine, as early as possible in the manufacturing process, the electrical characteristics of the structures, the sheet resistance of the 2DEG, the carrier concentration, and the mobility of carriers in the channel, preferably at the wafer level in a non-destructive assessment,” said Christophe Maleville, CTO and senior executive vice president of innovation at Soitec.

Developing process recipes for GaN HEMT devices at higher operating ranges requires measurements taken during wafer manufacturing and device testing, both for qualification of a process/product and for production manufacturing. Inspection, metrology, and electrical tests focus on process anomalies and defects that impact device performance.

“Crystal defects such as dislocations and stacking faults, which can form during deposition and subsequently be grown over and buried, can create long-term reliability concerns even if the devices pass initial testing,” said David Taraci, business development manager of electronics strategic accounts at ZEISS Research Microscopy Solutions. “Gate oxides can pinch off during deposition, creating voids which may not manifest as an issue immediately.”

The quality of the buffer layer is critical because it affects the breakdown voltage. “The maximum breakdown voltage of the devices will be ultimately limited by the breakdown of the buffer layer grown in between the Si substrate and the GaN channel,” said Soitec’s Maleville. “An electrical assessment (IV at high voltage) requires destructive measurements as well as device isolation. This is performed on a sample basis only.”

One way to raise the voltage limit of a GaN device is to add a ‘gate driver,’ which keeps it reliable at higher voltages. But to further expand GaN technology’s performance envelope to higher-voltage operation, engineers need to understand new GaN device reliability properties.

“We are supporting GaN lifetime validation, which is the prediction of a mission characteristic of lifetime for gallium nitride power devices,” said Emerson’s Heidemann. “Engineers build physics-based failure models of these devices. Next, they investigate the acceleration factors. How can we really make tests and verification properly so that we can assess lifetime health?”

The qualification procedures necessitate life-stress testing, which duplicates predicted mission-profile usage, as well as electrical testing after each life-stress period. That allows engineers to determine shifts in transistor characteristics and outright failures. For example, life-stress periods could start at 4,000 hours and increase in 1,000-hour increments to 12,000 hours, during which time the device is turned on and off with specific durations of ‘on’ times.
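The stress-then-measure loop described above can be summarized in a few lines. In this sketch the 10% threshold-voltage shift criterion and the post-stress readings are hypothetical; actual pass/fail criteria come from the qualification spec.

```python
# Stress-then-measure loop: electrical characterization after each
# life-stress interval. The Vth readings and the 10% shift criterion
# are hypothetical; real criteria come from the qualification spec.
stress_hours = range(4000, 13000, 1000)   # 4,000 h ... 12,000 h
vth_initial = 1.50                        # V, pre-stress threshold voltage

# One post-stress threshold-voltage reading per interval (invented).
vth_readings = [1.51, 1.52, 1.54, 1.55, 1.58, 1.61, 1.64, 1.68, 1.72]

for hours, vth in zip(stress_hours, vth_readings):
    shift_pct = 100 * (vth - vth_initial) / vth_initial
    status = "FAIL" if abs(shift_pct) > 10 else "pass"
    print(f"{hours:>6} h: Vth = {vth:.2f} V, shift = {shift_pct:+.1f}% -> {status}")
```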

“Reliability predictions are based upon application mission profiles,” said Stephanie Watts Butler, independent consultant and vice president of industry and standards in the IEEE Power Electronics Society. “In some cases, GaN is going into a new application, or being used differently than silicon, and the mission profile needs to be elucidated. This is one area that the industry is focused upon together.”

As an example of this effort, Butler pointed to JEDEC JEP186 spec [3], which provides guidelines for specifying the breakdown voltage for GaN HEMT devices. “JEDEC and IEC both are issuing guideline documents for methods for test and characterization of wide-bandgap devices, as well as reliability and qualification procedures, and datasheet parameters to enable wide bandgap devices, including GaN, to ramp faster with higher quality in the marketplace,” she said.

Electrical tests remain essential for screening both time-zero and reliability-associated defects (e.g., infant mortality and reduced lifetime). This holds true for screening wafers, singulated die, and packaged devices. Test content includes tests specific to GaN HEMT power device performance specifications, as well as tests directed more at defect detection.

Due to inherent device differences, the GaN test list varies in some significant ways from that of Si and SiC power ICs. Assessing GaN health for qualification and manufacturing purposes requires both static and dynamic tests (in SiC terms, DC and AC tests). A partial list includes zero gate voltage drain leakage current, rise time, fall time, dynamic RDSon, and dielectric integrity tests.

“These are very time-intensive measurement techniques for GaN devices,” said Tom Tran, product manager for power discrete test products at Teradyne. “On top of the static measurement techniques is the concern about trapped charge — both for functionality and efficiency — revealed through dynamic RDSon testing.”
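For illustration, dynamic RDS(on) is essentially Vds/Id sampled in a defined window after turn-on; charge trapping appears as RDS(on) settling above the static value. A sketch with synthetic waveforms and an invented trapping time constant:

```python
import numpy as np

# Synthetic post-turn-on waveforms: constant 10 A load, RDS(on) that
# settles from a trapped-charge transient (1 us time constant, invented)
# toward its 50 mOhm static value.
t = np.linspace(0, 5e-6, 500)             # 5 us after turn-on
i_d = np.full_like(t, 10.0)               # drain current, A
r_static = 0.050                          # static RDS(on), ohms
r_dyn = r_static * (1 + 0.6 * np.exp(-t / 1e-6))
v_ds = i_d * r_dyn                        # measured drain-source voltage

# Sample Vds/Id in a defined window after turn-on, per common practice.
window = (t > 1e-6) & (t < 2e-6)
rds_on = (v_ds[window] / i_d[window]).mean()
print(f"dynamic RDS(on): {rds_on * 1e3:.1f} mOhm "
      f"(static: {r_static * 1e3:.0f} mOhm)")
```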

Transient tests are necessary for qualification and production purposes due to the high electron mobility, which is what gives GaN HEMT its high switching speed. “From a test standpoint, static test failures indicate basic processing failures, while transient switching failures indicate marginal or process excursions,” said Amkor’s Pancholi. “Both tests continue to be important to our customers until process maturity is achieved. With the extended range of voltage, current, and switching operations, mainstream test equipment suppliers have been adding complementary instrumentation capabilities.”

And ATE suppliers look to reduce test time, which reduces cost. “Both static and dynamic test requirements drive very high test times,” said Teradyne’s Tran. “But the GaN of today is very different than the GaN of a decade ago. We’re able to accelerate this testing due to the core nature of our ATE architecture. We think there is the possibility of further reducing the cost of test for our customers.”

Tools for process control and quality management

GaN HEMT devices’ reliance on thin-film processes highlights the need to understand the material properties and the nature of the interfaces between each layer. That requires tools for process control, yield management, and failure analysis.

“GaN device performance is highly reflective of the film characteristics used in its manufacture,” said Mike McIntyre, director of software product management at Onto Innovation. “The smallest process variations when it comes to film thickness, film stress, line width or even crystalline make-up, can have a dramatic impact on how the device performs, or even if it is usable in its target market. This lack of tolerance to any variation places a greater burden on engineers to understand the factors that correlate to device performance and its profitability.”

Inspection methods that are non-destructive vary in throughput and in the level of detail they provide for engineering decisions. While optical methods are fast and provide full-wafer coverage, they cannot accurately classify chemical or structural defects for engineers/technicians to review. In contrast, destructive methods provide the information needed to truly understand the nature of the defects. For example, conductive atomic force microscopy (AFM) probing remains slow, but it can identify the electrical nature of a defect. And to truly comprehend crystallographic defects and the chemical nature of impurities, engineers can turn to electron microscopy-based methods.

One way to assess thin films is with X-rays. "High-resolution X-ray measurements are useful for production control of wafer crystalline quality and defects in the buffer," said Soitec's Maleville. "Minor changes in the composition of the buffer, barrier, or capping layer, as well as their layer thickness, can result in significant deviations in device performance. The thickness of the layers, in particular the top cap, barrier, and spacer layers, is typically measured by XRD. However, the throughput of XRD systems is low. Alternatively, ellipsometry offers reasonably good throughput with more data points for both development and production scenarios."

Optical techniques have been the standard for thin-film assessment in the semiconductor industry, and inspection equipment providers have long been on a journey of continuous improvement in accuracy, precision, and throughput. Better metrology tools help device makers with process control and yield management.

“Recently, we successfully developed a non-destructive, on-product measurement capability for GaN epi process monitoring,” said Onto’s Hu. “It takes advantage of our advanced optical film experience and our modeling software to simultaneously measure multi-layer epi film thickness, composition, and interface roughness on product wafers.”


Fig. 4: Metrology measurements on GaN for roughness and for Al concentration. Source: Onto Innovation

Assessing the electrical characteristics (2DEG sheet resistance, channel carrier mobility, and carrier concentration) is required for controlling the manufacturing process. A non-destructive assessment would be an improvement over the destructive techniques used today (e.g., SEM), but the solutions used for other power ICs do not work for GaN HEMTs, and as of today no commercial solution exists.
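As context for why those three quantities are measured together, sheet resistance follows directly from carrier concentration and mobility via R_s = 1/(q * n_s * mu). A minimal sketch, using typical textbook values rather than measured data:

Q = 1.602e-19   # elementary charge, C
n_s = 1.0e13    # 2DEG sheet carrier concentration, cm^-2 (typical value)
mu = 2000.0     # channel electron mobility, cm^2/(V*s) (typical value)
r_sheet = 1.0 / (Q * n_s * mu)          # sheet resistance, ohm/sq
print(f"R_s = {r_sheet:.0f} ohm/sq")    # ~312 ohm/sq, a plausible GaN HEMT value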

Inspection looks for yield-impacting defects, as well as defects that affect wafer acceptance in the case of companies that provide engineered substrates.

“Defect inspection for incoming silicon wafers looks for particles, scratches, and other anomalies that might seed imperfections in the subsequent buffer and crystal growth,” said Antonio Mani, business development manager at Thermo Fisher Scientific. “After the growth of the buffer and termination layers, followed by the growth of the doped GaN layers, another set of inspections is carried out. In this case, it is more focused on the detection of cracks and other macroscopic defects (micropipes, carrots), and on micro-pits, which are associated with threading dislocations that have survived the buffer layer and are surfacing at the top GaN surface.”

Mani noted that follow-up inspection methods for Si and GaN devices are similar. The difference is the importance of connecting observations back to post-epi results.

More accurate defect libraries would shorten inspection time. “The lack of standardization of surface defect analysis impedes progress,” said Soitec’s Maleville. “Different tools are available on the market, while defect libraries are still being developed, essentially by each individual user. The lack of a globally accepted method and a standard defect library for surface defect analysis is slowing down the GaN surface qualification process.”

Whether it involves a manufacturing test failure or a field return, the necessary steps for determining root cause on a problematic packaged part begin with fault isolation. “Given the direct nature of the bandgap of GaN and its operating window in terms of voltage/frequency/power density, classical methods of fault isolation (e.g., optical emission spectroscopy) are forced to focus on different wavelengths and different ranges of excitation of the typical electrical defects,” said Thermo Fisher’s Mani. “Hot carrier pairs are just one example, which highlights the radical difference between GaN and silicon devices.”

In addition to fault isolation, there are challenges in creating device cross-sections with focused ion beam (FIB) milling methods.

“Several challenges exist in FA for GaN power ICs,” said Zeiss’ Taraci. “In any completed device, in particular, there are numerous materials and layers present for stress mitigation/relaxation and thermal management, depending on whether we are talking enhancement- or depletion-mode devices. Length-scale can be difficult to manage as you are working with these samples, because they have structures of varying dimension present in close proximity. Many of the structures are quite unique to power GaN and can pose challenges themselves in cross-section and analyses. Beam-milling approaches have to be tailored to prevent heavy re-deposition and masking, and are dependent on material, lattice orientation, current, geometry, etc.”

Conclusion
To be successful in bringing new GaN power ICs to new application spaces, engineers and their equipment suppliers need faster process development and a reduction in overall costs. For HEMT devices, that means understanding the resulting layers and their material properties. This requires a host of metrology, inspection, test, and failure analysis steps to comprehend the issues, and to provide feedback data from experiments and qualifications for process and design improvements.


Related Stories

Ramping Up Power Electronics For EVs

SiC Growth For EVs Is Stressing Manufacturing

GaN ICs Wanted For Power, EV Markets

Architecting Chips For High-Performance Computing

Power Semiconductors: 2023


Leveraging AI To Efficiently Test AI Chips

By: Advantest
August 6, 2024, 09:01

In the fast-paced world of technology, where innovation and efficiency are paramount, integrating artificial intelligence (AI) and machine learning (ML) into the semiconductor testing ecosystem has become critically important due to ongoing challenges with accuracy and reliability. AI and ML algorithms are used to identify patterns and anomalies that might not be discovered by human testers or traditional methods. By leveraging these technologies, companies can achieve higher accuracy in defect detection, ensuring that only the highest-quality semiconductors reach the market.

In addition, the industry is clamoring for increased efficiency and speed, because AI-driven testing can significantly accelerate the testing process, analyzing vast amounts of data at speeds unattainable by human testers. This enables quicker turnaround times from design to production, helping companies meet market demands more effectively and stay ahead of competitors.

Firms are also heavily invested in reducing costs. While the initial investment in AI/ML technology can be expensive, the long-term savings are substantial. By automating routine and complex testing processes, companies can reduce labor costs and minimize human error. Equally important, AI-enhanced testing can better predict potential failures before they occur, saving costs related to recalls and repairs.

The industry is now moving to chiplet-based modules, using a “Lego-like” approach to integrate CPU, GPU, cache, I/O, high-bandwidth memory (HBM), and other functions. In the rapidly evolving world of chiplets, the DUT is a complex multichip system with the integration of many devices in a single 2.5D or 3D package. Consequently, the tester can only access a subset of individual device pins. Even so, at each test insertion, the tester must be able to extract valuable data that is then used to optimize the current test insertion as well as other design, manufacturing, and test steps. With limited pin access, the tester must infer what is happening on unobservable nodes. To best achieve this goal, it is important to extract the most value possible out of the data that can be directly collected across all manufacturing and test steps, including data from on-chip sensors. The test flow in the chiplet world already includes PSV, wafer acceptance test (WAT), wafer sort (WS), final test (FT), burn-in, and SLT, and additional test insertions to account for the increased complexity of a package with multiple chiplets are not feasible from a cost perspective. Adding to the challenge, binning goes from performance-based to application-based. In this world, the tester must stay ahead of the system – the tester must be smarter than the complex system-under-test.

Fig. 1: The ACS RTDI platform accelerates data analytics and AI/ML decision-making. Source: Advantest

So, for these reasons and many more, the adoption of edge compute for ML test applications is well underway. Advantest’s ACS Real-Time Data Infrastructure (ACS RTDI) platform accelerates data analytics and AI/ML decision-making within a single integrated platform. It collects, analyzes, stores, and monitors semiconductor test data as well as data sources across the IC manufacturing supply chain while employing low-latency edge computing and analytics in a secure zero-trust environment. ACS RTDI minimizes the need for human intervention, streamlining overall data utilization across multiple insertions to boost quality, yield, and operational efficiencies. It includes Advantest’s ACS Edge HPC server, which works in conjunction with its V93000 and other ATE systems to handle computationally intensive workloads adjacent to the tester’s host controller.

Fig. 2: A reliable, secure real-time data infrastructure that integrates data sources across the IC manufacturing supply chain. Source: Advantest

In this configuration, the ACS Edge provides low, consistent, and predictable latency compared with a data center-hosted alternative. It supports a user execution environment independent of the tester host controller to ease development and deployment. It also provides a reliable and secure real-time data infrastructure that integrates all data sources across the entire IC manufacturing supply chain, applying analytics models that enable real-time decision-making during production test.


Controller Area Network (CAN) Overview

By: NI
August 6, 2024, 09:01

What is CAN?

A controller area network (CAN) bus is a high-integrity serial bus system for networking intelligent devices. CAN busses and devices are common components in automotive and industrial systems. Using a CAN interface device, you can write LabVIEW applications to communicate with a CAN network.
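The article's example environment is LabVIEW, but as a rough illustration the same send-and-receive flow can be sketched in Python with the open-source python-can package. The channel name, identifier, and payload below are assumptions for a Linux SocketCAN setup with a configured can0 interface.

import can
# Open the bus (SocketCAN backend; channel/interface names are assumptions).
with can.interface.Bus(channel="can0", bustype="socketcan") as bus:
    # Transmit one 8-byte frame with a standard (11-bit) identifier.
    msg = can.Message(arbitration_id=0x123,
                      data=[0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88],
                      is_extended_id=False)
    bus.send(msg)
    # Block for up to 1 s waiting for any frame on the bus.
    reply = bus.recv(timeout=1.0)
    if reply is not None:
        print(f"ID=0x{reply.arbitration_id:X} data={reply.data.hex()}")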

CAN History

Bosch originally developed CAN in 1985 for in-vehicle networks. In the past, automotive manufacturers connected electronic devices in vehicles using point-to-point wiring systems. Manufacturers began using more and more electronics in vehicles, which resulted in bulky wire harnesses that were heavy and expensive. They then replaced dedicated wiring with in-vehicle networks, which reduced wiring cost, complexity, and weight. CAN emerged as the standard in-vehicle network. The automotive industry quickly adopted CAN and, in 1993, it became the international standard known as ISO 11898. Since 1994, several higher-level protocols have been standardized on CAN, such as CANopen and DeviceNet. Other markets have widely adopted these additional protocols, which are now standards for industrial communications. This white paper focuses on CAN as an in-vehicle network.

Read more here.

Fig. 1: CAN networks significantly reduce wiring. Source: NI


Expediting Manufacturing Safe Launch With Big Data AI/ML Analytic Solutions On The Cloud

August 6, 2024, 09:01

With highly competitive time-to-market and time-to-volume windows, IC suppliers need to be able to release new products to production in a timely manner with competitive manufacturing metrics. Manufacturing yield, test time, and quality are important metrics in a new product introduction (NPI) to manufacturing safe launch, and a powerful yield management system is crucial to achieving the goal metrics. In this paper, recommended yield management system selection criteria, a data integration methodology, and innovative ways of using the selected yield management system to improve safe launch efficiency are introduced. Three examples of using a cloud yield tool to expedite yield learning, test time reduction (TTR), and quality enhancement are presented.

Find more information here.


AI/ML’s Role In Design And Test Expands

August 5, 2024, 09:03

The role of AI and ML in test keeps growing, providing significant time and money savings that often exceed initial expectations. But it doesn’t work in all cases, sometimes even disrupting well-tested process flows with questionable return on investment.

One of the big attractions of AI is its ability to apply analytics to large data sets that are otherwise limited by human capabilities. In the critical design-to-test realm, AI can address problems such as tool incompatibilities between the design set-up, simulation, and ATE test program, which typically slow debugging and development efforts. Indeed, some of the most time-consuming and costly aspects of design-to-test arise from incompatibilities between tools.

“During device bring-up and debug, complex software/hardware interactions can expose the need for domain knowledge from multiple teams or stakeholders, who may not be familiar with each other’s tools,” said Richard Fanning, lead software engineer at Teradyne. “Any time spent doing conversions or debugging differences in these set-ups is time wasted. Our toolset targets this exact problem by allowing all set-ups to use the same set of source files so everyone can be sure they are running the same thing.”

ML/AI can help keep design teams on track, as well. “As we drive down this technology curve, the analytics and the compute infrastructure that we have to bring to bear becomes increasingly more complex and you want to be able to make the right decision with a minimal amount of overkill,” said Ken Butler, senior director of business development in the ACS data analytics platform group at Advantest. “In some cases, we are customizing the test solution on a die-by-die type of basis.”

But despite the hype, not all tools work well in every circumstance. “AI has some great capabilities, but it’s really just a tool,” said Ron Press, senior director of technology enablement at Siemens Digital Industries Software, in a recent presentation at a MEPTEC event. “We still need engineering innovation. So sometimes people write about how AI is going to take away everybody’s job. I don’t see that at all. We have more complex designs and scaling in our designs. We need to get the same work done even faster by using AI as a tool to get us there.”

Speeding design to characterization to first silicon
In the face of ever-shrinking process windows and the lowest allowable defectivity rates, chipmakers continually are improving the design-to-test processes to ensure maximum efficiency during device bring-up and into high volume manufacturing. “Analytics in test operations is not a new thing. This industry has a history of analyzing test data and making product decisions for more than 30 years,” said Advantest’s Butler. “What is different now is that we’re moving to increasingly smaller geometries, advanced packaging technologies and chiplet-based designs. And that’s driving us to change the nature of the type of analytics that we do, both in terms of the software and the hardware infrastructure. But from a production test viewpoint, we’re still kind of in the early days of our journey with AI and test.”

Nonetheless, early adopters are building out the infrastructure needed for in-line compute and AI/ML modeling to support real-time inferencing in test cells. And because no one company has all the expertise needed in-house, partnerships and libraries of applications are being developed with tool-to-tool compatibility in mind.

“Protocol libraries provide out-of-the-box solutions for communicating common protocols. This reduces the development and debug effort for device communication,” said Teradyne’s Fanning. “We have seen situations where a test engineer has been tasked with talking to a new protocol interface, and saved significant time using this feature.”

In fact, data compatibility is a consistent theme, from design all the way through to the latest developments in ATE hardware and software. “Using the same test sequences between characterization and production has become key as the device complexity has increased exponentially,” explained Teradyne’s Fanning. “Partnerships with EDA tool and IP vendors are also key. We have worked extensively with industry leaders to ensure that the libraries and test files they output are formats our system can utilize directly. These tools also have device knowledge that our toolset does not. This is why the remote connect feature is key, because our partners can provide context-specific tools that are powerful during production debug. Being able to use these tools real-time without having to reproduce a setup or use case in a different environment has been a game changer.”

Serial scan test
But if it seems as if all the configuration changes are happening on the test side, it’s important to take stock of substantial changes in the approach to multi-core design for test.

Tradeoffs during the iterative process of design for test (DFT) have become so substantial in the case of multi-core products that a new approach has become necessary.

“If we look at the way a design is typically put together today, you have multiple cores that are going to be produced at different times,” said Siemens’ Press. “You need to have an idea of how many I/O pins you need to get your scan channels, the deep serial memory from the tester that’s going to be feeding through your I/O pins to this core. So I have a bunch of variables I need to trade off. I have the number of pins going to the core, the pattern size, and the complexity of the core. Then I’ll try to figure out what’s the best combination of cores to test together in what is called hierarchical DFT. But as these designs get more complex, with upwards of 2,500 cores, that’s a lot of tradeoffs to figure out.”

Press noted that applying AI within the same architecture can provide 20% to 30% higher efficiency, but an improved methodology based on packetized scan test (see figure 1) actually makes more sense.


Fig. 1: Advantages to the serial scan network (SSN) approach. Source: Siemens

“Instead of having tester channels feeding into the scan channels that go to each core, you have a packetized bus and packets of data that feed through all the cores. Then you instruct the cores when their packet information is going to be available. By doing this, you don’t have as many variables you need to trade off,” he said. At the core level, each core can be optimized for any number of scan channels and patterns, and the I/O pin count is no longer a variable in the calculation. “Then, when you put it into this final chip, it delivers from the packets the amount of data you need for each core. That can work with any size serial bus, in what is called a serial scan network (SSN).”
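A toy model may make the packet idea easier to picture. The sketch below is not Siemens' SSN implementation, just a minimal illustration of cores consuming fixed slots from a shared stream of bus packets; the core names and slot widths are invented.

from collections import defaultdict
CORES = {"cpu": 4, "gpu": 8, "dsp": 2}   # scan bits consumed per packet (invented)
def deliver(packets):
    """Route each core's slot of every packet to that core's scan input."""
    received = defaultdict(list)
    for pkt in packets:
        cursor = 0
        for core, width in CORES.items():
            received[core].append(pkt[cursor:cursor + width])
            cursor += width
    return received
# Two 14-bit packets on the serial bus (4 + 8 + 2 bits per packet).
for core, slots in deliver(["01100101111010", "11010010001101"]).items():
    print(core, slots)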

Some of the results reported by Siemens EDA customers (see figure 2) highlight both supervised and unsupervised machine learning implementation for improvements in diagnosis resolution and failure analysis. DFT productivity was boosted by 5 to 10X using the serial scan network methodology.


Fig. 2: Realized benefits using machine learning and the serial scan network approach. Source: Siemens

What slows down AI implementation in HVM?
In the transition from design to testing of a device, the application of machine learning algorithms can enable a number of advantages, from better pairing of chiplet performance for use in an advanced package to test time reduction. For example, only a subset of high-performing devices may require burn-in.

“You can identify scratches on wafers, and then bin out the dies surrounding those scratches automatically within wafer sort,” said Michael Schuldenfrei, fellow at NI/Emerson Test & Measurement. “So AI and ML all sounds like a really great idea, and there are many applications where it makes sense to use AI. The big question is, why isn’t it really happening frequently and at-scale? The answer to that goes into the complexity of building and deploying these solutions.”

Schuldenfrei summarized four key steps in ML’s lifecycle, each with its own challenges. In the first phase, the training, engineering teams use data to understand a particular issue and then build a model that can be used to predict an outcome associated with that issue. Once the model is validated and the team wants to deploy it in the production environment, it needs to be integrated with the existing equipment, such as a tester or manufacturing execution system (MES). Models also mature and evolve over time, requiring frequent validation of the data going into the model and checking to see that the model is functioning as expected. Models also must adapt, requiring redeployment, learning, acting, validating and adapting, in a continuous circle.

“That eats up a lot of time for the data scientists who are charged with deploying all these new AI-based solutions in their organizations. Time is also wasted in the beginning when they are trying to access the right data, organizing it, connecting it all together, making sense of it, and extracting features from it that actually make sense,” said Schuldenfrei.

Further difficulties are introduced in a distributed semiconductor manufacturing environment, in which many different test houses are situated in various locations around the globe. “By the time you finish implementing the ML solution, your model is stale and your product is probably no longer bleeding-edge, so the model has lost its actionability by the time it needs to make a decision that actually impacts either the binning or the processing of that particular device,” said Schuldenfrei. “So actually deploying ML-based solutions in a production environment with high-volume semiconductor test is very far from trivial.”

He cited a 2015 Google paper that found the ML code development part of the process to be both the smallest and easiest part of the whole exercise, [1] whereas the various aspects of building infrastructure, data collection, feature extraction, data verification, and managing model deployments are the most challenging parts.
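A minimal, self-contained sketch of the validation phase illustrates that imbalance: the model-adjacent code is short, while deciding when live data has drifted from the training distribution is the part that must run continuously. The mean-shift test, thresholds, and synthetic data below are assumptions for illustration only.

import numpy as np
rng = np.random.default_rng(0)
train_data = rng.normal(1.00, 0.05, 5000)     # parametric training distribution
center, spread = train_data.mean(), train_data.std()
def drifted(batch, k=3.0):
    """Flag drift if the batch mean moves k standard errors from training."""
    std_err = spread / np.sqrt(len(batch))
    return abs(batch.mean() - center) > k * std_err
live_ok = rng.normal(1.00, 0.05, 500)         # same process: keep the model
live_bad = rng.normal(1.03, 0.05, 500)        # shifted process: retrain/redeploy
print(drifted(live_ok), drifted(live_bad))    # expected: False True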

Changes from design through test ripple through the ecosystem. “People who work in EDA put lots of effort into design rule checking (DRC), meaning we’re checking that the work we’ve done and the design structure are safe to move forward because we didn’t mess anything up in the process,” said Siemens’ Press. “That’s really important with AI — what we call verifiability. If we have some type of AI running and giving us a result, we have to make sure that result is safe. This really affects the people doing the design, the DFT group and the people in test engineering that have to take these patterns and apply them.”

There are a multitude of ML-based applications for improving test operations. Advantest’s Butler highlighted some of the apps customers are pursuing most often, including search time reduction, shift left testing, test time reduction, and chiplet pairing (see figure 3).

“For minimum voltage, maximum frequency, or trim tests, you tend to set a lower limit and an upper limit for your search, and then you’re going to search across there in order to be able to find your minimum voltage for this particular device,” he said. “Those limits are set based on process split, and they may be fairly wide. But if you have analytics that you can bring to bear, then the AI- or ML-type techniques can basically tell you where this die lies on the process spectrum. Perhaps it was fed forward from an earlier insertion, and perhaps you combine it with what you’re doing at the current insertion. That kind of inference can help you narrow the search limits and speed up that test. A lot of people are very interested in this application, and some folks are doing it in production to reduce search time for test time-intensive tests.”
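The arithmetic behind that speed-up is easy to sketch: a binary search for the minimum passing voltage takes fewer tester runs when the window is narrowed around a predicted Vmin. Everything below is hypothetical; the device_passes() stub stands in for an actual ATE execution.

def device_passes(v, true_vmin=0.712):
    """Stand-in for a pass/fail test run at supply voltage v (hypothetical)."""
    return v >= true_vmin
def search_vmin(lo, hi, resolution=0.005):
    """Binary-search the pass/fail boundary; lo must fail and hi must pass."""
    runs = 0
    while hi - lo > resolution:
        mid = (lo + hi) / 2
        runs += 1
        lo, hi = (lo, mid) if device_passes(mid) else (mid, hi)
    return hi, runs
wide = search_vmin(0.50, 1.00)                  # process-split-wide limits
narrow = search_vmin(0.72 - 0.03, 0.72 + 0.03)  # window from a feed-forward model
print(f"wide limits: Vmin={wide[0]:.3f} V in {wide[1]} runs; "
      f"narrowed: Vmin={narrow[0]:.3f} V in {narrow[1]} runs")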


Fig. 3: Opportunities for real-time and/or post-test improvements to pair or bin devices, improve yield, throughput, reliability or cost using the ACS platform. Source: Advantest

“The idea behind shift left is perhaps I have a very expensive test insertion downstream or a high package cost,” Butler said. “If my yield is not where I want it to be, then I can use analytics at earlier insertions to be able to try to predict which devices are likely to fail at the later insertion by doing analysis at an earlier insertion, and then downgrade or scrap those die in order to optimize downstream test insertions, raising the yield and lowering overall cost. Test time reduction is very simply the addition or removal of test content, skipping tests to reduce cost. Or you might want to add test content for yield improvement,” said Butler.

“If I have a multi-tiered device, and it’s not going to pass bin 1 criteria – but maybe it’s bin 2 if I add some additional content — then people may be looking at analytics to try to make those decisions. Finally, two things go together in my mind, this idea of chiplet designs and smart pairing. So the classic example is a processor die with a stack of high bandwidth memory on top of it. Perhaps I’m interested in high performance in some applications and low power in others. I want to be able to match the content and classify die as they’re coming through the test operation, and then downstream do pick-and-place and put them together in such a way that I maximize the yield for multiple streams of data. Similar kinds of things apply for achieving a low power footprint and carbon footprint.”

Generative AI
The question that inevitably comes up when discussing the role of AI in semiconductors is whether or not large language models like ChatGPT can prove useful to engineers working in fabs. Early work shows some promise.

“For example, you can ask the system to build an outlier detection model for you that looks for parts that are five sigma away from the center line, saying ‘Please create the script for me,’ and the system will create the script. These are the kinds of automated, generative AI-based solutions that we’re already playing with,” said Schuldenfrei. “But from everything I’ve seen so far, there is still quite a lot of work to be done to get these systems to provide outputs with high enough quality. At the moment, the amount of human interaction that is needed afterward to fix problems with the algorithms or models that generative AI is producing is still quite significant.”
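For a sense of scale, the five-sigma script such a prompt asks for is only a few lines. This numpy-based sketch is an assumption about what the generated code might look like, run here on synthetic data with two planted outliers.

import numpy as np
def five_sigma_outliers(values):
    """Indices of parts more than five sigma from the center line."""
    center = np.median(values)       # robust center line
    sigma = np.std(values)
    return np.flatnonzero(np.abs(values - center) > 5 * sigma)
rng = np.random.default_rng(1)
iddq = rng.normal(12.0, 0.4, 10_000)   # synthetic IDDQ readings, mA (assumed)
iddq[[37, 4242]] = [18.0, 5.0]         # two planted escapes
print(five_sigma_outliers(iddq))       # -> [  37 4242]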

A lingering question is how to access the test programs needed to train the new test programs when everyone is protecting important test IP? “Most people value their test IP and don’t necessarily want to set up guardrails around the training and utilization processes,” Butler said. “So finding a way to accelerate the overall process of developing test programs while protecting IP is the challenge. It’s clear this kind of technology is going to be brought to bear, just like we already see in the software development process.”

Failure analysis
Failure analysis is typically a costly and time-consuming endeavor for fabs, because it requires a trip back in time to gather wafer processing, assembly, and packaging data specific to a particular failed device, which comes back as a return material authorization (RMA). Physical failure analysis is performed in an FA lab, using a variety of tools to trace the root cause of the failure.

While scan diagnostic data has been used for decades, a newer approach involves pairing a digital twin with scan diagnostics data to find the root cause of failures.

“Within test, we have a digital twin that does root cause deconvolution based on scan failure diagnosis. So instead of having to look at the physical device and spend time trying to figure out the root cause, since we have scan, we have millions and millions of virtual sample points,” said Siemens’ Press. “We can reverse-engineer what we did to create the patterns and figure out where the mis-compare happened within the scan cells deep within the design. Using YieldInsight and unsupervised machine learning with training on a bunch of data, we can very quickly pinpoint the fail locations. This allows us to run thousands, or tens of thousands, of fail diagnoses in a short period of time, giving us the opportunity to identify the systematic yield limiters.”

Yet another approach that is gaining steam is using on-die monitors to access specific performance information in lieu of physical FA. “What is needed is deep data from inside the package to monitor performance and reliability continuously, which is what we provide,” said Alex Burlak, vice president of test and analytics at proteanTecs. “For example, if the suspected failure is from the chiplet interconnect, we can help the analysis using deep data coming from on-chip agents instead of taking the device out of context and into the lab (where you may or may not be able to reproduce the problem). Even more, the ability to send back data and not the device can in many cases pinpoint the problem, saving the expensive RMA and failure analysis procedure.”

Conclusion
The enthusiasm around AI and machine learning is being met by robust infrastructure changes in the ATE community to accommodate the need for real-time inferencing on test data and test optimization for higher yield, higher throughput, and chiplet classification for multi-chiplet packages. For multi-core designs, packetized test, commercialized as the SSN methodology, provides a more flexible approach, optimizing the number of scan chains, patterns, and bus width for each core in a device.

The number of testing applications that can benefit from AI continues to rise, including test time reduction, Vmin/Fmax search reduction, shift left, smart pairing of chiplets, and overall power reduction. New developments like identical source files for all setups across design, characterization, and test help speed the critical debug and development stage for new products.

Reference

  1. D. Sculley, et al., “Hidden Technical Debt in Machine Learning Systems,” NeurIPS 2015. https://proceedings.neurips.cc/paper_files/paper/2015/file/86df7dcfd896fcaf2674f757a2463eba-Paper.pdf


Making Adaptive Test Work Better

June 10, 2024, 09:15

One of the big challenges for IC test is making sense of mountains of data, a direct result of more features being packed onto a single die, or multiple chiplets being assembled into an advanced package. Collecting all that data through various agents and building models on the tester no longer makes sense for a couple of reasons: there is too much data, and there are multiple customers using the same equipment. Steve Zamek, director of product management at PDF Solutions, and Eli Roth, product manager at Teradyne, explain how to optimize testing around different data sources, how to partition that data between the edge and the cloud, and how to ensure it remains secure.
