Achieving Zero Defect Manufacturing Part 2: Finding Defect Sources

6 August 2024, 09:07

Semiconductor manufacturing creates a wealth of data – from materials, products, factory subsystems and equipment. But how do we best utilize that information to optimize processes and reach the goal of zero defect manufacturing?

This is a topic we first explored in our previous blog, “Achieving Zero Defect Manufacturing Part 1: Detect & Classify.” In it, we examined real-time defect classification at the defect, die and wafer level. In this blog, the second in our three-part series, we will discuss how to use root cause analysis to determine the source of defects. For starters, we will address the software tools needed to properly conduct root cause analysis for a faster understanding of visual, non-visual and latent defect sources.

About software

The software platform fabs choose impacts how well users are able to integrate data, conduct database analytics and perform server-side and real-time analytics. Manufacturers want the ability to choose a platform that can scale by data volume, type and multisite integration. In addition, all of this data – whether it is coming from metrology, inspection or testing – must be normalized before fabs can apply predictive modeling and machine learning based analytics to find the root cause of defects and failures. This search, however, goes beyond a simple examination of process steps and tools; manufacturers also need a clear understanding of each device’s genealogy. In addition, fabs should employ an AI-based yield optimizer capable of running multiple models and offering potential optimization measures that can be taken in the factory to improve the process.

Now that we have discussed software needs, we will turn our attention to two use cases to further our examination of root cause analysis in zero defect manufacturing.

Root Cause Case No. 1

The first root cause value case we would like to discuss involves the integration of wafer probe, photoluminescence and epitaxial (epi) data. Previously, integrating these three kinds of data was not possible because the identifiers for wafers and lots – pre- and post-epi – were generally not linked. Wafers and lots were often identified by entirely different names before and after the epi step. For obvious reasons, this was a huge hindrance to advancing the goal of zero defect manufacturing: the impact of the epi process on yield was not detected in a timely manner, resulting in higher defectivity and yield loss.

But the challenge is not as simple as identification and naming practices. Typical wafer ID trackers are not applied prior to the post-epi step because of technical and logistical constraints. The solution is for fabs to employ defect and yield analytics software with a genealogy capability that links data from the epi and pre-epi processes to post-epi processes. The real innovation occurs when the genealogical information is normalized and interpolated with electrical test data. Once integrated, this data offers users a more complete understanding of where yield-limiting events are occurring.

Fig. 1: Photoluminescence map (left) and electrical test performance by epi tool (right).

For example, let us consider the following scenario: figure 1 (left) shows a group of underperforming dies at the upper left edge of the wafer. Through more traditional measures, this pocket of defectivity may have gone unnoticed, allowing bad die to move forward in the process. But by combining the integrated data, genealogical information and electrical test data, this trouble-plagued area was traced down to the epi tool and chamber (figure 1, right), and the defective material was prevented from going forward in the process. Just as significant, with the right software platform this approach enables root cause analysis to be conducted in minutes, not days.
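
To make the genealogy mechanics concrete, here is a minimal Python sketch of the kind of linking described above, using pandas and hypothetical wafer IDs and column names (real platforms normalize these IDs from MES and equipment logs):

```python
import pandas as pd

# Pre-epi lots/wafers are renamed at the epi step; a genealogy table
# records the mapping between the two naming domains.
genealogy = pd.DataFrame({
    "pre_epi_wafer_id":  ["A01-W01", "A01-W02"],
    "post_epi_wafer_id": ["B77-W01", "B77-W02"],
})

# Which epi tool and chamber processed each wafer (pre-epi naming).
epi_history = pd.DataFrame({
    "pre_epi_wafer_id": ["A01-W01", "A01-W02"],
    "epi_tool":    ["EPI-03", "EPI-03"],
    "epi_chamber": ["CH-B", "CH-A"],
})

# Electrical test results (post-epi naming).
etest = pd.DataFrame({
    "post_epi_wafer_id": ["B77-W01", "B77-W02"],
    "yield_pct": [71.2, 93.5],
})

# Link the two naming domains, then summarize yield by epi context.
linked = (etest
          .merge(genealogy, on="post_epi_wafer_id")
          .merge(epi_history, on="pre_epi_wafer_id"))
print(linked.groupby(["epi_tool", "epi_chamber"])["yield_pct"].mean())
```

Once the join exists, a low-yield pocket like the one in figure 1 collapses to a single epi tool/chamber combination in the grouped output.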

Now, onto the second use case in which we look at how to problem solve within the supply chain.

Root Cause Case No. 2

During final test and measurement, chips sometimes fail. In many cases, the faulty chips had previously been judged good and were advanced in the process, where they were combined with other chips coming from different products, lots, or wafers. The important thing here is to understand why this happens.

When there is a genealogy model in a yield software platform, fabs are able to pick the lots and wafers the bad chips came from and run this information through pattern analysis software. In one particular scenario (figure 2), users applied pattern analysis software to discover that all of the defective die arose from a spin coater issue – in this case, a leak that negatively impacted the underbump metallization area following typical preventive maintenance measures.

To compensate for this, the team used integrated analytics to create a fault detection and classification (FDC) model to identify similar circumstances going forward. In this case, the FDC model monitors the suction power of the spin coater. If suction power stays above the set limit for more than 10 consecutive samples, alarms are triggered and an appropriate Out of Control Action Plan (OCAP) process is executed, which includes notifying the tool owner.

Fig. 2: Proactive zero defect manufacturing at-a-glance.
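
To illustrate the monitoring rule, here is a minimal Python sketch of the consecutive-sample check described above; the limit value, window default, and sample stream are hypothetical, and a production FDC system would tie the alarm into the OCAP workflow rather than print a message:

```python
from typing import Iterable

LIMIT = 4.2    # hypothetical suction-power limit (arbitrary units)
WINDOW = 10    # alarm after more than this many consecutive exceedances

def fdc_alarm(samples: Iterable[float], limit: float = LIMIT,
              window: int = WINDOW) -> bool:
    """Return True once more than `window` consecutive samples exceed `limit`."""
    run = 0
    for value in samples:
        run = run + 1 if value > limit else 0
        if run > window:
            return True  # trigger OCAP: notify tool owner, hold the tool
    return False

# Example: a stream of suction-power readings from the spin coater.
readings = [4.0, 4.1] + [4.5] * 11 + [4.0]
if fdc_alarm(readings):
    print("FDC alarm: spin coater suction above limit; executing OCAP")
```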

The above explains how fabs are able to turn reactive root cause analytics into proactive monitoring. With such an approach, manufacturers can monitor for this and other issues and avoid advancing future defective die. Furthermore, 40 or more distinct defect signatures can be monitored inline. And should these defects be missed at the process level, they can be identified at the inspection level or post-inspection, avoiding hundreds of issues further along in the process.

Conclusion

Zero defect manufacturing is not so much of a goal as it is a commitment to root out defects before they happen. To accomplish this, fabs need a wealth of data from the entire process to achieve a clear picture of what is going wrong, where it is going wrong and why it is going wrong. In this blog, we offered specific scenarios where root cause analysis was used to find defects across wafers and dies. However, these are just a few examples of how software can be used to find difficult-to-find defects. It can be beneficial in many different areas across the entire process, with each application further strengthening a fab’s efforts to employ a zero defect manufacturing approach, increasing yield and meeting the stringent requirements of some of the industry’s most advanced customers.

In our next blog, we will discuss how to detect dormant defects, use feedback and feedforward measures, and monitor the health of process control equipment. We hope you join us as we continue to explore methods for achieving zero defect manufacturing.

The post Achieving Zero Defect Manufacturing Part 2: Finding Defect Sources appeared first on Semiconductor Engineering.

Are You Ready For HBM4? A Silicon Lifecycle Management (SLM) Perspective

Many factors are driving system-on-chip (SoC) developers to adopt multi-die technology, in which multiple dies are stacked in a three-dimensional (3D) configuration. Multi-die systems may make power and thermal issues more complex, and they have required major innovations in electronic design automation (EDA) implementation and test tools. These challenges are more than offset by the advantages over traditional 2D designs, including:

  • Reducing overall area
  • Achieving much higher pin densities
  • Reusing existing proven dies
  • Mixing heterogeneous die technologies
  • Quickly creating derivative designs for new applications

One of the most common uses of multi-die design is the interconnection of memory stacks and processors such as CPUs and GPUs. High Bandwidth Memory (HBM) is a standard interface specifically for 3D-stacked DRAM dies. It was defined by the JEDEC Solid State Technology Association in 2013, followed by HBM2 in 2016 and HBM3 in 2022. Many multi-die projects have used this standard for caches in advanced CPUs and other system-on-chip (SoC) designs used in high-end applications such as data centers, high-performance computing (HPC), and artificial intelligence (AI) processing.

Fig. 1: Example of a current HBM-based SoC.

JEDEC recently announced that it is nearing completion of HBM4 and published preliminary specifications. HBM4 has been developed to enhance data processing rates while delivering higher bandwidth, lower power consumption, and increased capacity per die/stack. The initial agreement calls for speed bins up to 6.4 Gbps, although this will increase as memory vendors develop new chips and refine the technology. This speed will benefit applications that require efficient handling of large datasets and complex calculations.

HBM4 introduces a doubled channel count per stack compared with HBM3. The new version of the standard features a 2048-bit memory interface, as compared to 1024 bits in previous versions, as shown in figure 1. The intent is to double the number of bits without increasing the footprint of HBM memory stacks, thus doubling the interconnection density as well.

Different memory configurations will require various interposers to accommodate the differing footprints. HBM4 will specify 24 Gb and 32 Gb layers, with options for supporting 4-high, 8-high, 12-high and 16-high TSV stacks. As an example configuration, a 16-high stack based on 32 Gb layers will offer a capacity of 64 GB, which means that a processor with four memory modules can support 256 GB of memory with a peak bandwidth of 6.56 TB/s using an 8,192-bit interface.
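
A quick back-of-the-envelope check of those figures in Python (the 6.4 Gbps per-pin rate is the preliminary speed bin quoted above; the text's 6.56 TB/s is this value rounded):

```python
layers_per_stack   = 16          # 16-high TSV stack
layer_density_gbit = 32          # 32 Gb per DRAM layer
stack_capacity_GB  = layers_per_stack * layer_density_gbit / 8   # = 64 GB

modules            = 4
total_capacity_GB  = modules * stack_capacity_GB                 # = 256 GB

interface_bits = modules * 2048                                  # = 8,192 bits
pin_rate_gbps  = 6.4
peak_bw_TBps   = interface_bits * pin_rate_gbps / 8 / 1000       # ~= 6.55 TB/s

print(stack_capacity_GB, total_capacity_GB, round(peak_bw_TBps, 2))
```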

The move from HBM3 to HBM4 will require further evolution in multi-die support across a wide range of EDA tools. The 2048-bit memory interface requires a significant increase in the number of through-silicon vias (TSVs) routed through a memory stack. This will mean shrinking the external bump pitch as the total number of micro bumps increases significantly. In addition, support for 16-high TSV stacks brings new complexity in wiring up an even larger number of DRAM dies without defects.

Test challenges are likely to be a dominant part of the transition. Any signal integrity issues after assembly and multi-die packaging become more difficult to diagnose and debug since probing is not feasible. Further, some defects may marginally pass production/manufacturing test but subsequently fail in the field. Thus, test of the future HBM4-based subsystem needs to be accomplished not just at production test but also in-system to account for aging-related defects.

Being able to monitor real-time data during mission mode operation in the field is greatly preferable to having to take the system offline for unplanned service. This “predictive maintenance” allows the end user to be proactive rather than reactive. HBM provides capabilities for in-system repair, for example swapping out a bad lane. Even if a defect requires physical hardware repair, detecting it before system failure enables scheduled maintenance rather than unplanned downtime.

As shown in figure 1, HBM systems typically have a base die that includes an HBM controller, a basic/fixed test engine provided by the DRAM vendor, and Direct Access (DA) ports. The new industry trend is for the base die to be manufactured on a standard logic process rather than the DRAM process. The SoC designer should include in the base die a flexible built-in self-test (BIST) engine that allows different algorithms to be used to trade off high coverage versus test time depending on the scenario.

This engine must be programmable to handle different latencies, address ranges, and timing of test operations that vary across DRAM vendors. It may also need to support post-package repair (PPR) for HBM DRAM to delay any “truck roll-out” for in-field service. The diagnostics performed by the BIST engine must be precise, showing the failing bank, row address, column address, and so on whenever a defect is detected in the DRAM stack. Figure 2 shows an example.

Fig. 2: Example fault diagnosis for HBM stack.
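
As an illustration of what such a diagnostic report might contain, here is a minimal Python sketch; the field names are illustrative only and do not represent Synopsys SMS ext-RAM's actual interface:

```python
from dataclasses import dataclass

@dataclass
class HbmFaultRecord:
    stack_id: int      # which HBM stack in the package
    die_layer: int     # DRAM layer within the stack (0 = bottom)
    channel: int       # HBM channel on which the failure was observed
    bank: int          # failing bank
    row: int           # failing row address
    col: int           # failing column address
    fault_type: str    # e.g., "stuck-at-0", "read destructive"

    def row_repairable(self, spare_rows_left: int) -> bool:
        """Row-oriented post-package repair needs a spare row available."""
        return spare_rows_left > 0

fault = HbmFaultRecord(stack_id=0, die_layer=3, channel=5, bank=2,
                       row=0x1A3F, col=0x076, fault_type="stuck-at-0")
print(fault, "PPR possible:", fault.row_repairable(spare_rows_left=2))
```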

As an industry leader in multi-die EDA and IP solutions, Synopsys provides all the technology needed for HBM manufacturing yield optimization and in-field silicon health monitoring. Signal Integrity Monitors (SIMs) are embedded in physical layer (PHY) IP blocks for on-demand signal quality measurement for interconnects. This allows users to create 1D eye diagrams for interconnect signals during both production test and in-field operation. SIMs measure timing margins, enable HBM lane test/repair, and mitigate against silent data corruption (SDC), part of an effective silicon lifecycle management (SLM) solution.

Synopsys SMS ext-RAM is a programmable and synthesizable engine that performs test, repair, and diagnostics for memory systems, including HBM. SMS ext-RAM ensures high test coverage and supports power-on self-test (POST) with the flexibility to run custom memory algorithms in-field. As shown in figure 2, it detects a wide range of defects in memory dies, including stuck-at faults, read destructive faults, write destructive faults, deceptive read destructive faults, and row hammering.

A real-world case study of a project using HBM with the Synopsys solutions is available. These solutions are scaling to support the emerging HBM4 standard, ensuring continued success.

The post Are You Ready For HBM4? A Silicon Lifecycle Management (SLM) Perspective appeared first on Semiconductor Engineering.

Metrology And Inspection For The Chiplet Era

6 August 2024, 09:03

New developments and innovations in metrology and inspection will enable chipmakers to identify and address defects faster and with greater accuracy than ever before, all of which will be required at future process nodes and in densely-packed assemblies of chiplets.

These advances will affect both front-end and back-end processes, providing increased precision and efficiency, combined with artificial intelligence/machine learning and big data analytics. These kinds of improvements will be crucial for meeting the industry’s changing needs, enabling deeper insights and more accurate measurements at rates suitable for high-volume manufacturing. But gaps still need to be filled, and new ones are likely to show up as new nodes and processes are rolled out.

“As semiconductor devices become more complex, the demand for high-resolution, high-accuracy metrology tools increases,” says Brad Perkins, product line manager at Nordson Test & Inspection. “We need new tools and techniques that can keep up with shrinking geometries and more intricate designs.”

The shift to high-NA EUV lithography (0.55 NA EUV) at the 2nm node and beyond is expected to exacerbate stochastic variability, demanding more robust metrology solutions on the front end. Traditional critical dimension (CD) measurements alone are insufficient for the level of analysis required. Comprehensive metrics, including line-edge roughness (LER), line-width roughness (LWR), local edge-placement error (LEPE), and local CD uniformity (LCDU), alongside CD measurements, are necessary for ensuring the integrity and performance of advanced semiconductor devices. These metrics require sophisticated tools that can capture and analyze tiny variations at the nanometer scale, where even slight discrepancies can significantly impact device functionality and yield.
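
As a rough illustration of how these metrics are commonly quantified (3-sigma statistics over measured edge positions), here is a Python sketch using synthetic CD-SEM-style data; the exact definitions and filtering vary by tool vendor and standard:

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_points = 25, 200   # nominally identical lines, samples per line

# Hypothetical edge positions (nm) along each line; nominal CD is 12 nm.
left  = rng.normal(0.0,  0.8, (n_features, n_points))
right = rng.normal(12.0, 0.8, (n_features, n_points))

cd   = right - left                    # local CD along each line
ler  = 3 * left.std(axis=1).mean()     # mean 3-sigma line-edge roughness
lwr  = 3 * cd.std(axis=1).mean()       # mean 3-sigma line-width roughness
lcdu = 3 * cd.mean(axis=1).std()       # 3-sigma of per-feature mean CD

print(f"LER={ler:.2f} nm  LWR={lwr:.2f} nm  LCDU={lcdu:.2f} nm")
```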

“Metrology is now at the forefront of yield, especially considering the current demands for DRAM and HBM,” says Hamed Sadeghian, president and CEO of Nearfield Instruments. “The next generations of HBMs are approaching a stage where hybrid bonding will be essential due to the increasing stack thickness. Hybrid bonding requires high resolutions in vertical directions to ensure all pads, and the surface height versus the dielectric, remain within nanometer-scale process windows. Consequently, the tools used must be one order of magnitude more precise.”

To address these challenges, companies are developing hybrid metrology systems that combine various measurement techniques for a comprehensive data set. Integrating scatterometry, electron microscopy, and/or atomic force microscopy allows for more thorough analysis of critical features. Moreover, AI and ML algorithms enhance the predictive capabilities of these tools, enabling process adjustments.

“Our customers who are pushing into more advanced technology nodes are desperate to understand what’s driving their yield,” says Ronald Chaffee, senior director of applications engineering at NI/Emerson Test & Measurement. “They may not know what all the issues are, but they are gathering all possible data — metrology, AEOI, and any measurable parameters — and seeking correlations.”

Traditional methods for defect detection, pattern recognition, and quality control typically used spatial pattern-recognition modules and wafer image-based algorithms to address wafer-level issues. “However, we need to advance beyond these techniques,” says Prasad Bachiraju, senior director of business development at Onto Innovation. “Our observations show that about 20% of wafers have systematic issues that can limit yield, with nearly 4% being new additions. There is a pressing need for advanced metrology for in-line monitoring to achieve zero-defect manufacturing.”

Several companies recently announced metrology innovations to provide more precise inspections, particularly for difficult-to-see areas, edge effects, and highly reflective surfaces.

Nordson unveiled its AMI SpinSAM acoustic rotary scan system. The system represents a significant departure from traditional raster scan methods, utilizing a rotational scanning approach. Rather than moving the wafer in an x,y pattern relative to a stationary lens, the wafer spins, similar to a record player. This reduces motion over the wafer and increases inspection speed, negating the need for image stitching and improving image quality.

“For years, we’d been trying to figure out this technique, and it’s gratifying to finally achieve it. It’s something we’ve always thought would be incredibly beneficial,” says Perkins. “The SpinSAM is designed primarily to enhance inspection speed and efficiency, addressing the common industry demand for more product throughput and better edge inspection capabilities.”

Meanwhile, Nearfield Instruments introduced a multi-head atomic force microscopy (AFM) system called QUADRA. It is a high-throughput, non-destructive metrology tool for HVM that features a novel multi-miniaturized AFM head architecture. Nearfield claims the parallel independent multi-head scanner can deliver a 100-fold throughput advantage versus conventional single-probe AFM tools. This architecture allows for precise measurements of high-aspect-ratio structures and complex 3D features, critical for advanced memory (3D NAND, DRAM, HBM) and logic processes.


Fig. 1: Image capture comparison of standard AFM and multi-head AFM. Source: Nearfield Instruments

In April, Onto Innovation debuted an advancement in subsurface defect inspection technology with the release of its Dragonfly G3 inspection system. The new system allows for 100% wafer inspection, targeting subsurface defects that can cause yield losses, such as micro-cracks and other hidden flaws that may lead to entire wafers breaking during subsequent processing steps. The Dragonfly G3 utilizes novel infrared (IR) technology combined with specially designed algorithms to detect these defects, which previously were undetectable in a production environment. This new capability supports HBM, advanced logic, and various specialty segments, and aims to improve final yield and cost savings by reducing scrapped wafers and die stacks.

More recently, researchers at the Paul Scherrer Institute announced a high-performance X-ray tomography technique using burst ptychography. This new method can provide non-destructive, detailed views of nanostructures as small as 4nm in materials like silicon and metals at a fast acquisition rate of 14,000 resolution elements per second. The tomographic back-propagation reconstruction allows imaging of samples up to ten times larger than the conventional depth of field.

There are other technologies and techniques for improving metrology in semiconductor manufacturing, as well, including wafer-level ultrasonic inspection, which involves flipping the wafer to inspect from the other side. New acoustic microscopy techniques, such as scanning acoustic microscopy (SAM) and time-of-flight acoustic microscopy (TOF-AM), enable the detection and characterization of very small defects, such as voids, delaminations, and cracks within thin films and interfaces.

“We used to look at 80 to 100 micron resist films, but with 3D integrated packaging, we’re now dealing with films that are 160 to 240 microns—very thick resist films,” says Christopher Claypool, senior application scientist at Bruker OCD. “In TSVs and microbumps, the dominant technique today is white light interferometry, which provides profile information. While it has some advantages, its throughput is slow, and it’s a focus-based technique. This limitation makes it difficult to measure TSV structures smaller than four or five microns in diameter.”

Acoustic metrology tools equipped with the newest generation of focal length transducers (FLTs) can focus acoustic waves with precision down to a few nanometers, allowing for non-destructive detailed inspection of edge defects and critical stress points. This capability is particularly useful for identifying small-scale defects that might be missed by other inspection methods.

The development and integration of smart sensors in metrology equipment is instrumental in collecting the vast amounts of data needed for precise measurement and quality control. These sensors are highly sensitive and capable of operating under various environmental conditions, ensuring consistent performance. One significant advantage of smart sensors is their ability to facilitate predictive maintenance. By continuously monitoring the health and performance of metrology equipment, these sensors can predict potential failures and schedule maintenance before significant downtime occurs. This capability enhances the reliability of the equipment, reduces maintenance costs, and improves overall operational efficiency.
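
A toy Python sketch of that predictive idea: fit the drift in a sensor health metric and project when it will cross its alarm limit, so service can be scheduled ahead of time (the metric, data, and limit are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
days = np.arange(30)                                  # last 30 days of readings
vibration = 1.0 + 0.02 * days + rng.normal(0, 0.01, days.size)
LIMIT = 1.9                                           # alarm threshold

slope, intercept = np.polyfit(days, vibration, 1)     # linear drift model
day_of_crossing = (LIMIT - intercept) / slope
print(f"Projected limit crossing ~{day_of_crossing - days[-1]:.0f} days out; "
      "schedule maintenance before then")
```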

Smart sensors also are being developed to integrate seamlessly with metrology systems, offering real-time data collection and analysis. These sensors can monitor various parameters throughout the manufacturing process, providing continuous feedback and enabling quick adjustments to prevent defects. Smart sensors, combined with big data platforms and advanced data analytics, allow for more efficient and accurate defect detection and classification.

Critical stress points

A persistent challenge in semiconductor metrology is the identification and inspection of defects at critical stress points, particularly at the silicon edges. For bonded wafers, it’s at the outer ring of the wafer. For chip-on-wafer packaging, it’s at the edge of the chips. These edge defects are particularly problematic because they occur at the highest stress points from the neutral axis, making them more prone to failures. As semiconductor devices continue to involve more intricate packaging techniques, such as chip-on-wafer and wafer-level packaging, the focus on edge inspection becomes even more critical.

“When defects happen in a factory, you need imaging that can detect and classify them,” says Onto’s Bachiraju. “Then you need to find the root causes of where they’re coming from, and for that you need the entire data integration and a big data platform to help with faster analysis.”

Another significant challenge in semiconductor metrology is ensuring the reliability of known good die (KGD), especially as advanced packaging techniques and chiplets become more prevalent. Ensuring that every chip/chiplet in a stacked die configuration is of high quality is essential for maintaining yield and performance, but the speed of metrology processes is a constant concern. This leads to a balancing act between thoroughness and efficiency. The industry continuously seeks to develop faster machines that can handle the increasing volume and complexity of inspections without compromising accuracy. In this race, innovations in data processing and analysis are key to achieving quicker results.

“Customers would like, generally, 100% inspection for a lot of those processes because of the known good die, but it’s cost-prohibitive because the machines just can’t run fast enough,” says Nordson’s Perkins.

Metrology and Industry 4.0

Industry 4.0 — a term introduced in Germany in 2011 for the fourth industrial revolution, and called smart manufacturing in the U.S. — emphasizes the integration of digital technologies such as the Internet of Things, artificial intelligence, and big data analytics into manufacturing processes. Unlike past revolutions driven by mechanization, electrification, and computerization, Industry 4.0 focuses on connectivity, data, and automation to enhance manufacturing capabilities and efficiency.

“The better the data integration is, the more efficient the yield ramp,” says Dieter Rathei, CEO of DR Yield. “It’s essential to integrate all available data into the system for effective monitoring and analysis.”

In semiconductor manufacturing, this shift toward Industry 4.0 is particularly transformative, driven by the increasing complexity of semiconductor devices and the demand for higher precision and yield. Traditional metrology methods, heavily reliant on manual processes and limited automation, are evolving into highly interconnected systems that enable real-time data sharing and decision-making across the entire production chain.

“There haven’t been many tools to consolidate different data types into a single platform,” says NI’s Chaffee. “Historically, yield management systems focused on testing, while FDC or process systems concentrated on the process itself, without correlating the two. As manufacturers push into the 5, 3, and 2nm spaces, they’re discovering that defect density alone isn’t the sole governing factor. Process control is also crucial. By integrating all data, even the most complex correlations that a human might miss can be identified by AI and ML. The goal is to use machine learning to detect patterns or connections that could help control and optimize the manufacturing process.”
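
As a toy illustration of that kind of cross-domain correlation hunting, the sketch below joins per-wafer process (FDC) summaries with test yield and ranks parameters by correlation; the column names and values are hypothetical, and a real system would use far more data and models beyond simple correlation:

```python
import pandas as pd

fdc = pd.DataFrame({                      # per-wafer process summaries
    "wafer_id": ["W1", "W2", "W3", "W4"],
    "chamber_temp_sd": [0.2, 0.9, 0.3, 1.1],
    "rf_power_mean":   [501, 498, 502, 495],
})
test = pd.DataFrame({                     # per-wafer test outcome
    "wafer_id": ["W1", "W2", "W3", "W4"],
    "yield_pct": [95.1, 88.4, 94.7, 85.9],
})

merged = fdc.merge(test, on="wafer_id")
corr = merged.drop(columns="wafer_id").corr()["yield_pct"].drop("yield_pct")
print(corr.sort_values())                 # strongest negative drivers first
```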

IoT forms the backbone of Industry 4.0 by connecting various devices, sensors, and systems within the manufacturing environment. In semiconductor manufacturing, IoT enables seamless communication between metrology tools, production equipment, and factory management systems. This interconnected network facilitates real-time monitoring and control of manufacturing processes, allowing for immediate adjustments and optimization.

“You need to integrate information from various sources, including sensors, metrology tools, and test structures, to build predictive models that enhance process control and yield improvement,” says Michael Yu, vice president of advanced solutions at PDF Solutions. “This holistic approach allows you to identify patterns and correlations that were previously undetectable.”

AI and ML are pivotal in processing and analyzing the vast amounts of data generated in a smart factory. These technologies can identify patterns, predict equipment failures, and optimize process parameters with a level of precision and speed unattainable by human operators alone. In semiconductor manufacturing, AI-driven analytics enhance process control, improve yield rates, and reduce downtime. “One of the major trends we see is the integration of artificial intelligence and machine learning into metrology tools,” says Perkins. “This helps in making sense of the vast amounts of data generated and enables more accurate and efficient measurements.”

AI’s role extends further as it assists in discovering anomalies within the production process that might have gone unnoticed with traditional methods. AI algorithms integrated into metrology systems can dynamically adjust processes in real-time, ensuring that deviations are corrected before they affect the end yield. This incorporation of AI minimizes defect rates and enhances overall production quality.

“Our experience has shown that in the past 20 years, machine learning and AI algorithms have been critical for automatic data classification and die classification,” says Bachiraju. “This has significantly improved the efficiency and accuracy of our metrology tools.”

Big data analytics complements AI/ML by providing the infrastructure necessary to handle and interpret massive datasets. In semiconductor manufacturing, big data analytics enables the extraction of actionable insights from data generated by IoT devices and production systems. This capability is crucial for predictive maintenance, quality control, and continuous process improvement.

“With big data, we can identify patterns and correlations that were previously impossible to detect, leading to better process control and yield improvement,” says Perkins.

Big data analytics also helps in understanding the lifecycle of semiconductor devices from production to field deployment. By analyzing product performance data over time, manufacturers can predict potential failures and enhance product designs, increasing reliability and lifecycle management.

“In the next decade, we see a lot of opportunities for AI,” says DR Yield’s Rathei. “The foundation for these advancements is the availability of comprehensive data. AI models need extensive data for training. Once all the data is available, we can experiment with different models and ideas. The ingenuity of engineers, combined with new tools, will drive exponential progress in this field.”

Metrology gaps remain

Despite recent advancements in metrology, analytics, and AI/ML, several gaps remain, particularly in the context of high-volume manufacturing (HVM) and next-generation devices. The U.S. Commerce Department’s CHIPS R&D Metrology Program, along with industry stakeholders, has highlighted seven “grand challenges,” areas where current metrology capabilities fall short:

Metrology for materials purity and properties: There is a critical need for new measurements and standards to ensure the purity and physical properties of materials used in semiconductor manufacturing. Current techniques lack the sensitivity and throughput required to detect particles and contaminants throughout the supply chain.

Advanced metrology for future manufacturing: Next-generation semiconductor devices, such as gate-all-around (GAA) FETs and complementary FETs (CFETs), require breakthroughs in both physical and computational metrology. Existing tools are not yet capable of providing the resolution, sensitivity, and accuracy needed to characterize the intricate features and complex structures of these devices. This includes non-destructive techniques for characterizing defects and impurities at the nanoscale.

“There is a secondary challenge with some of the equipment in metrology, which often involves sampling data from single points on a wafer, much like heat test data that only covers specific sites,” says Chaffee. “To be meaningful, we need to move beyond sampling methods and find creative ways to gather information from every wafer, integrating it into a model. This involves building a knowledge base that can help in detecting patterns and correlations, which humans alone might miss. The key is to leverage AI and machine learning to identify these correlations and make sense of them, especially as we push into the 5, 3, and 2nm spaces. This process is iterative and requires a holistic approach, encompassing various data points and correlating them to understand the physical boundaries and the impact on the final product.”

Metrology for advanced packaging: The integration of sophisticated components and novel materials in advanced packaging technologies presents significant metrology challenges. There is a need for rapid, in-situ measurements to verify interfaces, subsurface interconnects, and internal 3D structures. Current methods do not adequately address issues such as warpage, voids, substrate yield, and adhesion, which are critical for the reliability and performance of advanced packages.

Modeling and simulating semiconductor materials, designs, and components: Modeling and simulating semiconductor materials, designs, and components requires advanced computational models and data analysis tools. Current capabilities are limited in their ability to seamlessly integrate the entire semiconductor value chain, from materials inputs to system assembly.

“Predictive analytics is particularly important,” says Chaffee. “They aim to determine the probability of any given die on a wafer being the best yielding or presenting issues. By integrating various data points and running different scenarios, they can identify and understand how specific equipment combinations, sequences and processes enhance yields.”

Modeling and simulating semiconductor processes: There is a need for standards and validation tools to support digital twins and other advanced simulation techniques that can optimize process development and control.

“Part of the problem comes from the back-end packaging and assembly process, but another part of the problem can originate from the quality of the wafer itself, which is determined during the front-end process,” says PDF’s Yu. “An effective ML model needs to incorporate both front-end and back-end information, including data from equipment sensors, metrology, and structured test information, to make accurate predictions and take proactive actions to correct the process.”

Standardizing new materials and processes: The development of future information and communication technologies hinges on the creation of new standards and validation methods. Current reference materials and calibration services do not meet the requirements for next-generation materials and processes, such as those used in advanced packaging and heterogeneous integration. This gap hampers the industry’s ability to innovate and maintain competitive production capabilities.

Metrology to enhance security and provenance of components and products: With the increasing complexity of the semiconductor supply chain, there is a need for metrology solutions that can ensure the security and provenance of components and products. This involves developing methods to trace materials and processes throughout the manufacturing lifecycle to prevent counterfeiting and ensure compliance with regulatory standards.

“The focus on security and sharing changes the supplier relationship into more of a partnership and less of a confrontation,” says Chaffee. “Historically, there’s always been a concern of data flowing across that boundary. People are very protective about their process, and other people are very protective about their product. But once you start pushing into the deep sub-micron space, those barriers have to come down. The die are too expensive for them not to communicate, but they can still do so while protecting their IP. Companies are starting to realize that by sharing parametric test information securely, they can achieve better yield management and process optimization without compromising their intellectual property.”

Conclusion

Advancements in metrology and testing are pivotal for the semiconductor industry’s continued growth and innovation. The integration of AI/ML, IoT, and big data analytics is transforming how manufacturers approach process control and yield improvement. As adoption of Industry 4.0 grows, the role of metrology will become even more critical in ensuring the efficiency, quality, and reliability of semiconductor devices. And by leveraging these advanced technologies, semiconductor manufacturers can achieve higher yields, reduce costs, and maintain the precision required in this competitive industry.

With continuous improvements and the integration of smart technologies, the semiconductor industry will keep pushing the boundaries of innovation, leading to more robust and capable electronic devices that define the future of technology. The journey toward a fully realized Industry 4.0 is ongoing, and its impact on semiconductor manufacturing undoubtedly will shape the future of the industry, ensuring it stays at the forefront of global technological advancements.

“Anytime you have new packaging technologies and process technologies that are evolving, you have a need for metrology,” says Perkins. “When you are ramping up new processes and need to make continuous improvements for yield, that is when you see the biggest need for new metrology solutions.”

The post Metrology And Inspection For The Chiplet Era appeared first on Semiconductor Engineering.

Semiconductor Shifts In Automotive: Impact Of EV And ADAS Trends

6 August 2024, 09:03

The integration of advanced driver assistance systems (ADAS) and the transition towards electric vehicles (EVs) are significantly transforming the automotive industry.

Modern vehicles, essentially computers on wheels, require substantially more semiconductors. In response, carmakers are forming stronger partnerships with semiconductor vendors – some are taking a page from tech giants like Apple and Samsung by designing their own chips, often following a fabless or outsourced production model.

While a deeper connection with semiconductor design helps automakers maintain design control and supply chain resilience, it also imposes substantial responsibility to understand and meet stringent automotive quality standards.

The crucial role of semiconductor testing

Testing is vital to meet the automotive industry’s demands for quality, cost-efficiency, and timely market entry. As carmakers delve into semiconductor design, they face new challenges. Advanced semiconductors, more complex by nature, require thorough testing to ensure automotive-grade quality.

The industry’s push towards smaller process nodes, like 5nm and below, further amplifies these challenges, necessitating early and continuous engagement with testing resources to maintain high standards without compromising time to market.

Zero defects commitment

The automotive industry’s commitment to zero defects underscores the critical importance of quality. This commitment is based on an analysis of the costs associated with testing versus the potentially catastrophic costs of failures, such as life-threatening malfunctions, costly recalls, and market delays.

These issues can dramatically impact revenue and market position, highlighting the need for rigorous testing. The exceptional quality requirements inherent to automotive standards are set to intensify with the increasing digital complexity of vehicles.

Given that automotive chips must perform reliably over a lifespan of 10 to 20 years, comprehensive testing protocols play an essential role in identifying and rectifying defects early, optimizing both cost and quality. This fundamental aspect of semiconductor manufacturing cements the principle that quality is not just a priority, but the paramount concern.

This commitment transcends the capabilities of even the most skilled engineers, requiring systematic and integrated testing processes to ensure chip reliability and performance under diverse conditions.

Collaboration is key

Collaboration between automakers and semiconductor manufacturers is crucial, fostering an environment where issues can be identified and addressed early in the development cycle.

These partnerships are vital for maintaining momentum in the face of rapid technological advancements and ensuring that the automotive industry can meet the high standards of safety, reliability, and performance expected by consumers.

This collaborative approach helps to optimize testing processes, to maintain stringent quality standards, and to protect time-to-market goals, preventing production delays and ensuring the continuous advancement of automotive technologies.

The post Semiconductor Shifts In Automotive: Impact Of EV And ADAS Trends appeared first on Semiconductor Engineering.

Driving Cost Lower and Power Higher With GaN

6 August 2024, 09:02

Gallium nitride is starting to make broader inroads in the lower-end of the high-voltage, wide-bandgap power FET market, where silicon carbide has been the technology of choice. This shift is driven by lower costs and processes that are more compatible with bulk silicon.

Efficiency, power density (size), and cost are the three major concerns in power electronics, and GaN can meet all three criteria. However, to satisfy all of those criteria consistently, the semiconductor ecosystem needs to develop best practices for test, inspection, and metrology, determining what works best for which applications and under varying conditions.

Power ICs play an essential role in stepping up and down voltage levels from one power source to another. GaN is used extensively today in smart phone and laptop adapters, but market opportunities are beginning to widen for this technology. GaN likely will play a significant role in both data centers and automotive applications [1]. Data centers are expanding rapidly due to the focus on AI and a build-out at the edge. And automotive is keen to use GaN power ICs for inverter modules because they will be cheaper than SiC, as well as for onboard battery chargers (OBCs) and various DC-DC conversions from the battery to different applications in the vehicle.


Fig. 1: Current and future fields of interest for GaN and SiC power devices. Source: A. Meixner/Semiconductor Engineering

But to enter new markets, GaN device manufacturers need to ramp up new processes and their associated products more quickly. Because GaN for power transistors is a developing process technology, measurement data is critical for qualifying both the manufacturing process and the reliability of the new semiconductor technology and the resulting products.

Much of GaN’s success will depend on metrology and inspection solutions that offer high throughput, as well as non-destructive testing methods such as optical and X-ray. Electron microscopy is useful for drilling down into key device parameters and defect mechanisms. And electrical tests provide complementary data that assists with product/process validation, reliability and qualification, system-level validation, as well as being used for production screening.

Silicon carbide (SiC) remains the material of choice for very high-voltage applications. It offers better performance and higher efficiency than silicon. But SiC is expensive. It requires different equipment than silicon, it’s difficult to grow SiC ingots, and today there is limited wafer capacity.

In contrast, GaN offers some of the same desirable characteristics as SiC and can operate at even higher switching speeds. GaN wafer production is cheaper because it can be created on a silicon substrate utilizing typical silicon processing equipment other than the GaN epitaxial deposition tool. That enables a fab/foundry with a silicon CMOS process to ramp a GaN process with an engineering team experienced in GaN.

The cost comparison isn’t entirely apples-to-apples, of course. The highest-voltage GaN on the market today uses silicon on sapphire (SoS) or other engineered substrates, which are more expensive. But below those voltages, GaN typically has a cost advantage, and that has sparked renewed interest in this technology.

“GaN-based products increase the performance envelopes relative to the incumbent and mature silicon-based technologies,” said Vineet Pancholi, senior director of test technology at Amkor. “Switching speeds with GaN enable the application in ways never possible with silicon. But as the GaN production volumes ramp, these products have extreme economic pressures. The production test list includes static attributes. However, the transient and dynamic attributes are the primary benefit of GaN in the end application.”

Others agree. “The world needs cheaper material, and GaN is easy to build,” said Frank Heidemann, vice president and technology leader of SET at NI/Emerson Test & Measurement. “Gallium nitride has had huge success in the lower voltage ranges — anything up to 500V. This is where the GaN process is very well under control. The problem now is that building higher-voltage devices is a challenge. In the near future there will be products at even higher voltage levels.”

Those higher-voltage applications require new process recipes, new power IC designs, and subsequently product/process validation and qualification.

GaN HEMT properties

Improving the processes needed to create GaN high-electron-mobility transistors (HEMTs) requires a deep understanding of the material properties and the manufacturing consequences of layering these materials.

The underlying physics and structure of wide-bandgap devices differ significantly from silicon high-voltage transistors. Silicon transistors rely on doping of p and n materials. When voltage is applied at the gate, it creates a channel for current to flow from source to drain. In contrast, wide-bandgap transistors are built by layering thin films of different materials, which differ in their bandgap energy. [2] Applying a voltage to the gate enables an electron exchange between the two materials, driving those electrons along the channel between source and drain.


Fig. 2. Cross-sectional animation of e-mode GaN HEMT device. Source: Zeiss Microscopy

“GaN devices rely on two-dimensional electron gas (2DEG) created at the GaN and AlGaN interface to conduct current at high speed,” said Jiangtao Hu, senior director of product marketing at Onto Innovation. “To enable high electron mobility, the epitaxy process creating complex multi-layer crystalline films must be carefully monitored and controlled, ensuring critical film properties such as thickness, composition, and interface roughness are within a tight spec. The ongoing trend of expanding wafer sizes further requires the measurement to be on-product and non-destructive for uniformity control.”


Fig. 3: SEM cross-section of enhancement-mode GaN HEMT built on silicon which requires a superlattice. Source: Zeiss Microscopy

Furthermore, each layer’s electrical properties need to be understood. “It is of utmost importance to determine, as early as possible in the manufacturing process, the electrical characteristics of the structures, the sheet resistance of the 2DEG, the carrier concentration, and the mobility of carriers in the channel, preferably at the wafer level in a non-destructive assessment,” said Christophe Maleville, CTO and senior executive vice president of innovation at Soitec.

Developing process recipes for GaN HEMT devices at higher operating ranges requires measurements taken during wafer manufacturing and device testing, both for qualification of a process/product and for production manufacturing. Inspection, metrology, and electrical tests focus on process anomalies and defects, which impact device performance.

“Crystal defects such as dislocations and stacking faults, which can form during deposition and subsequently be grown over and buried, can create long-term reliability concerns even if the devices pass initial testing,” said David Taraci, business development manager of electronics strategic accounts at ZEISS Research Microscopy Solutions. “Gate oxides can pinch off during deposition, creating voids which may not manifest as an issue immediately.”

The quality of the buffer layer is critical because it affects the breakdown voltage. “The maximum breakdown voltage of the devices will be ultimately limited by the breakdown of the buffer layer grown in between the Si substrate and the GaN channel,” said Soitec’s Maleville. “An electrical assessment (IV at high voltage) requires destructive measurements as well as device isolation. This is performed on a sample basis only.”

One way to raise the voltage limit of a GaN device is to add a gate driver that keeps it reliable at higher voltages. But to further expand GaN technology’s performance envelope to higher-voltage operation, engineers need to understand the new device’s reliability properties.

“We are supporting GaN lifetime validation, which is the prediction of lifetime under a mission profile for gallium nitride power devices,” said Emerson’s Heidemann. “Engineers build physics-based failure models of these devices. Next, they investigate the acceleration factors. How can we really make tests and verification properly so that we can assess lifetime health?”

The qualification procedures necessitate life-stress testing, which duplicates predicted mission-profile usage, as well as electrical testing after each life-stress period. That allows engineers to determine shifts in transistor characteristics and outright failures. For example, life-stress periods could start at 4,000 hours and increase in 1,000-hour increments to 12,000 hours, during which time the device is turned on and off with specific durations of ‘on’ time.
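
A minimal Python sketch of tracking a characteristic across that schedule (4,000 h start, 1,000-hour increments to 12,000 h); the RDSon readouts and the 20% shift limit are hypothetical, not values from the text:

```python
BASELINE_RDSON = 50e-3     # ohms, measured at time zero
SHIFT_LIMIT = 0.20         # hypothetical pass/fail drift criterion

stress_hours = range(4000, 13000, 1000)            # 4,000 .. 12,000 h
measured = [51e-3, 52e-3, 52e-3, 53e-3, 55e-3,     # hypothetical readouts
            57e-3, 59e-3, 62e-3, 66e-3]

for hours, rdson in zip(stress_hours, measured):
    shift = (rdson - BASELINE_RDSON) / BASELINE_RDSON
    status = "FAIL" if shift > SHIFT_LIMIT else "ok"
    print(f"{hours:>6} h: RDSon = {rdson*1e3:.0f} mOhm  "
          f"shift = {shift:+.0%}  {status}")
```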

“Reliability predictions are based upon application mission profiles,” said Stephanie Watts Butler, independent consultant and vice president of industry and standards in the IEEE Power Electronics Society. “In some cases, GaN is going into a new application, or being used differently than silicon, and the mission profile needs to be elucidated. This is one area that the industry is focused upon together.”

As an example of this effort, Butler pointed to JEDEC JEP186 spec [3], which provides guidelines for specifying the breakdown voltage for GaN HEMT devices. “JEDEC and IEC both are issuing guideline documents for methods for test and characterization of wide-bandgap devices, as well as reliability and qualification procedures, and datasheet parameters to enable wide bandgap devices, including GaN, to ramp faster with higher quality in the marketplace,” she said.

Electrical tests remain essential for screening both time-zero and reliability-associated defects (e.g., infant mortality and reduced lifetime). This holds true for screening wafers, singulated die, and packaged devices. Test content includes tests specific to GaN HEMT power device performance specifications, as well as tests directed more at defect detection.

Due to inherent device differences, the GaN test list varies in some significant ways from Si and SiC power ICs. Assessing GaN health for qualification and manufacturing purposes requires both static (DC) and dynamic (AC) tests. A partial list includes zero gate voltage drain leakage current, rise time, fall time, dynamic RDSon, and dielectric integrity tests.

“These are very time-intensive measurement techniques for GaN devices,” said Tom Tran, product manager for power discrete test products at Teradyne. “On top of the static measurement techniques is the concern about trapped charge — both for functionality and efficiency — revealed through dynamic RDSon testing.”
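
To show what a dynamic RDSon extraction involves, here is a minimal Python sketch: sample the drain-source voltage and current in a defined window shortly after turn-on and compare against the static value (all waveform numbers and the window placement are illustrative only):

```python
import numpy as np

t = np.linspace(0, 10e-6, 1000)      # first 10 us after turn-on
i_d = np.full_like(t, 10.0)          # amps; assume a constant load current

# Trapped charge makes on-resistance start high and relax toward static.
static_rdson = 60e-3                 # ohms
v_ds = i_d * (static_rdson + 25e-3 * np.exp(-t / 2e-6))

window = (t >= 1e-6) & (t <= 2e-6)   # measurement window after turn-on
dyn_rdson = np.mean(v_ds[window] / i_d[window])
print(f"dynamic RDSon = {dyn_rdson*1e3:.1f} mOhm "
      f"({dyn_rdson/static_rdson:.2f}x static)")
```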

Transient tests are necessary for qualification and production purposes due to the high electron mobility, which is what gives GaN HEMT its high switching speed. “From a test standpoint, static test failures indicate basic processing failures, while transient switching failures indicate marginal or process excursions,” said Amkor’s Vineet Pancholi. “Both tests continue to be important to our customers until process maturity is achieved. With the extended range of voltage, current, and switching operations, mainstream test equipment suppliers have been adding complementary instrumentation capabilities.”

And ATE suppliers look to reduce test time, which reduces cost. “Both static and dynamic test requirements drive very high test times,” said Teradyne’s Tran. “But the GaN of today is very different than GaN from a decade ago. We’re able to accelerate this testing just due to the core nature of our ATE architecture. We think there is the possibility of further reducing the cost of test for our customers.”

Tools for process control and quality management

GaN HEMT devices’ reliance on thin-film processes highlights the need to understand the material properties and the nature of the interfaces between each layer. That requires tools for process control, yield management, and failure analysis.

“GaN device performance is highly reflective of the film characteristics used in its manufacture,” said Mike McIntyre, director of software product management at Onto Innovation. “The smallest process variations when it comes to film thickness, film stress, line width or even crystalline make-up, can have a dramatic impact on how the device performs, or even if it is usable in its target market. This lack of tolerance to any variation places a greater burden on engineers to understand the factors that correlate to device performance and its profitability.”

Inspection methods that are non-destructive vary in throughput time and in the level of detail they provide for engineering decisions. While optical methods are fast and provide full wafer coverage, they cannot accurately classify chemical or structural defects for engineers/technicians to review. In contrast, destructive methods provide the information needed to truly understand the nature of the defects. For example, conductive atomic force microscopy (AFM) probing remains slow, but it can identify the electrical nature of a defect. And to truly comprehend crystallographic defects and the chemical nature of impurities, engineers can turn to electron microscopy-based methods.

One way to assess thin films is with X-rays. “High-resolution X-ray measurements are useful for production control of wafer crystalline quality and of defects in the buffer,” said Soitec’s Maleville. “Minor changes in the composition of the buffer, barrier, or capping layer, as well as in their layer thicknesses, can result in significant deviations in device performance. The thicknesses of the layers, in particular the top cap, barrier, and spacer layers, are typically measured by XRD. However, the throughput of XRD systems is low. Alternatively, ellipsometry offers a reasonably good-throughput measurement with more data points for both development and production mode scenarios.”
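As a rough illustration of how a diffraction measurement encodes layer thickness: thickness fringes spaced Δθ apart around a Bragg peak at θB correspond to t ≈ λ / (2·Δθ·cos θB). The sketch below applies that textbook relation with illustrative numbers; it is not Soitec’s production method.

```python
import math

CU_KALPHA = 1.5406e-10  # Cu K-alpha1 X-ray wavelength, meters

def thickness_from_fringes(delta_theta_deg, bragg_theta_deg,
                           wavelength=CU_KALPHA):
    """Layer thickness from the angular spacing of XRD thickness fringes:
    t = lambda / (2 * delta_theta * cos(theta_B))."""
    d_theta = math.radians(delta_theta_deg)
    theta_b = math.radians(bragg_theta_deg)
    return wavelength / (2.0 * d_theta * math.cos(theta_b))

# Illustrative numbers only: 0.05-degree fringe spacing near a
# 17.3-degree Bragg angle corresponds to a layer roughly 92 nm thick.
t = thickness_from_fringes(0.05, 17.3)
print(f"estimated thickness: {t * 1e9:.1f} nm")
```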

Optical techniques have long been the standard for thin-film assessment in the semiconductor industry, and inspection equipment providers are on a continuous-improvement journey to increase accuracy, precision, and throughput. Better metrology tools help device makers with process control and yield management.

“Recently, we successfully developed a non-destructive on product measurement capability for GaN epi process monitoring,” said Onto’s Hu. “It takes advantage of our advanced optical film experience and our modeling software to simultaneously measure multi-layer epi film thickness, composition, and interface roughness on product wafers.”


Fig. 4: Metrology measurements on GaN for roughness and for Al concentration. Source: Onto Innovation

Assessing the electrical characteristics (2DEG sheet resistance, channel carrier mobility, and carrier concentration) is required for controlling the manufacturing process. A non-destructive assessment would be an improvement over the destructive techniques (e.g., SEM) used today. The solutions used for other power ICs do not work for GaN HEMTs, and as of today no one has come up with a commercial solution.

Inspection looks for yield impacting defects, as well as defects that affect wafer acceptance in the case of companies that provide engineered substrates.

“Defect inspection for incoming silicon wafers looks for particles, scratches, and other anomalies that might seed imperfections in the subsequent buffer and crystal growth,” said Antonio Mani, business development manager at Thermo Fisher Scientific. “After the growth of the buffer and termination layers, followed by the growth of the doped GaN layers, another set of inspections is carried out. In this case, it is more focused on the detection of cracks and other macroscopic defects (micropipes, carrots), and on looking for micro-pits, which are associated with threading dislocations that have survived the buffer layer and are surfacing at the top GaN surface.”

Mani noted that follow-up inspection methods for Si and GaN devices are similar. The difference lies in the importance of connecting observations back to post-epi results.

More accurate defect libraries would shorten inspection time. “The lack of standardization of surface defect analysis impedes progress,” said Soitec’s Maleville. “Different tools are available on the market, while defect libraries are still being developed, essentially by each individual user. This lack of a globally accepted method and standard defect library for surface defect analysis is slowing down the GaN surface qualification process.”

Whether it involves a manufacturing test failure or a field return, determining root cause on a problematic packaged part begins with fault isolation. “Given the direct nature of the bandgap of GaN and its operating window in terms of voltage/frequency/power density, classical methods of fault isolation (e.g., optical emission spectroscopy) are forced to focus on different wavelengths and different ranges of excitation of the typical electrical defects,” said Thermo Fisher’s Mani. “Hot carrier pairs are just one example, which highlights the radical difference between GaN and silicon devices.”

In addition to fault isolation, there are challenges in creating a device cross-section with focused ion beam (FIB) milling methods.

“Several challenges exist in FA for GaN power ICs,” said Zeiss’ Taraci. “In any completed device, in particular, there are numerous materials and layers present for stress mitigation/relaxation and thermal management, depending on whether we are talking enhancement- or depletion-mode devices. Length-scale can be difficult to manage as you are working with these samples, because they have structures of varying dimension present in close proximity. Many of the structures are quite unique to power GaN and can pose challenges themselves in cross-section and analyses. Beam-milling approaches have to be tailored to prevent heavy re-deposition and masking, and are dependent on material, lattice orientation, current, geometry, etc.”

Conclusion
To succeed in bringing new GaN power ICs into new application spaces, engineers and their equipment suppliers need faster process development and a reduction in overall costs. For HEMT devices, that means understanding the deposited layers and their material properties. This requires a host of metrology, inspection, test, and failure analysis steps to comprehend the issues, and to feed data from experiments and qualifications back into process and design improvements.

References

[1] M. Buffolo et al., “Review and Outlook on GaN and SiC Power Devices: Industrial State-of-the-Art, Applications, and Perspectives,” in IEEE Transactions on Electron Devices, March 2024, open access, https://ieeexplore.ieee.org/document/10388225

[2] High electron mobility transistor (HEMT) https://en.wikipedia.org/wiki/High-electron-mobility_transistor

[3] Guideline to specify a transient off-state withstand voltage robustness indicated in datasheets for lateral GaN power conversion devices, JEP186, version 1.0, December 2021. https://www.jedec.org/standards-documents/docs/jep186

Related Stories

Ramping Up Power Electronics For EVs

SiC Growth For EVs Is Stressing Manufacturing

GaN ICs Wanted For Power, EV Markets

Architecting Chips For High-Performance Computing

Power Semiconductors: 2023



Leveraging AI To Efficiently Test AI Chips

By: Advantest
August 6, 2024 at 09:01

In the fast-paced world of technology, where innovation and efficiency are paramount, integrating artificial intelligence (AI) and machine learning (ML) into the semiconductor testing ecosystem has become critically important due to ongoing challenges with accuracy and reliability. AI and ML algorithms are used to identify patterns and anomalies that might not be discovered by human testers or traditional methods. By leveraging these technologies, companies can achieve higher accuracy in defect detection, ensuring that only the highest quality semiconductors reach the market.

In addition, the industry is clamoring for increased efficiency and speed, because AI-driven testing can significantly accelerate the testing process, analyzing vast amounts of data at speeds unattainable by human testers. This enables quicker turnaround times from design to production, helping companies meet market demands more effectively and stay ahead of competitors.

Firms are also heavily invested in reducing costs. While the initial investment in AI/ML technology can be expensive, the long-term savings are irrefutable. By automating routine and complex testing processes, companies can reduce labor costs and minimize human error. Equally important, AI-enhanced testing can better predict potential failures before they occur, saving costs related to recalls and repairs.

The industry is now moving to chiplet-based modules, using a “Lego-like” approach to integrate CPU, GPU, cache, I/O, high-bandwidth memory (HBM), and other functions. In the rapidly evolving world of chiplets, the DUT is a complex multichip system with the integration of many devices in a single 2.5D or 3D package. Consequently, the tester can only access a subset of individual device pins. Even so, at each test insertion, the tester must be able to extract valuable data that is then used to optimize the current test insertion as well as other design, manufacturing, and test steps. With limited pin access, the tester must infer what is happening on unobservable nodes. To best achieve this goal, it is important to extract the most value possible out of the data that can be directly collected across all manufacturing and test steps, including data from on-chip sensors. The test flow in the chiplet world already includes PSV, wafer acceptance test (WAT), wafer sort (WS), final test (FT), burn-in, and SLT, and additional test insertions to account for the increased complexity of a package with multiple chiplets are not feasible from a cost perspective. Adding to the challenge, binning goes from performance-based to application-based. In this world, the tester must stay ahead of the system – the tester must be smarter than the complex system-under-test.

The ACS RTDI platform accelerates data analytics and AI/ML decision-making.

So, for these reasons and many more, the adoption of edge compute for ML test applications is well underway. Advantest’s ACS Real-Time Data Infrastructure (ACS RTDI) platform accelerates data analytics and AI/ML decision-making within a single integrated platform. It collects, analyzes, stores, and monitors semiconductor test data as well as data sources across the IC manufacturing supply chain while employing low-latency edge computing and analytics in a secure zero-trust environment. ACS RTDI minimizes the need for human intervention, streamlining overall data utilization across multiple insertions to boost quality, yield, and operational efficiencies. It includes Advantest’s ACS Edge HPC server, which works in conjunction with its V93000 and other ATE systems to handle computationally intensive workloads adjacent to the tester’s host controller.

A reliable, secure real-time data structure that integrates data sources across the IC manufacturing supply chain.

In this configuration, the ACS Edge provides low, consistent, and predictable latency compared with a data center-hosted alternative. It supports a user execution environment independent of the tester host controller to ease development and deployment. It also provides a reliable and secure real-time data infrastructure that integrates all data sources across the entire IC manufacturing supply chain, applying analytics models that enable real-time decision-making during production test.



Controller Area Network (CAN) Overview

By: NI
August 6, 2024 at 09:01

What is CAN?

A controller area network (CAN) bus is a high-integrity serial bus system for networking intelligent devices. CAN busses and devices are common components in automotive and industrial systems. Using a CAN interface device, you can write LabVIEW applications to communicate with a CAN network.
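The example below is a minimal sketch of that idea in Python rather than LabVIEW, using the open-source python-can package with a Linux SocketCAN interface; the channel name "can0" and the message ID are assumptions for illustration.

```python
import can

# Open a CAN interface (assumes a SocketCAN channel named 'can0' on Linux).
with can.interface.Bus(channel="can0", bustype="socketcan") as bus:
    # Transmit one standard (11-bit ID) frame carrying two data bytes.
    msg = can.Message(arbitration_id=0x123, data=[0x11, 0x22],
                      is_extended_id=False)
    bus.send(msg)

    # Wait up to one second for any frame on the bus.
    reply = bus.recv(timeout=1.0)
    if reply is not None:
        print(f"ID=0x{reply.arbitration_id:X} data={reply.data.hex()}")
```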

CAN History

Bosch originally developed CAN in 1985 for in-vehicle networks. In the past, automotive manufacturers connected electronic devices in vehicles using point-to-point wiring systems. Manufacturers began using more and more electronics in vehicles, which resulted in bulky wire harnesses that were heavy and expensive. They then replaced dedicated wiring with in-vehicle networks, which reduced wiring cost, complexity, and weight. CAN, a high-integrity serial bus system for networking intelligent devices, emerged as the standard in-vehicle network. The automotive industry quickly adopted CAN and, in 1993, it became the international standard known as ISO 11898. Since 1994, several higher-level protocols have been standardized on CAN, such as CANopen and DeviceNet. Other markets have widely adopted these additional protocols, which are now standards for industrial communications. This white paper focuses on CAN as an in-vehicle network.

Read more here.

Fig. 1: CAN networks significantly reduce wiring. Source: NI.



Expediting Manufacturing Safe Launch With Big Data AI/ML Analytic Solutions On The Cloud

August 6, 2024 at 09:01

With highly competitive time-to-market and time-to-volume windows, IC suppliers need to be able to release new products to production (NPI) in a timely manner with competitive manufacturing metrics. Manufacturing yield, test time, and quality are important metrics in an NPI-to-manufacturing safe launch. A powerful yield management system is crucial to achieving the goal metrics. In this paper, recommended yield management system selection criteria, a data integration methodology, and innovative ways of using the selected yield management system to improve safe-launch efficiency are introduced. Three examples of using a cloud yield tool to expedite yield learning, test time reduction (TTR), and quality enhancement are presented.

Find more information here.



Real-World Applications Of Computational Fluid Dynamics

August 5, 2024 at 09:15

More powerful chips are enabling systems to process more data faster, but they’re also having a revolutionary impact on how that data can be used. Simulations that used to take days or weeks now can be completed in a matter of hours, and multi-physics simulations that were implausible to even consider are now very much in the realm of what is possible. Parviz Moin, professor of mechanical engineering and director of the Center for Turbulence Research at Stanford University, talks about a future filled with “what if” scenarios, more grid points to capture tiny anomalies in wind or the behavior of jet engines, and much more detailed high-fidelity numerical simulations of waves, chemical reactions, and phase changes.



AI/ML’s Role In Design And Test Expands

August 5, 2024 at 09:03

The role of AI and ML in test keeps growing, providing significant time and money savings that often exceed initial expectations. But it doesn’t work in all cases, sometimes even disrupting well-tested process flows with questionable return on investment.

One of the big attractions of AI is its ability to apply analytics to large data sets that are otherwise limited by human capabilities. In the critical design-to-test realm, AI can address problems such as tool incompatibilities between the design set-up, simulation, and ATE test program, which typically slows debugging and development efforts. Some of the most time-consuming and costly aspects of design-to-test arise from incompatibilities between tools.

“During device bring-up and debug, complex software/hardware interactions can expose the need for domain knowledge from multiple teams or stakeholders, who may not be familiar with each other’s tools,” said Richard Fanning, lead software engineer at Teradyne. “Any time spent doing conversions or debugging differences in these set-ups is time wasted. Our toolset targets this exact problem by allowing all set-ups to use the same set of source files so everyone can be sure they are running the same thing.”

ML/AI can help keep design teams on track, as well. “As we drive down this technology curve, the analytics and the compute infrastructure that we have to bring to bear becomes increasingly more complex and you want to be able to make the right decision with a minimal amount of overkill,” said Ken Butler, senior director of business development in the ACS data analytics platform group at Advantest. “In some cases, we are customizing the test solution on a die-by-die type of basis.”

But despite the hype, not all tools work well in every circumstance. “AI has some great capabilities, but it’s really just a tool,” said Ron Press, senior director of technology enablement at Siemens Digital Industries Software, in a recent presentation at a MEPTEC event. “We still need engineering innovation. So sometimes people write about how AI is going to take away everybody’s job. I don’t see that at all. We have more complex designs and scaling in our designs. We need to get the same work done even faster by using AI as a tool to get us there.”

Speeding design to characterization to first silicon
In the face of ever-shrinking process windows and the lowest allowable defectivity rates, chipmakers continually are improving the design-to-test processes to ensure maximum efficiency during device bring-up and into high volume manufacturing. “Analytics in test operations is not a new thing. This industry has a history of analyzing test data and making product decisions for more than 30 years,” said Advantest’s Butler. “What is different now is that we’re moving to increasingly smaller geometries, advanced packaging technologies and chiplet-based designs. And that’s driving us to change the nature of the type of analytics that we do, both in terms of the software and the hardware infrastructure. But from a production test viewpoint, we’re still kind of in the early days of our journey with AI and test.”

Nonetheless, early adopters are building out the infrastructure needed for in-line compute and AI/ML modeling to support real-time inferencing in test cells. And because no one company has all the expertise needed in-house, partnerships and libraries of applications are being developed with tool-to-tool compatibility in mind.

“Protocol libraries provide out-of-the-box solutions for communicating common protocols. This reduces the development and debug effort for device communication,” said Teradyne’s Fanning. “We have seen situations where a test engineer has been tasked with talking to a new protocol interface, and saved significant time using this feature.”

In fact, data compatibility is a consistent theme, from design all the way through to the latest developments in ATE hardware and software. “Using the same test sequences between characterization and production has become key as the device complexity has increased exponentially,” explained Teradyne’s Fanning. “Partnerships with EDA tool and IP vendors is also key. We have worked extensively with industry leaders to ensure that the libraries and test files they output are formats our system can utilize directly. These tools also have device knowledge that our toolset does not. This is why the remote connect feature is key, because our partners can provide context-specific tools that are powerful during production debug. Being able to use these tools real-time without having to reproduce a setup or use case in a different environment has been a game changer.”

Serial scan test
But if it seems as if all the configuration changes are happening on the test side, it’s important to take stock of substantial changes in the approach to multi-core design for test.

Tradeoffs during the iterative process of design for test (DFT) have become so substantial in the case of multi-core products that a new approach has become necessary.

“If we look at the way a design is typically put together today, you have multiple cores that are going to be produced at different times,” said Siemens’ Press. “You need to have an idea of how many I/O pins you need to get your scan channels, the deep serial memory from the tester that’s going to be feeding through your I/O pins to this core. So I have a bunch of variables I need to trade off. I have the number of pins going to the core, the pattern size, and the complexity of the core. Then I’ll try to figure out what’s the best combination of cores to test together in what is called hierarchical DFT. But as these designs get more complex, with upwards of 2,500 cores, that’s a lot of tradeoffs to figure out.”

Press noted that applying AI with the same architecture can provide 20% to 30% higher efficiency, but an improved methodology based on packetized scan test (see figure 1) actually makes more sense.


Fig. 1: Advantages to the serial scan network (SSN) approach. Source: Siemens

“Instead of having tester channels feeding into the scan channels that go to each core, you have a packetized bus and packets of data that feed through all the cores. Then you instruct the cores when their packet information is going to be available. By doing this, you don’t have as many variables you need to trade off,” he said. At the core level, each core can be optimized for any number of scan channels and patterns, and the I/O pin count is no longer a variable in the calculation. “Then, when you put it into the final chip, the packets deliver the amount of data you need for each core, and that can work with any size serial bus, in what is called a serial scan network (SSN).”
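A toy model makes the tradeoff visible. Under deliberately crude assumptions (each core simply needs a fixed number of scan bits, dedicated channels are split evenly and sit idle once their core finishes, and the packetized bus streams data back-to-back at full width), the bus avoids stranding bandwidth on small cores. This is an illustration of the concept only, not Siemens’ SSN implementation.

```python
# Toy comparison: dedicated scan channels vs. a packetized scan bus.
cores = [  # (name, total scan bits) -- illustrative values only
    ("cpu", 8_000_000), ("gpu", 12_000_000),
    ("noc", 1_000_000), ("io", 500_000),
]
PINS = 8  # tester I/O pins available for scan

# Dedicated channels: split pins evenly across cores tested in parallel.
# Total time is set by the slowest core, and pins tied to small cores
# sit idle after those cores finish.
per_core = max(1, PINS // len(cores))
dedicated_cycles = max(bits // per_core for _, bits in cores)

# Packetized bus: all pins form one bus, and packets stream each core's
# data back-to-back, so total time ~ total bits / bus width.
packetized_cycles = sum(bits for _, bits in cores) // PINS

print(f"dedicated:  {dedicated_cycles:,} cycles")
print(f"packetized: {packetized_cycles:,} cycles")
```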

Some of the results reported by Siemens EDA customers (see figure 2) highlight both supervised and unsupervised machine learning implementation for improvements in diagnosis resolution and failure analysis. DFT productivity was boosted by 5 to 10X using the serial scan network methodology.


Fig. 2: Realized benefits using machine learning and the serial scan network approach. Source: Siemens

What slows down AI implementation in HVM?
In the transition from design to testing of a device, the application of machine learning algorithms can enable a number of advantages, from better pairing of chiplet performance for use in an advanced package to test time reduction. For example, only a subset of high-performing devices may require burn-in.

“You can identify scratches on wafers, and then bin out the dies surrounding those scratches automatically within wafer sort,” said Michael Schuldenfrei, fellow at NI/Emerson Test & Measurement. “So AI and ML all sounds like a really great idea, and there are many applications where it makes sense to use AI. The big question is, why isn’t it really happening frequently and at-scale? The answer to that goes into the complexity of building and deploying these solutions.”

Schuldenfrei summarized four key steps in ML’s lifecycle, each with its own challenges. In the first phase, the training, engineering teams use data to understand a particular issue and then build a model that can be used to predict an outcome associated with that issue. Once the model is validated and the team wants to deploy it in the production environment, it needs to be integrated with the existing equipment, such as a tester or manufacturing execution system (MES). Models also mature and evolve over time, requiring frequent validation of the data going into the model and checking to see that the model is functioning as expected. Models also must adapt, requiring redeployment, learning, acting, validating and adapting, in a continuous circle.

“That eats up a lot of time for the data scientists who are charged with deploying all these new AI-based solutions in their organizations. Time is also wasted in the beginning when they are trying to access the right data, organizing it, connecting it all together, making sense of it, and extracting features from it that actually make sense,” said Schuldenfrei.

Further difficulties are introduced in a distributed semiconductor manufacturing environment in which many different test houses are situated in various locations around the globe. “By the time you finish implementing the ML solution, your model is stale and your product is probably no longer bleeding edge so it has lost its actionability, when the model needs to make a decision that actually impacts either the binning or the processing of that particular device,” said Schuldenfrei. “So actually deploying ML-based solutions in a production environment with high-volume semiconductor test is very far from trivial.”

He cited a 2014 Google article that stated how the ML code development part of the process is both the smallest and easiest part of the whole exercise, [1] whereas the various aspects of building infrastructure, data collection, feature extraction, data verification, and managing model deployments are the most challenging parts.

Changes from design through test ripple through the ecosystem. “People who work in EDA put lots of effort into design rule checking (DRC), meaning we’re checking that the work we’ve done and the design structure are safe to move forward because we didn’t mess anything up in the process,” said Siemens’ Press. “That’s really important with AI — what we call verifiability. If we have some type of AI running and giving us a result, we have to make sure that result is safe. This really affects the people doing the design, the DFT group and the people in test engineering that have to take these patterns and apply them.”

There are a multitude of ML-based applications for improving test operations. Advantest’s Butler highlighted some of the apps customers are pursuing most often, including search time reduction, shift left testing, test time reduction, and chiplet pairing (see figure 3).

“For minimum voltage, maximum frequency, or trim tests, you tend to set a lower limit and an upper limit for your search, and then you’re going to search across there in order to be able to find your minimum voltage for this particular device,” he said. “Those limits are set based on process split, and they may be fairly wide. But if you have analytics that you can bring to bear, then the AI- or ML-type techniques can basically tell you where this die lies on the process spectrum. Perhaps it was fed forward from an earlier insertion, and perhaps you combine it with what you’re doing at the current insertion. That kind of inference can help you narrow the search limits and speed up that test. A lot of people are very interested in this application, and some folks are doing it in production to reduce search time for test time-intensive tests.”
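A minimal sketch of that idea: a binary search for the minimum passing voltage, run once with wide process-split limits and once with a window narrowed by an upstream prediction. The pass/fail model and the assumed ±30 mV predicted window are stand-ins, not Advantest’s implementation.

```python
def find_vmin(passes, lo, hi, resolution=0.005):
    """Binary-search the lowest voltage (V) at which passes(v) is True."""
    measurements = 0
    while hi - lo > resolution:
        mid = (lo + hi) / 2.0
        measurements += 1
        if passes(mid):
            hi = mid   # device passed; try lower
        else:
            lo = mid   # device failed; raise the floor
    return hi, measurements

# Stand-in device: passes at or above its (unknown) true Vmin.
true_vmin = 0.618
passes = lambda v: v >= true_vmin

wide = find_vmin(passes, 0.45, 0.95)     # limits set by process split
narrow = find_vmin(passes, 0.60, 0.66)   # assumed ML-predicted window
print(f"wide window: {wide[1]} measurements, narrowed: {narrow[1]}")
```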


Fig. 3: Opportunities for real-time and/or post-test improvements to pair or bin devices, improve yield, throughput, reliability or cost using the ACS platform. Source: Advantest

“The idea behind shift left is perhaps I have a very expensive test insertion downstream or a high package cost,” Butler said. “If my yield is not where I want it to be, then I can use analytics at earlier insertions to be able to try to predict which devices are likely to fail at the later insertion by doing analysis at an earlier insertion, and then downgrade or scrap those die in order to optimize downstream test insertions, raising the yield and lowering overall cost. Test time reduction is very simply the addition or removal of test content, skipping tests to reduce cost. Or you might want to add test content for yield improvement,” said Butler.

“If I have a multi-tiered device, and it’s not going to pass bin 1 criteria – but maybe it’s bin 2 if I add some additional content — then people may be looking at analytics to try to make those decisions. Finally, two things go together in my mind, this idea of chiplet designs and smart pairing. So the classic example is a processor die with a stack of high bandwidth memory on top of it. Perhaps I’m interested in high performance in some applications and low power in others. I want to be able to match the content and classify die as they’re coming through the test operation, and then downstream do pick-and-place and put them together in such a way that I maximize the yield for multiple streams of data. Similar kinds of things apply for achieving a low power footprint and carbon footprint.”
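A minimal sketch of the smart-pairing step, assuming each logic die and each HBM stack already carries a performance score from earlier test insertions; rank-for-rank pairing after sorting is a simple stand-in for whatever matching policy a real flow would use.

```python
# Pair logic dies with HBM stacks so fast logic gets fast memory, rather
# than mixing populations at random. Scores are illustrative test results.
logic_dies = [("L0", 0.92), ("L1", 0.71), ("L2", 0.88), ("L3", 0.65)]
hbm_stacks = [("H0", 0.69), ("H1", 0.90), ("H2", 0.74), ("H3", 0.86)]

logic_sorted = sorted(logic_dies, key=lambda d: d[1], reverse=True)
hbm_sorted = sorted(hbm_stacks, key=lambda d: d[1], reverse=True)

# A package is only as fast as its slower component, so matching ranks
# wastes less performance than random assembly.
for (ld, ls), (hd, hs) in zip(logic_sorted, hbm_sorted):
    print(f"{ld} + {hd} -> package bin score {min(ls, hs):.2f}")
```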

Generative AI
The question that inevitably comes up when discussing the role of AI in semiconductors is whether or not large language models like ChatGPT can prove useful to engineers working in fabs. Early work shows some promise.

“For example, you can ask the system to build an outlier detection model for you that looks for parts that are five sigma away from the center line, saying ‘Please create the script for me,’ and the system will create the script. These are the kinds of automated, generative AI-based solutions that we’re already playing with,” said Schuldenfrei. “But from everything I’ve seen so far, there is still quite a lot of work to be done to get these systems to provide outputs with high enough quality. At the moment, the amount of human interaction that is needed afterward to fix problems with the algorithms or models that generative AI is producing is still quite significant.”
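The request in that example reduces to a few lines of code, which is why it is a natural target for generation. A minimal hand-written version, with an assumed column name and a fabricated demonstration lot:

```python
import numpy as np
import pandas as pd

def flag_outliers(df, column="measurement", k=5.0):
    """Flag parts whose value lies more than k sigma from the center line."""
    center = df[column].mean()        # center line
    sigma = df[column].std(ddof=1)    # sample standard deviation
    out = df.copy()
    out["outlier"] = (out[column] - center).abs() > k * sigma
    return out

# Illustrative lot: 10,000 normal parametric results plus one gross outlier.
rng = np.random.default_rng(0)
lot = pd.DataFrame({"measurement": rng.normal(1.0, 0.02, size=10_000)})
lot.loc[42, "measurement"] = 1.5
print(flag_outliers(lot)["outlier"].sum(), "part(s) flagged")
```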

A lingering question is how to access the test programs needed to train the new test programs when everyone is protecting important test IP? “Most people value their test IP and don’t necessarily want to set up guardrails around the training and utilization processes,” Butler said. “So finding a way to accelerate the overall process of developing test programs while protecting IP is the challenge. It’s clear this kind of technology is going to be brought to bear, just like we already see in the software development process.”

Failure analysis
Failure analysis is typically a costly and time-consuming endeavor for fabs because it requires a trip back in time to gather wafer processing, assembly, and packaging data specific to a particular failed device, known as a returned material authorization (RMA). Physical failure analysis is performed in an FA lab, using a variety of tools to trace the root cause of the failure.

While scan diagnostic data has been used for decades, a newer approach involves pairing a digital twin with scan diagnostics data to find the root cause of failures.

“Within test, we have a digital twin that does root cause deconvolution based on scan failure diagnosis. So instead of having to look at the physical device and spend time trying to figure out the root cause, since we have scan, we have millions and millions of virtual sample points,” said Siemens’ Press. “We can reverse-engineer what we did to create the patterns and figure out where the mis-compare happened within the scan cells deep within the design. Using YieldInsight and unsupervised machine learning with training on a bunch of data, we can very quickly pinpoint the fail locations. This allows us to run thousands, or tens of thousands fail diagnoses in a short period of time, giving us the opportunity to identify the systematic yield limiters.”

Yet another approach that is gaining steam is using on-die monitors to access specific performance information in lieu of physical FA. “What is needed is deep data from inside the package to monitor performance and reliability continuously, which is what we provide,” said Alex Burlak, vice president of test and analytics at proteanTecs. “For example, if the suspected failure is from the chiplet interconnect, we can help the analysis using deep data coming from on-chip agents instead of taking the device out of context and into the lab (where you may or may not be able to reproduce the problem). Even more, the ability to send back data and not the device can in many cases pinpoint the problem, saving the expensive RMA and failure analysis procedure.”

Conclusion
The enthusiasm around AI and machine learning is being met by robust infrastructure changes in the ATE community to accommodate the need for real-time inferencing of test data and test optimization for higher yield, higher throughput, and chiplet classifications for multi-chiplet packages. For multi-core designs, packetized test, commercialized as an SSN methodology, provides a more flexible approach to optimizing each core for the number of scan chains, patterns and bus width needs of each core in a device.

The number of testing applications that can benefit from AI continues to rise, including test time reduction, Vmin/Fmax search reduction, shift left, smart pairing of chiplets, and overall power reduction. New developments like identical source files for all setups across design, characterization, and test help speed the critical debug and development stage for new products.

Reference

  1. https://proceedings.neurips.cc/paper_files/paper/2015/file/86df7dcfd896fcaf2674f757a2463eba-Paper.pdf



Research Bits: Aug. 5

August 5, 2024 at 09:01

Measuring temperature with neutrons

Researchers from Osaka University, National Institutes for Quantum Science and Technology, Hokkaido University, Japan Atomic Energy Agency, and Tokamak Energy developed a way to rapidly measure the temperature of electronic components inside a device using neutrons.

The technique, called ‘neutron resonance absorption’ (NRA), examines neutrons being absorbed by atomic nuclei at certain energy levels to determine the properties of the material. After being generated using high-intensity laser beams, the neutrons were then decelerated to a very low energy level before being passed through the sample, in this case plates of tantalum and silver. The temporal signal of the NRA was altered in a predictable manner when the sample material’s temperature was changed.

“This technology makes it possible to instantaneously and accurately measure temperature,” said Zechen Lan of Osaka University, in a statement. “As our method is non-destructive, it can be used to monitor devices like batteries and semiconductor devices.”

The technique can acquire temperature data in a window of 100 nanoseconds, and the measurement device itself is about a tenth of the size of similar equipment.

“Using lasers to generate and accelerate ions and neutrons is nothing new, but the techniques we’ve developed in this study represent an exciting advance,” added Akifumi Yogo of Osaka University, in a statement. “We expect that the high temporal resolution will allow electronics to be examined in greater detail, help us to understand normal operating conditions, and pinpoint abnormalities.” [1]

Mapping heat transfer

Researchers from the University of Rochester applied optical super-resolution fluorescence microscopy techniques used in biological imaging to map heat transfer in electronic devices using luminescent nanoparticles.

By applying highly doped upconverting nanoparticles to the surface of a device, the researchers were able to achieve super-high resolution thermometry at the nanoscale level from up to 10 millimeters away.

Rochester researchers demonstrated their super-high resolution thermometry techniques on an electrical heater structure that the team designed to produce sharp temperature gradients. (Credit: University of Rochester / J. Adam Fenster)

“The building blocks of our modern electronics are transistors with nanoscale features, so to understand which parts are overheating, the first step is to get a detailed temperature map,” said Andrea Pickel, an assistant professor from the University of Rochester’s Department of Mechanical Engineering, in a release. “But you need something with nanoscale resolution to do that.”

The researchers demonstrated the technique using an electrical heater structure designed to produce sharp temperature gradients. To improve the process, the team hopes to lower the laser power used and refine the methods for applying layers of nanoparticles to the devices. [2]

ML for predicting thermal properties

Researchers from MIT, Argonne National Laboratory, Harvard University, the University of South Carolina, Emory University, the University of California at Santa Barbara, and Oak Ridge National Laboratory propose a new machine learning framework that provides much faster prediction of phonon dispersion relations, an important measurement for determining the thermal properties of a material and how heat moves through semiconductors and insulators.

Heat-carrying phonons have an extremely wide frequency range, and the particles interact and travel at different speeds. “Phonons are the culprit for the thermal loss, yet obtaining their properties is notoriously challenging, either computationally or experimentally,” said Mingda Li, associate professor of nuclear science and engineering at MIT, in a release.

The researchers started with a graph neural network (GNN) that converts a material’s atomic structure into a crystal graph comprising multiple nodes, which represent atoms, connected by edges, which represent the interatomic bonding between atoms.

To make it suitable for predicting phonon dispersion relations, they created a virtual node graph neural network (VGNN) by adding a series of flexible virtual nodes to the fixed crystal structure to represent phonons. This enabled the VGNN to skip many complex calculations when estimating phonon dispersion relations, making it a more efficient method than a standard GNN.
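A minimal sketch of the underlying data structure: a crystal graph of atom nodes and bonding edges, extended with virtual nodes attached to every atom. The connect-to-everything rule here is a simplification for illustration, not the published VGNN architecture.

```python
# Library-free toy construction of a crystal graph with virtual nodes.
def crystal_graph(atoms, bonds):
    return {"nodes": [{"id": a, "kind": "atom"} for a in atoms],
            "edges": [tuple(b) for b in bonds]}

def add_virtual_nodes(graph, n_virtual):
    atom_ids = [n["id"] for n in graph["nodes"] if n["kind"] == "atom"]
    for k in range(n_virtual):
        vid = f"v{k}"
        graph["nodes"].append({"id": vid, "kind": "virtual"})
        # Connect each virtual node to every atom so it can aggregate
        # structure-wide information during message passing.
        graph["edges"].extend((vid, a) for a in atom_ids)
    return graph

g = crystal_graph(atoms=["Ga", "N"], bonds=[("Ga", "N")])
g = add_virtual_nodes(g, n_virtual=6)  # 3N = 6 phonon branches for 2 atoms
print(len(g["nodes"]), "nodes,", len(g["edges"]), "edges")
```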

Li noted that a VGNN could be used to calculate phonon dispersion relations for a few thousand materials in a few seconds with a personal computer. The technique could also be used to predict challenging optical and magnetic properties. [3]

References

[1] Lan, Z., Arikawa, Y., Mirfayzi, S.R. et al. Single-shot laser-driven neutron resonance spectroscopy for temperature profiling. Nat Commun 15, 5365 (2024). https://doi.org/10.1038/s41467-024-49142-y

[2] Ziyang Ye et al., Optical super-resolution nanothermometry via stimulated emission depletion imaging of upconverting nanoparticles. Sci. Adv. 10, eado6268 (2024) https://doi.org/10.1126/sciadv.ado6268

[3] Okabe, R., Chotrattanapituk, A., Boonkird, A. et al. Virtual node graph neural network for full phonon prediction. Nat Comput Sci 4, 522–531 (2024). https://doi.org/10.1038/s43588-024-00661-0



Ensure Reliability In Automotive ICs By Reducing Thermal Effects

By: Lee Wang
August 5, 2024 at 09:01

In the relentless pursuit of performance and miniaturization, the semiconductor industry has increasingly turned to 3D integrated circuits (3D-ICs) as a cutting-edge solution. Stacking dies in a 3D assembly offers numerous benefits, including enhanced performance, reduced power consumption, and more efficient use of space. However, this advanced technology also introduces significant thermal dissipation challenges that can impact the electrical behavior, reliability, performance, and lifespan of the chips (figure 1). For automotive applications, where safety and reliability are paramount, managing these thermal effects is of utmost importance.

Fig. 1: Illustration of a 3D-IC with heat dissipation.

3D-ICs have become particularly attractive for safety-critical devices like automotive sensors. Advanced driver-assistance systems (ADAS) and autonomous vehicles (AVs) rely on these compact, high-performance chips to process vast amounts of sensor data in real time. Effective thermal management in these devices is a top priority to ensure that they function reliably under various operating conditions.

The thermal challenges of 3D-ICs in automotive applications

The stacked configuration of 3D-ICs inherently leads to complex thermal dynamics. In traditional 2D designs, heat dissipation occurs across a single plane, making it relatively straightforward to manage. However, in 3D-ICs, multiple active layers generate heat, creating significant thermal gradients and hotspots. These thermal issues can adversely affect device performance and reliability, which is particularly critical in automotive applications where components must operate reliably under extreme temperatures and harsh conditions.

These thermal effects in automotive 3D-ICs can impact the electrical behavior of the circuits, causing timing errors, increased leakage currents, and potential device failure. Therefore, accurate and comprehensive thermal analysis throughout the design flow is essential to ensure the reliability and performance of automotive ICs.

The importance of early and continuous thermal analysis

Traditionally, thermal analysis has been performed at the package and system levels, often as a separate process from IC design. However, with the advent of 3D-ICs, this approach is no longer sufficient.

To address the thermal challenges of 3D-ICs for automotive applications, it is crucial to incorporate die-level thermal analysis early in the design process and continue it throughout the design flow (figure 2). Early-stage thermal analysis can help identify potential hotspots and thermal bottlenecks before they become critical issues, enabling designers to make informed decisions about chiplet placement, power distribution, and cooling strategies. These early decisions reduce the risks of thermal-induced failures, improving the reliability of 3D automotive ICs.

Fig. 2: Die-level detailed thermal analysis using accurate package and boundary conditions should be fully integrated into the ASIC design flow to allow for fast thermal exploration.

Early package design, floorplanning and thermal feasibility analysis

During the initial package design and floorplanning stage, designers can use high-level power estimates and simplified models to perform thermal feasibility studies. These early analyses help identify configurations that are likely to cause thermal problems, allowing designers to rule out problematic designs before investing significant time and resources in detailed implementation.

Fig. 3: Thermal analysis as part of the package design, floorplanning and implementation flows.

For example, thermal analysis can reveal issues such as overlapping heat sources in stacked dies or insufficient cooling paths. By identifying these problems early, designers can explore alternative floorplans and adjust power distribution to mitigate thermal risks. This proactive approach reduces the likelihood of encountering critical thermal issues late in the design process, thereby shortening the overall design cycle.

Iterative thermal analysis throughout design refinement

As the design progresses and more detailed information becomes available, thermal analysis should be performed iteratively to refine the thermal model and verify that the design remains within acceptable thermal limits. At each stage of design refinement, additional details such as power maps, layout geometries and their material properties can be incorporated into the thermal model to improve accuracy.

This iterative approach lets designers continuously monitor and address thermal issues, ensuring that the design evolves in a thermally aware manner. By integrating thermal analysis with other design verification tasks, such as timing and power analysis, designers can achieve a holistic view of the design’s performance and reliability.

A robust thermal analysis tool should support various stages of the design process, providing value from initial concept to final signoff:

  1. Early design planning: At the conceptual stage, designers can apply high-level power estimates to explore the thermal impact of different design options. This includes decisions related to 3D partitioning, die assembly, block and TSV floorplan, interface layer design, and package selection. By identifying potential thermal issues early, designers can make informed decisions that avoid costly redesigns later.
  2. Detailed design and implementation: As designs become more detailed, thermal analysis should be used to verify that the design stays within its thermal budget. This involves analyzing the maturing package and die layout representations to account for their impact on thermally sensitive electrical circuits. Fine-grained power maps are crucial at this stage to capture hotspot effects accurately.
  3. Design signoff: Before finalizing the design, it is essential to perform comprehensive thermal verification. This ensures that the design meets all thermal constraints and reliability requirements. Automated constraints checking and detailed reporting can expedite this process, providing designers with clear insights into any remaining thermal issues.
  4. Connection to package-system analysis: Models from IC-level thermal analysis can be used in thermal analysis of the package and system. The integration lets designers build a streamlined flow through the entire development process of a 3D electronic product.

Tools and techniques for accurate thermal analysis

To effectively manage thermal challenges in automotive ICs, designers need advanced tools and techniques that can provide accurate and fast thermal analysis throughout the design flow. Modern thermal analysis tools are equipped with capabilities to handle the complexity of 3D-IC designs, from early feasibility studies to final signoff.

High-fidelity thermal models

Accurate thermal analysis requires high-fidelity thermal models that capture the intricate details of the 3D-IC assembly. These models should account for non-uniform material properties, fine-grained power distributions, and the thermal impact of through-silicon vias (TSVs) and other 3D features. Advanced tools can generate detailed thermal models based on the actual design geometries, providing a realistic representation of heat flow and temperature distribution.
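To give a feel for what even the simplest die-level thermal model involves, here is a toy sketch that relaxes the steady-state heat equation on a 2D grid with a non-uniform power map. All numbers are illustrative, and a real tool such as the one described below solves full 3D stacks with accurate material and boundary data, which this sketch deliberately does not.

```python
import numpy as np

# Jacobi relaxation of k * laplacian(T) = -q on a square die slice.
N = 64                    # grid points per side
k = 150.0                 # thermal conductivity, W/(m*K), silicon-like
dx = 1e-4                 # 100 um cell pitch
q = np.zeros((N, N))      # volumetric power map, W/m^3
q[24:40, 24:40] = 5e9     # a hotspot block near the die center

T = np.zeros((N, N))      # temperature rise above the boundary, K
for _ in range(20_000):   # iterate toward steady state
    T[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1] +
                            T[1:-1, :-2] + T[1:-1, 2:] +
                            q[1:-1, 1:-1] * dx * dx / k)
    # Grid edges stay at zero rise -- an idealized isothermal boundary.

print(f"peak temperature rise: {T.max():.1f} K")
```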

For instance, a tool like Calibre 3DThermal embeds an optimized custom 3D solver from Simcenter Flotherm to perform precise thermal analysis down to the nanometer scale. By leveraging detailed layer information and accurate boundary conditions, these tools can produce reliable thermal models that reflect the true thermal behavior of the design.

Automation and results viewing

Automation is a key feature of modern thermal analysis tools, enabling designers to perform complex analyses without requiring deep expertise in thermal engineering. An effective thermal analysis tool must offer advanced automation to facilitate use by non-experts. Key automation features include:

  1. Optimized gridding: Automatically applying finer grids in critical areas of the model to ensure high resolution where needed, while using coarser grids elsewhere for efficiency.
  2. Time step automation: In transient analysis, smaller time steps can be automatically generated during power transitions to capture key impacts accurately.
  3. Equivalent thermal properties: Automatically reducing model complexity while maintaining accuracy by applying different bin sizes for critical (hotspot) vs non-critical regions when generating equivalent thermal properties.
  4. Power map compression: Using adaptive bin sizes to compress very large power maps to improve tool performance (a sketch of this idea appears after figure 4).
  5. Automated reporting: Generating summary reports that highlight key results for easy review and decision-making (figure 4).

Fig. 4: Ways to view thermal analysis results.

Automated thermal analysis tools can also integrate seamlessly with other design verification and implementation tools, providing a unified environment for managing thermal, electrical, and mechanical constraints. This integration ensures that thermal considerations are consistently addressed throughout the design flow, from initial feasibility analysis to final tape-out and even connecting with package-level analysis tools.
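Of the automation features listed above, power map compression is the easiest to picture: keep coarse bins where power is flat and refine only where it varies. A minimal sketch, assuming a dense square power map as input, with the tile sizes and spread threshold as illustrative parameters:

```python
import numpy as np

def compress_power_map(power, coarse=8, fine=2, rel_spread=0.1):
    """Rebin a dense power map, refining only non-uniform (hotspot) tiles.

    A coarse tile collapses to one averaged value when its min..max spread
    is within rel_spread of its mean; otherwise it is re-emitted as fine
    bins. Returns a list of (row, col, size, value) tiles.
    """
    tiles, n = [], power.shape[0]
    for r in range(0, n, coarse):
        for c in range(0, n, coarse):
            tile = power[r:r + coarse, c:c + coarse]
            if tile.max() - tile.min() <= rel_spread * max(tile.mean(), 1e-12):
                tiles.append((r, c, coarse, float(tile.mean())))
            else:  # refine: emit fine-grained bins inside the busy tile
                for rr in range(r, r + coarse, fine):
                    for cc in range(c, c + coarse, fine):
                        sub = power[rr:rr + fine, cc:cc + fine]
                        tiles.append((rr, cc, fine, float(sub.mean())))
    return tiles

dense = np.full((64, 64), 0.01)
dense[30:34, 30:34] = 2.0  # hotspot
print(len(compress_power_map(dense)), "tiles instead of", 64 * 64, "cells")
```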

Real-world application

The practical benefits of integrated thermal analysis solutions are evident in real-world applications. For instance, a leading research organization, CEA, utilized an advanced thermal analysis tool from Siemens EDA to study the thermal performance of their 3DNoC demonstrator. The high-fidelity thermal model they developed showed a worst-case difference of just 3.75% and an average difference within 2% between simulation and measured data, demonstrating the accuracy and reliability of the tool (figure 5).

Fig. 5: Correlation of simulation versus measured results.

The path forward for automotive 3D-IC thermal management

As the automotive industry continues to embrace advanced technologies, the importance of accurate thermal analysis throughout the design flow of 3D-ICs cannot be overstated. By incorporating thermal analysis early in the design process and iteratively refining thermal models, designers can mitigate thermal risks, reduce design time, and enhance chip reliability.

Advanced thermal analysis tools that integrate seamlessly with the broader design environment are essential for achieving these goals. These tools enable designers to perform high-fidelity thermal analysis, automate complex tasks, and ensure that thermal considerations are addressed consistently from package design, through implementation to signoff.

By embracing these practices, designers can unlock the full potential of 3D-IC technology, delivering innovative, high-performance devices that meet the demands of today’s increasingly complex automotive applications.

For more information about die-level 3D-IC thermal analysis, read Conquer 3DIC thermal impacts with Calibre 3DThermal.


Flexible-Wafer Platform And CMOS-Compatible 300mm Wafer-Scale Integrated-Photonics Fabrication

A new technical paper titled “Mechanically-flexible wafer-scale integrated-photonics fabrication platform” was published by researchers at MIT and New York Center for Research, Economic Advancement, Technology, Engineering, and Science (NY CREATES).

Abstract
“The field of integrated photonics has advanced rapidly due to wafer-scale fabrication, with integrated-photonics platforms and fabrication processes being demonstrated at both infrared and visible wavelengths. However, these demonstrations have primarily focused on fabrication processes on silicon substrates that result in rigid photonic wafers and chips, which limit the potential application spaces. There are many application areas that would benefit from mechanically-flexible integrated-photonics wafers, such as wearable healthcare monitors and pliable displays. Although there have been demonstrations of mechanically-flexible photonics fabrication, they have been limited to fabrication processes on the individual device or chip scale, which limits scalability. In this paper, we propose, develop, and experimentally characterize the first 300-mm wafer-scale platform and fabrication process that results in mechanically-flexible photonic wafers and chips. First, we develop and describe the 300-mm wafer-scale CMOS-compatible flexible platform and fabrication process. Next, we experimentally demonstrate key optical functionality at visible wavelengths, including chip coupling, waveguide routing, and passive devices. Then, we perform a bend-durability study to characterize the mechanical flexibility of the photonic chips, demonstrating bending a single chip 2000 times down to a bend diameter of 0.5 inch with no degradation in the optical performance. Finally, we experimentally characterize polarization-rotation effects induced by bending the flexible photonic chips. This work will enable the field of integrated photonics to advance into new application areas that require flexible photonic chips.”

Find the technical paper here. Published May 2024. Find MIT’s news release here.

Notaros, M., Dyer, T., Garcia Coleto, A. et al. Mechanically-flexible wafer-scale integrated-photonics fabrication platform. Sci Rep 14, 10623 (2024). https://doi.org/10.1038/s41598-024-61055-w.



Chip Industry Week in Review

August 2, 2024 at 09:01

Okinawa Institute of Science and Technology proposed a new EUV litho technology using only four reflective mirrors and a new method of illumination optics that it claims will use 1/10 the power and cost half as much as existing EUV technology from ASML.

Applied Materials may not receive expected U.S. funding to build a $4 billion research facility in Sunnyvale, CA, due to internal government disagreements over how to fund chip R&D, according to Bloomberg.

SEMI published a position paper this week cautioning the European Union against imposing additional export controls, encouraging regulators to leave companies “as free as possible in their investment decisions to avoid losing their agility and relevance across global markets.” SEMI’s recommendations on outbound investments are in response to the European Economic Security Strategy and emphasize the need for a transparent and predictable regulatory framework.

The U.S. may restrict China’s access to HBM chips and the equipment needed to make them, reports Bloomberg. Today those chips are manufactured by two South Korea-based companies, Samsung and SK hynix, but U.S.-based Micron expects to begin shipping 12-high stacks of HBM3E in 2025 and is currently working on HBM4.

Synopsys executive chair and founder Dr. Aart de Geus was named the winner of the Semiconductor Industry Association’s Robert N. Noyce Award. De Geus was selected for his contributions to EDA technology over a career spanning more than four decades.

The top three foundries plan to implement high-NA EUV lithography as early as 2025 for the 18 angstrom generation, but whether single-exposure high-NA (0.55) lithography replaces double patterning with standard EUV (NA = 0.33) depends on whether it delivers better results at a reasonable cost per wafer.

Quick links to more news:

Global
In-Depth
Market Reports and Earnings
Education and Training
Security
Product News
Research
Events and Further Reading


Global

Belgium-based Imec released part 2 of its chiplets series, addressing testing strategies and standardization efforts, as well as guidelines and research “towards efficient ESD protection strategies for advanced 3D systems-on-chip.”

Also in Belgium, BelGan, maker of GaN chips, filed for bankruptcy according to the Brussels Times.

TSMC‘s Dresden, Germany, plant will break ground this month.

The UK will dole out more than £100 million (~US $128 million) in funding to develop five new quantum research hubs in Glasgow, Edinburgh, Birmingham, Oxford, and London.

MassPhoton is opening Hong Kong‘s first ultra-high vacuum GaN epitaxial wafer pilot line and will establish a GaN research center.

Infineon completed the sale of its manufacturing sites in the Philippines and South Korea to ASE.

Israel-based RAAAM Memory Technologies received a €5.25 million grant from the European Innovation Council (EIC) to support the development and commercialization of its memory solutions. The funding will help RAAAM advance its research in high-performance, energy-efficient memory technologies and accelerate their integration into a range of applications and markets.


In-Depth

Semiconductor Engineering published its Automotive, Security and Pervasive Computing newsletter this week; find it, along with the other current newsletters, linked at the end of this article.


Market Reports and Earnings

The semiconductor equipment industry is on a positive trajectory in 2024, with moderate revenue growth observed in Q2 after a subdued Q1, according to a new report from Yole Group. Wafer Fab Equipment revenue is projected to grow by 1.3% year-on-year, despite a 12% drop in Q1. Test equipment lead times are normalizing, improving order conditions. Key areas driving growth include memory and logic capital expenditures and high-bandwidth memory demand.

Worldwide silicon wafer shipments increased by 7% in Q2 2024, according to SEMI‘s latest report. This growth is attributed to robust demand from multiple semiconductor sectors, driven by advancements in AI, 5G, and automotive technologies.

The RF GaN market is projected to grow to US $2 billion by 2029, a 10% CAGR, according to Yole Group.

Counterpoint released its Q2 smartphone top 10 report.

Renesas completed its acquisition of EDA firm Altium, best known for its Altium Designer platform and freeware CircuitMaker package.

It’s earnings season and here are recently released financials in the chip industry:

AMD, Advantest, Amkor, Ansys, Arteris, Arm, ASE, ASM, ASML, Cadence, IBM, Intel, Lam Research, Lattice, Nordson, NXP, Onsemi, Qualcomm, Rambus, Samsung, SK Hynix, STMicro, Teradyne, TI, Tower, TSMC, UMC, and Western Digital.

Industry stock price impacts are here.


Education and Training

Rochester Institute of Technology is leading a new pilot program to prepare community college students in areas such as cleanroom operations, new materials, simulation, and testing processes, with the intent of eventual transfer into RIT’s microelectronic engineering program.

Purdue University inked a deal with three research institutions — University of Piraeus, Technical University of Crete, and King’s College London — to develop joint research programs for semiconductors, AI and other critical technology fields.

The European Chips Skills Academy formed the Educational Leaders Board to help bridge the talent gap in Europe’s microelectronics sector. The Board includes representatives from universities, vocational training providers, educators and research institutions who collaborate on strategic initiatives to strengthen university networks and build academic expertise through ECSA training programs.


Security

The Cybersecurity and Infrastructure Security Agency (CISA) is encouraging Apple users to review and apply this week’s security updates.

Microsoft Azure experienced a nearly 10-hour DDoS attack this week, leading to global service disruption for many customers. “While the initial trigger event was a Distributed Denial-of-Service (DDoS) attack, which activated our DDoS protection mechanisms, initial investigations suggest that an error in the implementation of our defenses amplified the impact of the attack rather than mitigating it,” Microsoft stated in a release.

NIST published:

  • “Recommendations For Increasing U.S. Participation and Leadership in Standards Development,” a report outlining cybersecurity recommendations and mitigation strategies.
  • Final guidance documents and software to help improve the “safety, security and trustworthiness of AI systems.”
  • Cloud Computing Forensic Reference Architecture guide.

Delta Air Lines plans to seek damages for roughly $500 million in lost revenue caused by security company CrowdStrike‘s software update debacle. Shareholders are also angry.

Recent security research:

  • Physically Secure Logic Locking With Nanomagnet Logic (UT Dallas)
  • WBP: Training-time Backdoor Attacks through HW-based Weight Bit Poisoning (UCF)
  • S-Tune: SOT-MTJ Manufacturing Parameters Tuning for Secure Next Generation of Computing (U. of Arizona, UCF)
  • Diffie Hellman Picture Show: Key Exchange Stories from Commercial VoWiFi Deployments (CISPA, SBA Research, U. of Vienna)

Product News

Lam Research introduced a new version of its cryogenic etch technology designed to enhance the manufacturing of 3D NAND for AI applications. This technology allows for the precise etching of high aspect ratio features, crucial for creating 1,000-layer 3D NAND.


Fig. 1: 3D NAND etch. Source: Lam Research

Alphawave Semi launched its Universal Chiplet Interconnect Express Die-to-Die IP. The subsystem offers 8 Tbps/mm bandwidth density and supports operation at 24 Gbps for D2D connectivity.
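As a rough sense of scale (a simple division, assuming the density figure aggregates the same 24 Gbps lanes, which the announcement does not spell out): 8 Tbps/mm ÷ 24 Gbps per lane ≈ 333 lanes per millimeter of die edge.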

Infineon introduced a new MCU series for industrial and consumer motor controls, as well as power conversion system applications. The company also unveiled its new CoolGaN drive product family of single switches and half-bridges with integrated drivers.

Rambus released its DDR5 Client Clock Driver for next-gen, high-performance desktops and notebooks. The new chip joins the company’s memory interface portfolio of Gen1 to Gen4 RCDs, power management ICs, Serial Presence Detect hubs, and temperature sensors for leading-edge servers.

SK hynix introduced its new GDDR7 graphics DRAM. The product has an operating speed of 32 Gbps, can process 1.5TB of data per second, and improves power efficiency by 50% compared to the previous generation.
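As a rough cross-check of those figures (assuming a 384-bit memory interface, typical of high-end graphics cards but not stated in the announcement): 32 Gbps per pin × 384 pins ÷ 8 bits per byte ≈ 1.5 TB/s, matching the quoted throughput.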

Intel launched its new Lunar Lake Ultra processors. The long-awaited chips will be included in more than 80 laptop designs and deliver more than 40 NPU TOPS and over 60 GPU TOPS, for more than 100 platform TOPS in total.

Brewer Science achieved recertification as a Certified B Corporation, reaffirming its commitment to sustainable and ethical business practices.

Panasonic adopted Siemens’ Teamcenter X cloud product lifecycle management solution, citing Teamcenter X’s Mendix low-code platform and the improved operational efficiency and flexibility it offers.

Keysight validated the industry’s first 5G NR FR1 1024-QAM demodulation test cases against the 3GPP TS 38.521-4 test specification; the capability supports eMBB in the 5G NR radio access technology.


Research

In a 47-page deep-dive report, the Center for Security and Emerging Technology traced the scientific breakthroughs from 1980 to the present that brought EUV lithography to commercialization, including lessons learned for the next emerging technologies.

Researchers at the Paul Scherrer Institute developed a high-performance X-ray tomography technique using burst ptychography, achieving a resolution of 4nm. This method allows for non-destructive imaging of integrated circuits, providing detailed views of nanostructures in materials like silicon and metals.

MIT signed a four-year agreement with the Novo Nordisk Foundation Quantum Computing Programme at University of Copenhagen, focused on accelerating quantum computing hardware research.

MIT’s Research Laboratory of Electronics (RLE) developed a mechanically flexible wafer-scale integrated photonics fabrication platform. This enables the creation of flexible photonic circuits that maintain high performance while being bendable and stretchable. It offers significant potential for integrating photonic circuits into various flexible substrate applications in wearable technology, medical devices, and flexible electronics.

The Naval Research Laboratory identified a new class of semiconductor nanocrystals with bright ground-state excitons, an important advance in optoelectronics.

Researchers from the National University of Singapore developed a novel method, known as tension-driven CHARM3D, to fabricate 3D self-healing circuits, enabling the 3D printing of free-standing metallic structures without support materials or external pressure.

Find more research in our Technical Papers library.


Events and Further Reading

Find upcoming chip industry events here, including:

Event Date Location
Atomic Layer Deposition (ALD 2024) Aug 4 – 7 Helsinki
Flash Memory Summit Aug 6 – 8 Santa Clara, CA
USENIX Security Symposium Aug 14 – 16 Philadelphia, PA
SPIE Optics + Photonics 2024 Aug 18 – 22 San Diego, CA
Cadence Cloud Tech Day Aug 20 San Jose, CA
Hot Chips 2024 Aug 25 – 27 Stanford University / Hybrid
Optica Online Industry Meeting: PIC Manufacturing, Packaging and Testing (imec) Aug 27 Online
SEMICON Taiwan Sep 4 – 6 Taipei
AI HW and Edge AI Summit Sep 9 – 12 San Jose, CA
DVCON Taiwan Sep 10 – 11 Hsinchu
GSA Executive Forum Sep 26 Menlo Park, CA
SPIE Photomask Technology + EUVL Sep 29 – Oct 3 Monterey, CA
Strategic Materials Conference: SMC 2024 Sep 30 – Oct 2 San Jose, CA
Find All Upcoming Events Here

Upcoming webinars are here, including topics such as quantum safe cryptography, analytics for high-volume manufacturing, and mastering EMC simulations for electronic design.

Find Semiconductor Engineering’s latest newsletters here:

Automotive, Security and Pervasive Computing
Systems and Design
Low Power-High Performance
Test, Measurement and Analytics
Manufacturing, Packaging and Materials

 

The post Chip Industry Week in Review appeared first on Semiconductor Engineering.


Heterogeneity Of 3DICs As A Security Vulnerability

A new technical paper titled “Harnessing Heterogeneity for Targeted Attacks on 3-D ICs” was published by Drexel University.

Abstract
“As 3-D integrated circuits (ICs) increasingly pervade the microelectronics industry, the integration of heterogeneous components presents a unique challenge from a security perspective. To this end, an attack on a victim die of a multi-tiered heterogeneous 3-D IC is proposed and evaluated. By utilizing on-chip inductive circuits and transistors with low voltage threshold (LVT), a die based on CMOS technology is proposed that includes a sensor to monitor the electromagnetic (EM) emissions from the normal function of a victim die, without requiring physical probing. The adversarial circuit is self-powered through the use of thermocouples that supply the generated current to circuits that sense EM emissions. Therefore, the integration of disparate technologies in a single 3-D circuit allows for a stealthy, wireless, and non-invasive side-channel attack. A thin-film thermo-electric generator (TEG) is developed that produces a 115 mV voltage source, which is amplified 5 × through a voltage booster to provide power to the adversarial circuit. An on-chip inductor is also developed as a component of a sensing array, which detects changes to the magnetic field induced by the computational activity of the victim die. In addition, the challenges associated with detecting and mitigating such attacks are discussed, highlighting the limitations of existing security mechanisms in addressing the multifaceted nature of vulnerabilities due to the heterogeneity of 3-D ICs.”
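A quick sanity check on the power budget described above: boosting the 115 mV thermoelectric output 5× yields roughly 575 mV, plausibly enough to operate the low-voltage-threshold sensing circuits near threshold, though the abstract does not specify the exact operating point.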

Find the technical paper here. Published June 2024.

Alec Aversa and Ioannis Savidis. 2024. Harnessing Heterogeneity for Targeted Attacks on 3-D ICs. In Proceedings of the Great Lakes Symposium on VLSI 2024 (GLSVLSI ’24). Association for Computing Machinery, New York, NY, USA, 246–251. https://doi.org/10.1145/3649476.3660385.

The post Heterogeneity Of 3DICs As A Security Vulnerability appeared first on Semiconductor Engineering.


A Hybrid ECO Detailed Placement Flow for Mitigating Dynamic IR Drop (UC San Diego)

A new technical paper titled “A Hybrid ECO Detailed Placement Flow for Improved Reduction of Dynamic IR Drop” was published by researchers at UC San Diego.

Abstract:

“With advanced semiconductor technology progressing well into sub-7nm scale, voltage drop has become an increasingly challenging issue. As a result, there has been extensive research focused on predicting and mitigating dynamic IR drops, leading to the development of IR drop engineering change order (ECO) flows – often integrated with modern commercial EDA tools. However, these tools encounter QoR limitations while mitigating IR drop. To address this, we propose a hybrid ECO detailed placement approach that is integrated with existing commercial EDA flows, to mitigate excessive peak current demands within power and ground rails. Our proposed hybrid approach effectively optimizes peak current levels within a specified “clip”– complementing and enhancing commercial EDA dynamic IR-driven ECO detailed placements. In particular, we: (i) order instances in a netlist in decreasing order of worst voltage drop; (ii) extract a clip around each instance; and (iii) solve an integer linear programming (ILP) problem to optimize instance placements. Our approach optimizes dynamic voltage drops (DVD) across ten designs by up to 15.3% compared to original conventional flows, with similar timing quality and 55.1% less runtime.”
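For a feel of the three-step loop the abstract describes, here is a minimal Python sketch. Every name, the one-dimensional row model, and the cost function are hypothetical stand-ins, and an exhaustive search substitutes for the paper’s ILP formulation so the example stays self-contained.

from itertools import permutations

def worst_drop(inst):
    return inst["dvd"]  # dynamic voltage drop from IR analysis (assumed input)

def extract_clip(inst, placement, radius=1):
    """Collect candidate legal sites around an instance (toy 1-D row model)."""
    x = placement[inst["name"]]
    return list(range(x - radius, x + radius + 2))

def peak_current(assignment, insts):
    """Toy cost: worst per-site current when instances share rail segments."""
    by_site = {}
    for inst, site in zip(insts, assignment):
        by_site[site] = by_site.get(site, 0.0) + inst["i_peak"]
    return max(by_site.values())

def eco_place(instances, placement):
    # (i) visit instances in decreasing order of worst voltage drop
    for inst in sorted(instances, key=worst_drop, reverse=True):
        # (ii) extract a clip: nearby instances and candidate sites
        clip_insts = [i for i in instances
                      if abs(placement[i["name"]] - placement[inst["name"]]) <= 2]
        sites = extract_clip(inst, placement)
        # (iii) ILP stand-in: pick the assignment that minimizes peak current
        best = min(permutations(sites, len(clip_insts)),
                   key=lambda p: peak_current(p, clip_insts))
        for i, site in zip(clip_insts, best):
            placement[i["name"]] = site
    return placement

insts = [{"name": "u1", "dvd": 0.12, "i_peak": 1.0},
         {"name": "u2", "dvd": 0.09, "i_peak": 0.8},
         {"name": "u3", "dvd": 0.15, "i_peak": 1.2}]
print(eco_place(insts, {"u1": 0, "u2": 1, "u3": 2}))

In the real flow each clip would be a 2-D window of legal sites with timing and legality constraints encoded in the ILP; the sketch only preserves the order-extract-solve structure.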

Find the technical paper here. Published June 2024.

Andrew B. Kahng, Bodhisatta Pramanik, and Mingyu Woo. 2024. A Hybrid ECO Detailed Placement Flow for Improved Reduction of Dynamic IR Drop. In Proceedings of the Great Lakes Symposium on VLSI 2024 (GLSVLSI ’24). Association for Computing Machinery, New York, NY, USA, 390–396. https://doi.org/10.1145/3649476.3658727.

The post A Hybrid ECO Detailed Placement Flow for Mitigating Dynamic IR Drop (UC San Diego) appeared first on Semiconductor Engineering.

Classification and Localization of Semiconductor Defect Classes in Aggressive Pitches (imec, Screen)

A new technical paper titled “An Evaluation of Continual Learning for Advanced Node Semiconductor Defect Inspection” was published by Imec and SCREEN SPE Germany.

Abstract

“Deep learning-based semiconductor defect inspection has gained traction in recent years, offering a powerful and versatile approach that provides high accuracy, adaptability, and efficiency in detecting and classifying nano-scale defects. However, semiconductor manufacturing processes are continually evolving, leading to the emergence of new types of defects over time. This presents a significant challenge for conventional supervised defect detectors, as they may suffer from catastrophic forgetting when trained on new defect datasets, potentially compromising performance on previously learned tasks. An alternative approach involves the constant storage of previously trained datasets alongside pre-trained model versions, which can be utilized for (re-)training from scratch or fine-tuning whenever encountering a new defect dataset. However, adhering to such a storage template is impractical in terms of size, particularly when considering High-Volume Manufacturing (HVM). Additionally, semiconductor defect datasets, especially those encompassing stochastic defects, are often limited and expensive to obtain, thus lacking sufficient representation of the entire universal set of defectivity. This work introduces a task-agnostic, meta-learning approach aimed at addressing this challenge, which enables the incremental addition of new defect classes and scales to create a more robust and generalized model for semiconductor defect inspection. We have benchmarked our approach using real resist-wafer SEM (Scanning Electron Microscopy) datasets for two process steps, ADI and AEI, demonstrating its superior performance compared to conventional supervised training methods.”
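As a generic, self-contained illustration of class-incremental defect classification (not the paper’s meta-learning method), the sketch below keeps one prototype embedding per defect class, so a newly observed defect type can be added without retraining and without storing earlier datasets; the random arrays stand in for embeddings from a frozen SEM-image encoder.

import numpy as np

class IncrementalDefectClassifier:
    def __init__(self):
        self.prototypes = {}  # defect class name -> mean embedding

    def add_class(self, name, embeddings):
        """Register a new defect class from an (n_samples, dim) array."""
        self.prototypes[name] = embeddings.mean(axis=0)

    def predict(self, embedding):
        """Nearest-class-mean over all classes seen so far."""
        return min(self.prototypes,
                   key=lambda c: np.linalg.norm(embedding - self.prototypes[c]))

rng = np.random.default_rng(0)
clf = IncrementalDefectClassifier()
clf.add_class("bridge", rng.normal(0.0, 1.0, (20, 64)))
clf.add_class("line_break", rng.normal(3.0, 1.0, (20, 64)))
# A new defect class appears after deployment; add it without retraining:
clf.add_class("micro_bridge", rng.normal(-3.0, 1.0, (20, 64)))
print(clf.predict(rng.normal(3.0, 1.0, 64)))  # expected: "line_break"

A nearest-class-mean scheme like this sidesteps catastrophic forgetting by construction, at the cost of depending entirely on the quality of the frozen embedding; the paper’s meta-learning approach targets the same incremental setting with a learned, more robust model.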

Find the technical paper here. Published July 2024 (preprint).

Prasad, Amit, Bappaditya Dey, Victor Blanco, and Sandip Halder. “An Evaluation of Continual Learning for Advanced Node Semiconductor Defect Inspection.” arXiv preprint arXiv:2407.12724 (2024).

The post Classification and Localization of Semiconductor Defect Classes in Aggressive Pitches (imec, Screen) appeared first on Semiconductor Engineering.


Survey of Energy Efficient PIM Processors

A new technical paper titled “Survey of Deep Learning Accelerators for Edge and Emerging Computing” was published by researchers at University of Dayton and the Air Force Research Laboratory.

Abstract

“The unprecedented progress in artificial intelligence (AI), particularly in deep learning algorithms with ubiquitous internet connected smart devices, has created a high demand for AI computing on the edge devices. This review studied commercially available edge processors, and the processors that are still in industrial research stages. We categorized state-of-the-art edge processors based on the underlying architecture, such as dataflow, neuromorphic, and processing in-memory (PIM) architecture. The processors are analyzed based on their performance, chip area, energy efficiency, and application domains. The supported programming frameworks, model compression, data precision, and the CMOS fabrication process technology are discussed. Currently, most commercial edge processors utilize dataflow architectures. However, emerging non-von Neumann computing architectures have attracted the attention of the industry in recent years. Neuromorphic processors are highly efficient for performing computation with fewer synaptic operations, and several neuromorphic processors offer online training for secured and personalized AI applications. This review found that the PIM processors show significant energy efficiency and consume less power compared to dataflow and neuromorphic processors. A future direction of the industry could be to implement state-of-the-art deep learning algorithms in emerging non-von Neumann computing paradigms for low-power computing on edge devices.”

Find the technical paper here. Published July 2024.

Alam, Shahanur, Chris Yakopcic, Qing Wu, Mark Barnell, Simon Khan, and Tarek M. Taha. 2024. “Survey of Deep Learning Accelerators for Edge and Emerging Computing” Electronics 13, no. 15: 2988. https://doi.org/10.3390/electronics13152988.

The post Survey of Energy Efficient PIM Processors appeared first on Semiconductor Engineering.
