Driving Cost Lower and Power Higher With GaN

August 6, 2024 at 09:02

Gallium nitride is starting to make broader inroads in the lower-end of the high-voltage, wide-bandgap power FET market, where silicon carbide has been the technology of choice. This shift is driven by lower costs and processes that are more compatible with bulk silicon.

Efficiency, power density (size), and cost are the three major concerns in power electronics, and GaN can meet all three criteria. However, to satisfy all of those criteria consistently, the semiconductor ecosystem needs to develop best practices for test, inspection, and metrology, determining what works best for which applications and under varying conditions.

Power ICs play an essential role in stepping voltage levels up and down between one power source and another. GaN is used extensively today in smartphone and laptop adapters, but market opportunities for this technology are beginning to widen. GaN likely will play a significant role in both data centers and automotive applications [1]. Data centers are expanding rapidly due to the focus on AI and a build-out at the edge. And automotive is keen to use GaN power ICs for inverter modules, because they will be cheaper than SiC, as well as for onboard battery chargers (OBCs) and various DC-DC conversions from the battery to different loads in the vehicle.


Fig. 1: Current and future fields of interest for GaN and SiC power devices. Source: A. Meixner/Semiconductor Engineering

But to enter new markets, GaN device manufacturers need to ramp up new processes and their associated products more quickly. Because GaN for power transistors is a developing process technology, measurement data is critical for qualifying both the manufacturing process and the reliability of the new semiconductor technology and the resulting product.

Much of GaN’s success will depend on metrology and inspection solutions that offer high throughput, as well as non-destructive testing methods such as optical and X-ray. Electron microscopy is useful for drilling down into key device parameters and defect mechanisms. And electrical tests provide complementary data that assists with product/process validation, reliability and qualification, and system-level validation, and they also are used for production screening.

Silicon carbide (SiC) remains the material of choice for very high-voltage applications. It offers better performance and higher efficiency than silicon. But SiC is expensive. It requires different equipment than silicon, it’s difficult to grow SiC ingots, and today there is limited wafer capacity.

In contrast, GaN offers some of the same desirable characteristics as SiC and can operate at even higher switching speeds. GaN wafer production is cheaper because the devices can be built on a silicon substrate using typical silicon processing equipment, apart from the GaN epitaxial deposition tool. That enables a fab/foundry with a silicon CMOS process to ramp a GaN process with an engineering team experienced in GaN.

The cost comparison isn’t entirely apples-to-apples, of course. The highest-voltage GaN on the market today uses silicon on sapphire (SoS) or other engineered substrates, which are more expensive. But below those voltages, GaN typically has a cost advantage, and that has sparked renewed interest in this technology.

“GaN-based products increase the performance envelopes relative to the incumbent and mature silicon-based technologies,” said Vineet Pancholi, senior director of test technology at Amkor. “Switching speeds with GaN enable the application in ways never possible with silicon. But as the GaN production volumes ramp, these products have extreme economic pressures. The production test list includes static attributes. However, the transient and dynamic attributes are the primary benefit of GaN in the end application.”

Others agree. “The world needs cheaper material, and GaN is easy to build,” said Frank Heidemann, vice president and technology leader of SET at NI/Emerson Test & Measurement. “Gallium nitride has had huge success in the lower voltage ranges — anything up to 500V. This is where the GaN process is very well under control. The problem now is that building in higher voltages is a challenge. In the near future there will be products at even higher voltage levels.”

Those higher-voltage applications require new process recipes, new power IC designs, and subsequently product/process validation and qualification.

GaN HEMT properties
Improving the processes needed to create GaN high-electron-mobility transistors (HEMTs) requires a deep understanding of the material properties and the manufacturing consequences of layering these materials.

The underlying physics and structure of wide-bandgap devices differ significantly from those of silicon high-voltage transistors. Silicon transistors rely on doping of p and n materials. When voltage is applied at the gate, it creates a channel for current to flow from source to drain. In contrast, wide-bandgap transistors are built by layering thin films of different materials, which differ in their bandgap energy. [2] Applying a voltage to the gate enables an electron exchange between the two materials, driving those electrons along the channel between source and drain.


Fig. 2. Cross-sectional animation of e-mode GaN HEMT device. Source: Zeiss Microscopy

“GaN devices rely on two-dimensional electron gas (2DEG) created at the GaN and AlGaN interface to conduct current at high speed,” said Jiangtao Hu, senior director of product marketing at Onto Innovation. “To enable high electron mobility, the epitaxy process creating complex multi-layer crystalline films must be carefully monitored and controlled, ensuring critical film properties such as thickness, composition, and interface roughness are within a tight spec. The ongoing trend of expanding wafer sizes further requires the measurement to be on-product and non-destructive for uniformity control.”


Fig. 3: SEM cross-section of enhancement-mode GaN HEMT built on silicon which requires a superlattice. Source: Zeiss Microscopy

Furthermore, each layer’s electrical properties need to be understood. “It is of utmost importance to determine, as early as possible in the manufacturing process, the electrical characteristics of the structures, the sheet resistance of the 2DEG, the carrier concentration, and the mobility of carriers in the channel, preferably at the wafer level in a non-destructive assessment,” said Christophe Maleville, CTO and senior executive vice president of innovation at Soitec.
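
The three quantities Maleville lists are tied together by a standard relation, so a non-destructive measurement of any two constrains the third. A worked example with illustrative values (the numbers below are typical textbook figures for a GaN HEMT, not from the article):

```latex
% Sheet resistance of the 2DEG channel
R_{sh} = \frac{1}{q\, n_s\, \mu}
% With illustrative values n_s \approx 1\times10^{13}\ \mathrm{cm^{-2}}
% and \mu \approx 2{,}000\ \mathrm{cm^2/(V\,s)}:
R_{sh} \approx \frac{1}{(1.6\times10^{-19})(10^{13})(2{,}000)} \approx 310\ \Omega/\mathrm{sq}
```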

Developing process recipes for GaN HEMT devices at higher operating ranges requires measurements taken during wafer manufacturing and device testing, both for qualification of a process/product and for production manufacturing. Inspection, metrology, and electrical tests focus on the process anomalies and defects that impact device performance.

“Crystal defects such as dislocations and stacking faults, which can form during deposition and subsequently be grown over and buried, can create long-term reliability concerns even if the devices pass initial testing,” said David Taraci, business development manager of electronics strategic accounts at ZEISS Research Microscopy Solutions. “Gate oxides can pinch off during deposition, creating voids which may not manifest as an issue immediately.”

The quality of the buffer layer is critical because it affects the breakdown voltage. “The maximum breakdown voltage of the devices will be ultimately limited by the breakdown of the buffer layer grown in between the Si substrate and the GaN channel,” said Soitec’s Maleville. “An electrical assessment (IV at high voltage) requires destructive measurements as well as device isolation. This is performed on a sample basis only.”

One way to raise the voltage limit of a GaN device is to add a gate driver that keeps it reliable at higher voltages. But to further expand GaN technology’s performance envelope to higher-voltage operation, engineers need to understand the reliability properties of these new GaN devices.

“We are supporting GaN lifetime validation, which is the prediction of a mission characteristic of lifetime for gallium nitride power devices,” said Emerson’s Heidemann. “Engineers build physics-based failure models of these devices. Next, they investigate the acceleration factors. How can we really make tests and verification properly so that we can assess lifetime health?”

The qualification procedures necessitate life-stress testing, which duplicates predicted mission-profile usage, as well as electrical testing after each life-stress period. That allows engineers to detect shifts in transistor characteristics, as well as outright failures. For example, life-stress periods could start with 4,000 hours and increase in 1,000-hour increments to 12,000 hours, during which time the device is turned on and off with specific ‘on’ durations.
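
A minimal sketch of how such a qualification schedule could be laid out programmatically. The readout hours come from the example above; the list of electrical tests is hypothetical, loosely drawn from the tests discussed later in this article:

```python
# Sketch of a life-stress qualification schedule: readouts at 4,000 h,
# then every 1,000 h up to 12,000 h, with electrical tests repeated
# after each stress period. Test names are illustrative only.

STRESS_READOUTS_H = list(range(4_000, 12_001, 1_000))

ELECTRICAL_TESTS = [
    "zero_gate_voltage_drain_leakage",
    "threshold_voltage",
    "dynamic_rds_on",
]

def plan_qualification(readouts=STRESS_READOUTS_H, tests=ELECTRICAL_TESTS):
    """Yield (readout_hour, test_name) pairs so parameter drift and
    outright failures can be tracked over cumulative stress time."""
    for hours in readouts:
        for test in tests:
            yield hours, test

for hours, test in plan_qualification():
    print(f"after {hours:>6,} h of on/off cycling: run {test}")
```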

“Reliability predictions are based upon application mission profiles,” said Stephanie Watts Butler, independent consultant and vice president of industry and standards in the IEEE Power Electronics Society. “In some cases, GaN is going into a new application, or being used differently than silicon, and the mission profile needs to be elucidated. This is one area that the industry is focused upon together.”

As an example of this effort, Butler pointed to JEDEC JEP186 spec [3], which provides guidelines for specifying the breakdown voltage for GaN HEMT devices. “JEDEC and IEC both are issuing guideline documents for methods for test and characterization of wide-bandgap devices, as well as reliability and qualification procedures, and datasheet parameters to enable wide bandgap devices, including GaN, to ramp faster with higher quality in the marketplace,” she said.

Electrical tests remain essential for screening both time-zero and reliability-associated defects (e.g., infant mortality and reduced lifetime). This holds true for screening wafers, singulated dies, and packaged devices. Test content includes tests specific to GaN HEMT power device performance specifications, as well as tests directed more at defect detection.

Due to inherent device differences, the GaN test list varies in some significant ways from those for Si and SiC power ICs. Assessing GaN health for qualification and manufacturing purposes requires both static and dynamic tests (i.e., DC and AC). A partial list includes zero gate voltage drain leakage current, rise time, fall time, dynamic RDSon, and dielectric integrity tests.

“These are very time-intensive measurement techniques for GaN devices,” said Tom Tran, product manager for power discrete test products at Teradyne. “On top of the static measurement techniques is the concern about trapped charge — both for functionality and efficiency — revealed through dynamic RDSon testing.”
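
Dynamic RDSon is typically extracted from the on-state waveform shortly after a switching event and then compared against the static (DC) value; a rise indicates trapped charge. A minimal sketch under that assumption (window timing and data are illustrative):

```python
# Sketch: extract dynamic RDS(on) from a sampled turn-on waveform.
import numpy as np

def dynamic_rds_on(t_s, v_ds, i_d, t_start=1e-6, t_stop=5e-6):
    """Average V_DS / I_D over a window shortly after turn-on.
    Compare with the static value to reveal charge trapping."""
    t_s, v_ds, i_d = map(np.asarray, (t_s, v_ds, i_d))
    win = (t_s >= t_start) & (t_s <= t_stop)
    return float(np.mean(v_ds[win] / i_d[win]))

# Synthetic example: a 10 mOhm device carrying 10 A.
t = np.linspace(0.0, 10e-6, 1_000)
i = np.full_like(t, 10.0)
v = 0.010 * i
print(f"dynamic RDS(on) = {dynamic_rds_on(t, v, i) * 1e3:.1f} mOhm")
```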

Transient tests are necessary for qualification and production purposes due to the high electron mobility, which is what gives GaN HEMTs their high switching speed. “From a test standpoint, static test failures indicate basic processing failures, while transient switching failures indicate marginal or process excursions,” said Amkor’s Pancholi. “Both tests continue to be important to our customers until process maturity is achieved. With the extended range of voltage, current, and switching operations, mainstream test equipment suppliers have been adding complementary instrumentation capabilities.”

And ATE suppliers look to reduce test time, which reduces cost. “Both static and dynamic test requirements drive very high test times,” said Teradyne’s Tran. “But the GaN of today is very different than the GaN from a decade ago. We’re able to accelerate this testing just due to the core nature of our ATE architecture. We think there is the possibility of further reducing the cost of test for our customers.”

Tools for process control and quality management
GaN HEMT devices’ reliance on thin-film processes highlights the need to understand the material properties and the nature of the interfaces between each layer. That requires tools for process control, yield management, and failure analysis.

“GaN device performance is highly reflective of the film characteristics used in its manufacture,” said Mike McIntyre, director of software product management at Onto Innovation. “The smallest process variations when it comes to film thickness, film stress, line width or even crystalline make-up, can have a dramatic impact on how the device performs, or even if it is usable in its target market. This lack of tolerance to any variation places a greater burden on engineers to understand the factors that correlate to device performance and its profitability.”

Non-destructive inspection methods vary in throughput time and in the level of detail they provide for engineering decisions. While optical methods are fast and provide full wafer coverage, they cannot accurately classify chemical or structural defects for engineers/technicians to review. In contrast, destructive methods provide the information needed to truly understand the nature of the defects. For example, conductive atomic force microscopy (AFM) probing remains slow, but it can identify the electrical nature of a defect. And to truly comprehend crystallographic defects and the chemical nature of impurities, engineers can turn to electron microscopy-based methods.

One way to assess thin films is with X-rays. “High-resolution X-ray measurements are useful to provide production control of the wafer crystalline quality and defects in the buffer,” said Soitec’s Maleville. “Minor changes in composition of the buffer, barrier, or capping layer, as well as their layer thickness, can result in significant deviations in device performance. Thickness of the layers, in particular the top cap, barrier, and spacer layers, is typically measured by XRD. However, the throughput of XRD systems is low. Alternatively, ellipsometry offers a reasonably good-throughput measurement with more data points for both development and production mode scenarios.”

Optical techniques have been the standard for thin-film assessment in the semiconductor industry, and inspection equipment providers continue to improve their accuracy, precision, and throughput. Better metrology tools help device makers with process control and yield management.

“Recently, we successfully developed a non-destructive, on-product measurement capability for GaN epi process monitoring,” said Onto’s Hu. “It takes advantage of our advanced optical film experience and our modeling software to simultaneously measure multi-layer epi film thickness, composition, and interface roughness on product wafers.”


Fig. 4: Metrology measurements on GaN for roughness and for Al concentration. Source: Onto Innovation

Assessing the electrical characteristics — 2DEG sheet resistance, channel carrier mobility, and carrier concentration — is required for controlling the manufacturing process. A non-destructive assessment would be an improvement over the destructive techniques used today (e.g., SEM). The solutions used for other power ICs do not work for GaN HEMTs, and as of today no commercial solution exists.

Inspection looks for yield-impacting defects, as well as defects that affect wafer acceptance in the case of companies that provide engineered substrates.

“Defect inspection for incoming silicon wafers looks for particles, scratches, and other anomalies that might seed imperfections in the subsequent buffer and crystal growth,” said Antonio Mani, business development manager at Thermo Fisher Scientific. “After the growth of the buffer and termination layers, followed by the growth of the doped GaN layers, another set of inspections is carried out. In this case, it is more focused on the detection of cracks and other macroscopic defects (micropipes, carrots), and on looking for micro-pits, which are associated with threading dislocations that have survived the buffer layer and are surfacing at the top GaN surface.”

Mani noted that follow-up inspection methods for Si and GaN devices are similar. The difference lies in the importance of connecting observations back to post-epi results.

More accurate defect libraries would shorten inspection time. “The lack of standardization of surface defect analysis impedes progress,” said Soitec’s Maleville. “Different tools are available on the market, while defect libraries are still being developed, essentially by individual users. The lack of a globally accepted method and a standard defect library for surface defect analysis is slowing down the GaN surface qualification process.”

Whether it involves a manufacturing test failure or a field return, the necessary steps for determining root cause on a problematic packaged part begin with fault isolation. “Given the direct nature of the bandgap of GaN and its operating window in terms of voltage/frequency/power density, classical methods of fault isolation (e.g., optical emission spectroscopy) are forced to focus on different wavelengths and different ranges of excitation of the typical electrical defects,” said Thermo Fisher’s Mani. “Hot carrier pairs are just one example, which highlights the radical difference between GaN and silicon devices.”

In addition to fault isolation, failure analysis (FA) poses challenges in creating a device cross-section with focused ion beam (FIB) milling methods.

“Several challenges exist in FA for GaN power ICs,” said Zeiss’ Taraci. “In any completed device, in particular, there are numerous materials and layers present for stress mitigation/relaxation and thermal management, depending on whether we are talking enhancement- or depletion-mode devices. Length-scale can be difficult to manage as you are working with these samples, because they have structures of varying dimension present in close proximity. Many of the structures are quite unique to power GaN and can pose challenges themselves in cross-section and analyses. Beam-milling approaches have to be tailored to prevent heavy re-deposition and masking, and are dependent on material, lattice orientation, current, geometry, etc.”

Conclusion
To succeed in bringing new GaN power ICs to new application spaces, engineers and their equipment suppliers need faster process development and lower overall costs. For HEMT devices, that means understanding the deposited layers and their material properties. This requires a host of metrology, inspection, test, and failure analysis steps to comprehend the issues, and to feed data from experiments and qualifications back into process and design improvements.

References

[1] M. Buffolo et al., “Review and Outlook on GaN and SiC Power Devices: Industrial State-of-the-Art, Applications, and Perspectives,” in IEEE Transactions on Electron Devices, March 2024, open access, https://ieeexplore.ieee.org/document/10388225

[2] High-electron-mobility transistor (HEMT), Wikipedia. https://en.wikipedia.org/wiki/High-electron-mobility_transistor

[3] Guideline to specify a transient off-state withstand voltage robustness indicated in datasheets for lateral GaN power conversion devices, JEP186, version 1.0, December 2021. https://www.jedec.org/standards-documents/docs/jep186

Related Stories

Ramping Up Power Electronics For EVs

SiC Growth For EVs Is Stressing Manufacturing

GaN ICs Wanted For Power, EV Markets

Architecting Chips For High-Performance Computing

Power Semiconductors: 2023



What Works Best For Chiplets

April 18, 2024 at 09:08

The semiconductor industry is preparing for the migration from proprietary chiplet-based systems to a more open chiplet ecosystem, in which chiplets fabricated by different companies of various technologies and device nodes can be integrated in a single package with acceptable yield.

To make this work as expected, the chip industry will have to solve a variety of well-documented technical and business issues, and it will have to rein in some of the grander visions of what’s possible — at least initially. The basic challenge is aligning the domain-specific performance demands of end systems, which contain a growing number of chiplets, with the assembly and packaging capabilities and methodologies of IDMs, foundries, and OSATs. This includes the creation of assembly design kits (ADKs) that are roughly the equivalent of process design kits (PDKs), which today are codified with manufacturing specifications.

A PDK provides the appropriate level of detail needed to develop planar chips, marrying design tools with fab processes to achieve a predictable outcome. But making this work for an ADK with heterogeneous chiplets is many times more complex. Design and assembly teams need to manage thermal, mechanical, and electrical co-dependencies that cause electrical and mechanical stress, resulting in warpage, reduced yield, and reliability issues under real-world workloads. Layered on top of this are the business and legal issues related to packaging different devices from different manufacturers.

“Chiplets are a growing trend, especially in the HPC and networking segments, with potential to scale to other applications,” said Gabriela Pereira, technology and market analyst for semiconductor packaging at Yole Intelligence. “The industry has understood that high-end advanced packaging technologies are needed to connect them — but that’s much more complex than it seems. Connecting chiplets requires the design of high-bandwidth interconnections at the package level, which can take different forms — e.g., 2D, 2.5D or 3D — while ensuring that the thermal and power requirements are fulfilled.”

Commercial chiplet-based devices generally are domain-specific, and sometimes developed for a specific workload. So despite a big industry push to create a LEGO-like mix-and-match ecosystem for chiplets — which today includes multiple IP and EDA vendors, foundries, memory suppliers, OSATs, substrate suppliers, etc. — making this work as planned will require time and a massive amount of work.

Fig. 1: System assembly requires tighter coupling between chipmakers and OSATs. Source: ASE

In creating heterogeneous integrated designs, it’s essential to have much tighter collaboration between foundries, IDMs, OSATs, and PCB manufacturers. And because each chiplet-based system will be customized, the number of assembly processes will grow substantially. For example, one OSAT noted that among its ~5,000 customers, there are ~1,000 different assembly processes.

That diversity in products and processes makes it difficult to achieve predictable results by choosing chiplets from a large menu of options.

“We’ve already encountered a lot of limitations including not only the silicon, but also integration and the ecosystem,” said Lihong Cao, senior director at ASE Group, at MEPTEC’s Road to Chiplets forum. She stressed that customers continue to push for a low-cost chiplet assembly process, which is creating constructive tension between developing a sophisticated assembly process and the economic realities of different industry sectors. Computing devices for automotive have a higher cost sensitivity than for data centers, for example, but their chips operate in a harsher environment over a longer lifetime.

What’s needed is a defined set of assembly process recipes — basically, a highly limited menu of choices — that are specific to the end application (HPC, automotive, RF telecommunications) in order to lower the cost of chiplet-based systems. OSATs and foundries already are moving in that direction for high-performance computing. For example, at its 2024 Direct Connect event, Intel shared its six different package processes for chiplets. TSMC and Samsung also offer defined sets of chiplet processes. But the success of these assembly processes requires engineering teams to co-optimize the flows, processes, and materials to best match the system requirements.

Fig. 2: Integrated platform development requires tightly coupled architectural analysis that co-optimizes the system design to architecture to assembly process and packaging material selections. Source: Applied Materials

“Previously, when we designed a system we only had to worry about the system requirements. Once we start segregating into dies and reassembling them, we have to start looking at other things. We have to worry about putting them together while considering signal integrity between dies, reliability, thermals, etc.,” said Itai Leshniak, director of AI systems solutions at Applied Materials, at the MEPTEC forum. “If we take the case of AI-based computer vision, we can break it down layer by layer — on the hardware side, determining which computer vision processors, sensors, and filters are needed at each layer of the architecture. Then we begin to go through how to package all these chiplets, and then which materials to use and how to take advantage of those materials.”

Materials and assembly processes
Conceptually, design engineers will use chiplets to design a system. However, the co-design and integration is far more complicated than assembling a set of LEGO blocks, because the chiplets, interposers, and package substrates come from different design houses and manufacturing facilities. The advanced packaging technologies used to connect chiplets come with an alphabet soup of names — FOWLP, FOPLP, CoWoS, etc., each of which poses additional design and material choices, along with certain process limitations.

Fig. 3: There are a multitude of choices in multi-die packaging from the high-level layout to substrates, materials, bonding methods, and cooling materials. Source: Synopsys

Currently, engineering teams weigh the tradeoffs among the different packaging options to select materials, derive a process recipe, and determine design rules.

Materials are a good starting point. “Materials are very important because they enable new products and packaging technologies,” said Tanja Braun, deputy group manager at the Fraunhofer Institute for Reliability and Microintegration IZM. “As you move into more advanced packaging, the process is getting much more complex because you are putting more things together. In the end, it’s a combination of equipment, materials, and process development.”

Three thermal parameters are critical in package assembly processes — the coefficient of thermal expansion (CTE), the glass transition temperature (Tg), and thermal conductivity. These factors affect how a material behaves from manufacturing through packaging, as well as how it behaves in the field.

“Challenges for our materials include temperature limitations of different die,” said Rama Puligadda, CTO at Brewer Science. “We have to ensure that the temperatures used for bonding materials don’t exceed the thermal limitations of any of the chips that are being integrated into the package. Additionally, there may be some subsequent processes like redistribution layer (RDL) formation or molding. Our materials have to survive those processes. They have to survive the chemicals they come in contact with throughout the packaging process scheme. Mechanical stresses in the package add additional challenges for bonding materials.”

Within a chiplets-on-substrate stack, with or without an interposer, material attributes also affect the thermal-mechanical stresses between neighboring materials. This directly impacts interconnect dimensional control over a large substrate area.

“If you go work the numbers, you will find that the level of tolerance and control required is frightening,” said Dick Otte, CEO of Promex Industries. “You’re talking about controlling dimensions equivalent to the width of a grass blade over the length of a football field, so that’s roughly 1 in 100,000.”
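
As rough arithmetic (blade and field dimensions assumed for illustration), the analogy works out:

```latex
\frac{\text{grass blade width}}{\text{football field length}}
  \approx \frac{1\ \text{mm}}{100\ \text{m}}
  = \frac{10^{-3}\ \text{m}}{10^{2}\ \text{m}}
  = 10^{-5} \quad \text{(1 part in 100,000)}
```

At package scale, the same ratio corresponds to holding roughly 1 µm of dimensional control across a 100 mm substrate.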

The goal is uniform heating of the structure in reflow in order to attain the best process results and to avoid cracking. “When you’re taking it through a 250 degrees centigrade temperature change, then you need to heat up slowly so that the top doesn’t get hot before the bottom does,” said Otte.

Multi-physics to comprehend co-optimization
Multi-physics modeling has become the go-to method for co-optimizing packaging design and assembly process development. That affects both permanent and temporary materials, as well as the placement of processors, memories, and other components.

“You’re always looking at what the customer needs electrically, because that’s going to help define the material set. The material set is broadly applicable to a bunch of speed ranges. As long as you don’t step outside of those electrical specifications, theoretically you should be okay,” said Mike Kelly, vice president of advanced package and technology integration at Amkor Technology.

To save many iterations of empirical development, engineers can use physics-based simulations to understand the impact of a material set’s properties on the assembly process, power/thermals, and mechanical vibrations.

Consider that HPC chiplet products can consume ~1,000 watts at peak performance, so the power and thermal interactions need to be fully understood.

“We’ve struggled, as everybody has, with this blizzard of complexity in the different techniques. Not only do they vary across different vendors, but they’re also varying over time,” said Marc Swinnen, director of product marketing at Ansys. “Our approach has been to identify the essentials that need to be worked on. We work jointly with customers to develop a simulation flow that actually achieves what is needed now.”

Materials are just one piece of the puzzle. “Then there’s the assembly stresses that need to be modeled to know whether you can correctly assemble this device. The third one is mechanical vibration,” Swinnen said. “Can your device withstand those regular vibrations? Modeling these attributes ties directly into our mechanical analysis tools — acoustic, thermal, vibration, etc. In the end, you’re going to have to do physics simulation. We’re trying to make it accessible to people in many different forms. But the bedrock of our tool offerings is that we have the meshing simulation and analysis. It’s a question of getting the data in the right format in a way that’s practical and usable.”

Evolving assembly design kits
For conventional packages, OSATs provide design rules for each packaging technology. These need to consider electrical, mechanical, and thermal design requirements, as well as manufacturing process limitations. In effect, this is a multi-dimensional bounding box. Suppliers iterate with each customer to create a product-specific process recipe.

Rules cover the macro-level attributes. “At a minimum, what you see from design rules is maximum package size, maximum silicon size, and whether silicon can be [mounted] on both sides of the substrate, such that when you follow these constructions the final product will have a lifetime of 1,000 thermal cycles, for example,” said Fraunhofer’s Braun.

In addition, design rules need to describe routing constraints for the interposer and/or redistribution layer, such as RDL line widths and spaces, ball-grid/pillar/pad size and pitches, and the maximum number of interconnections.
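
To make the bounding-box idea concrete, here is a minimal sketch of what a machine-readable slice of such design rules might look like. All field names and limits are hypothetical, invented for illustration rather than taken from any OSAT’s actual kit:

```python
# Hypothetical, simplified slice of packaging design rules plus a checker.
from dataclasses import dataclass

@dataclass
class PackageRules:
    max_package_mm: float = 55.0       # maximum package edge length
    rdl_min_width_um: float = 2.0      # minimum RDL line width
    rdl_min_space_um: float = 2.0      # minimum RDL line spacing
    min_pillar_pitch_um: float = 40.0  # minimum pillar pitch
    max_interconnects: int = 100_000   # maximum interconnection count

def check_design(r: PackageRules, pkg_mm, rdl_w_um, rdl_s_um, pitch_um, n_conn):
    """Return a list of rule violations; an empty list means rule-clean."""
    violations = []
    if pkg_mm > r.max_package_mm:
        violations.append("package size exceeds maximum")
    if rdl_w_um < r.rdl_min_width_um:
        violations.append("RDL line width below minimum")
    if rdl_s_um < r.rdl_min_space_um:
        violations.append("RDL line spacing below minimum")
    if pitch_um < r.min_pillar_pitch_um:
        violations.append("pillar pitch below minimum")
    if n_conn > r.max_interconnects:
        violations.append("too many interconnections")
    return violations

print(check_design(PackageRules(), 50.0, 2.0, 1.5, 45.0, 80_000))
# -> ['RDL line spacing below minimum']
```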

Breaking up a monolithic HPC device into multiple dies shifts some of the semiconductor design/process complexity into the packaging space, which makes things much more complicated. Consider that connecting 10 dies can require on the order of 100,000 traces within the interposer’s or substrate’s redistribution layers.
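
One way to see where a number of that scale comes from (the per-link trace count is an assumption for illustration, not a figure from the article): wide parallel die-to-die links can run to a couple of thousand signal traces each, so cross-connecting 10 dies pairwise gives

```latex
\binom{10}{2} = 45\ \text{die pairs}, \qquad
45 \times 2{,}000\ \tfrac{\text{traces}}{\text{pair}} = 9\times10^{4}\ \text{traces}
```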

To cope with the complexity at the chip level, the IC industry has long relied upon process design kits (PDKs) to capture design rules in an electronic file that can be imported into EDA tools. Their counterparts, assembly design kits (ADKs), are relatively immature.

“We call it Smart Package,” said Amkor’s Kelly. “It’s an ADK that we give to every customer who’s doing their own design. It is a set of macros, and a customization of a database tailored to a customer’s particular design. For chiplets, it is a high-density fan-out package technology. And it’s cognizant of the limitations for metal density and metal spacing, etc. This makes it easier for us to do design rule checks (DRCs).”

But right now, with the level of customization still required, how an ADK is derived and what it entails is in flux. Partnerships between EDA tool vendors, OSATs, and semiconductor device providers are required.

“We come from the IC world where everything is very rigid,” said Kenneth Larsen, director of 3D-IC product management in Synopsys’ EDA Group. “On the OSAT side, and maybe this is because it’s so custom, design rules seem like a data sheet. Then you build and optimize the products over time or in collaboration with the OSAT. It’s not an electronic exchange. In the IC world, this would be totally unheard of. While it is possible to tweak a few things, you have a qualification process. And it seems like that’s not there yet for packaging.”

Materials and associated assembly recipes ultimately drive what’s possible for a chiplet-substrate stack in terms of pillar pitch, RDL line widths and spaces, bonding processes, and chiplet placement tolerances. But within a handful of ADKs, there are many possible interactions to consider.

The current focus is on co-optimizing the system design with the chiplet assembly process, leading to an assembly process development flow (see figure 4). This flow considers the customization needs of an assembly process, and it creates the necessary design rules for package designers.

Fig. 4: Chip-package hybrid flow. Source: ASE

“First you need to define your structure using chiplets. Are you using substrate RDL, 2.5D RDL, or a bridge? After that you need to consider your structure’s materials. What kind of material do you choose to fulfill your electrical performance and mechanical stress requirements?” said Cao. “After that, you do pre-analysis to ensure all the structures and materials you use are workable in terms of electrical performance, warpage, and mechanical stress.”

The design planning flow also includes evaluating the die-to-die interconnects via the co-design sign-off documents.

Conclusion
Before chiplet-based designs can be enabled outside the IDM model, the industry needs to complete the ecosystem that bridges the manufacturing and design complexity, because co-optimizing the system architecture based on materials, process, and integration capabilities is essential. While this would be easier with a set of well-defined products for the chiplet ecosystem to drive toward, that has not happened yet.

Engineering teams across the design and manufacturing stack will need to collaborate on choosing the appropriate materials, architectures, processes, etc., to develop a final chiplet-based product that is designable. As ASE’s Cao noted, “An integrated design and manufacturing ecosystem is important. It is very critical to have collaboration among IDMs, vendors, and materials suppliers. Everyone needs to work together to really enable integration for the real applications.”

Related stories
Fan-Out Packaging Gets Competitive
Manufacturability reaches sufficient level to compete with flip-chip BGA and 2.5D.

Inside Panel-Level Fan-Out Technology
Fraunhofer’s panel experts dig into why this approach is needed and where the challenges are to making it work.

Next Steps For Panel-Level Packaging
Where it’s working, and what challenges remain for even broader adoption.

Mini-Consortia Forming Around Chiplets
Commercial chiplet marketplaces are still on the distant horizon, but companies are getting an early start with more limited partnerships.

What Can Go Wrong In Heterogeneous Integration
Workflows and tools are disconnected, mechanical stress is ill-defined, and complete co-planarity is nearly impossible. But there are solutions on the horizon.

Mechanical Challenges Rise With Heterogeneous Integration
But gaps in tools make it difficult to address warpage, structural issues, and new materials in multi-die/multi-chiplet designs.

Chiplets: 2023 (EBook)
What chiplets are, what they are being used for today, and what they will be used for in the future.



Tackling Variability With AI-based Process Control

February 22, 2024 at 09:01

Jon Herlocker, co-founder and CEO of Tignis, sat down with Semiconductor Engineering to talk about how AI in advanced process control reduces equipment variability and corrects for process drift. What follows are excerpts of that conversation.

SE: How is AI being used in semiconductor manufacturing and what will the impact be?

Herlocker: AI is going to create a completely different factory. The real change is going to happen when AI gets integrated from the design side all the way through the manufacturing side. We are just starting to see the beginnings of this integration right now. One of the biggest challenges in the semiconductor industry is that it can take years from the time an engineer designs a new device to that device reaching high-volume production. Machine learning is going to cut that in half, or even to a quarter. The AI technology that Tignis offers today accelerates that very last step — high-volume manufacturing. Our customers want to know how to tune their tools so that every time they process a wafer the process is in control. Traditionally, device makers get hardware that meets their specifications from the equipment manufacturer, and then the fab team gets its process recipes working. Depending on the size of the fab, they try to physically replicate that process in a 'copy exact' manner, which can take a lot of time and effort. But now device makers can use machine learning (ML) models to autonomously compensate for differences in equipment variation and produce the exact same outcome, with significantly less effort by process engineers and equipment technicians.

SE: How is this typically done?

Herlocker: A classic APC system on the floor today might model three input parameters using linear models. But if you need to model 20 or 30 parameters, these linear models don’t work very well. With AI controllers and non-linear models, customers can ingest all of their rich sensor data that shows what is happening in the chamber, and optimally modulate the recipe settings to ensure that the outcome is on-target. AI tools such as our PAICe Maker solution can control any complex process with a greater degree of precision.
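
To make the linear-versus-non-linear point concrete, here is a minimal sketch on synthetic data: a plain linear regression and a gradient-boosted model are both fit to 30 interacting "sensor" inputs. The data, parameter count, and model choices are assumptions for illustration only; this is not a description of how PAICe Maker is implemented.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
# Synthetic example: 30 sensor/recipe parameters with interacting,
# non-linear effects on a process outcome (e.g., film thickness).
X = rng.normal(size=(2000, 30))
y = X[:, 0] * X[:, 1] + np.sin(X[:, 2]) + 0.1 * rng.normal(size=2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

linear = LinearRegression().fit(X_tr, y_tr)
nonlin = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

# The linear model cannot capture the interaction terms; the
# non-linear model recovers far more of the outcome variance.
print("linear R^2:    ", r2_score(y_te, linear.predict(X_te)))
print("non-linear R^2:", r2_score(y_te, nonlin.predict(X_te)))
```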

SE: So the adjustments AI process control software makes are tweaks to inputs that provide consistent outputs?

Herlocker: Yes, I preach this all the time. By letting AI automate tasks that were traditionally very manual and time-consuming, engineers and technicians in the fab can shed many of the precision tasks they needed to perform to control their equipment, significantly reducing module operating costs. AI algorithms also can help identify integration issues — interacting effects between tools that are causing variability. We look at process control from two angles. Software can autonomously control the tool by modulating the recipe parameters in response to sensor readings and metrology. But your autonomous control cannot control the process if your equipment is not doing what it is supposed to do, so we developed a separate AI learning platform that ensures equipment is performing to specification. It brings together all the different data silos across the fab – the FDC trace data, metrology data, test data, equipment data, and maintenance data. The aggregation of all that data is critical to understanding the causes of variation in equipment. This is where ML algorithms can automatically sift through massive amounts of data to help process engineers and data scientists determine what parameters are most influencing their process outcomes.
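
One standard way an ML platform can sift aggregated fab data for the most influential parameters is permutation importance: shuffle one input at a time and see how much the model's predictions degrade. A minimal sketch on assumed synthetic data follows; the column names and method choice are illustrative, not a description of Tignis's platform.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
# Hypothetical aggregated dataset: FDC traces, equipment counters, etc.,
# flattened into one row per wafer, plus a metrology outcome.
names = [f"param_{i}" for i in range(10)]
X = rng.normal(size=(1000, 10))
y = 2.0 * X[:, 3] - 1.5 * X[:, 7] + 0.1 * rng.normal(size=1000)

model = RandomForestRegressor(random_state=0).fit(X, y)
imp = permutation_importance(model, X, y, n_repeats=5, random_state=0)

# Rank parameters by how much shuffling each one degrades the model,
# pointing engineers at the inputs that most drive the outcome.
for i in np.argsort(imp.importances_mean)[::-1][:3]:
    print(names[i], round(imp.importances_mean[i], 3))
```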

SE: Which process tools benefit the most from AI modeling of advanced process control?

Herlocker: We see the most interest in thin film deposition tools. The physics involved in plasma etching and plasma-enhanced CVD is non-linear, which is why you can get much better control with ML modeling. You also can model how the process and equipment evolve over time. For example, every time you run a batch through the PECVD chamber you get some amount of material accumulation on the chamber walls, and that changes the physics and chemistry of the process. AI can build a predictive model of that chamber. In addition to reacting to what it sees in the chamber, it also can predict what the chamber is going to look like for the next run, so the ML model can tweak the input parameters before you even see the feedback.
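
A toy run-to-run sketch of that feedforward idea: a stand-in "chamber" drifts as material accumulates on the walls, and the controller pre-compensates the predicted drift before metrology feedback arrives. The drift model, gains, and target values are invented for illustration, not taken from any real PECVD process.

```python
# Toy run-to-run controller: feedback on measured thickness plus a
# feedforward term that predicts drift from chamber-wall accumulation.
TARGET = 100.0        # desired thickness (arbitrary units)
K_FB = 0.5            # feedback gain on last run's error
DRIFT_PER_RUN = 0.3   # assumed thickness loss per run as walls coat
POWER_GAIN = 1.0      # assumed thickness change per unit of RF power

recipe_power = 50.0
accumulation = 0.0

def process(power: float, accumulation: float) -> float:
    """Stand-in for the real chamber: output falls as material builds up."""
    return 50.0 + POWER_GAIN * power - DRIFT_PER_RUN * accumulation

for run in range(5):
    # Feedforward: compensate the *predicted* next-run drift up front.
    recipe_power += DRIFT_PER_RUN / POWER_GAIN
    thickness = process(recipe_power, accumulation)
    # Feedback: correct the residual error seen at metrology.
    recipe_power += K_FB * (TARGET - thickness)
    accumulation += 1.0
    print(f"run {run}: thickness={thickness:.2f}, power={recipe_power:.2f}")
```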

SE: How do engineers react to the idea that the AI will be shifting the tool recipe?

Herlocker: That is a good question. Depending on the customer, they have different levels of comfort about how frequently things should change, and how much human oversight there needs to be for that change. We have seen everything from, ‘Just make a recommendation and one of our engineers will decide whether or not to accept that recommendation,’ to adjusting the recipe once a day, to autonomously adjusting for every run. The whole idea behind these adjustments is for variability reduction and drift management, and customers weigh the targeted results versus the perceived risk of taking a novel approach.

SE: Does this involve building confidence in AI-based approaches?

Herlocker: Absolutely, and our systems have a large number of fail-safes, and some limits are hard-coded. We have people with PhDs in chemical engineering and material science who have operated these tools for years. These experts understand the physics of what is happening in these tools, and they have the practical experience to know what level of change can be expected or not.
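
A minimal sketch of what such hard-coded fail-safes can look like in software: the ML controller may propose any recipe change, but only a clamped, rate-limited change ever reaches the tool. The knob names and limit values here are hypothetical.

```python
# Hard-coded guard rails around an ML recipe recommendation. Limits
# and per-run step caps are illustrative placeholders.
HARD_LIMITS = {"rf_power_w": (400.0, 600.0), "pressure_mtorr": (8.0, 12.0)}
MAX_STEP = {"rf_power_w": 5.0, "pressure_mtorr": 0.2}  # per-run change cap

def apply_failsafe(current: dict, proposed: dict) -> dict:
    safe = {}
    for knob, value in proposed.items():
        lo, hi = HARD_LIMITS[knob]
        # Rate-limit the requested change, then clamp to absolute limits.
        step = max(-MAX_STEP[knob], min(MAX_STEP[knob], value - current[knob]))
        safe[knob] = min(hi, max(lo, current[knob] + step))
    return safe

now = {"rf_power_w": 500.0, "pressure_mtorr": 10.0}
ml_suggestion = {"rf_power_w": 560.0, "pressure_mtorr": 9.9}
print(apply_failsafe(now, ml_suggestion))
# -> rf_power_w moves only 5 W this run; the small pressure change passes.
```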

SE: How much of your modeling is physics-based?

Herlocker: In the beginning, all of our modeling was physics-based, because we were working with equipment makers on their next-generation tools. But now we are also bringing our technology to device makers, where we can also deliver a lot of value by squeezing the most juice out of a data-driven approach. The main challenge with physics models is they are usually IP-protected. When we work with equipment makers, they typically pay us to build those physics-based models so they cannot be shared with other customers.

SE: So are your target customers the toolmakers or the fabs?

Herlocker: They are both our target customers. Most of our sales and marketing efforts are focused on device makers with legacy fabs. In most cases, the fab manager has us engage with their team members to do an assessment. Frequently, that team includes a cross section of automation, process, and equipment teams. The automation team is most interested in reducing the time to detect some sort of deviation that is going to cause yield loss, scrap, or tool downtime. The process and equipment engineers are interested in reducing variability or controlling drift, which also increases chamber life.

For example, let's consider a PECVD tool. As I mentioned, every time you run the process, byproducts such as polymer materials build up on the chamber walls. You want a thickness of x in your deposition, but you get slightly different wafer thickness uniformity due to chamber drift caused by plasma confinement changes. Eventually, you must shut down the tool, wet clean the chamber, replace the preventive maintenance kit parts, and send them through the cleaning loop (i.e., to the cleaning vendor shop). Then you need to season the chamber and bring it back online. By controlling the process better, the PECVD team does not have to vent the chamber as often to clean parts. Just a 5% increase in chamber life can be quite significant from a maintenance cost reduction perspective (e.g., parts spend, refurb spend, cleaning spend). Reducing variability has a similarly large impact, particularly on a bottleneck tool, because the reduction directly contributes to higher or more stable yields via more 'sweet spot' processing time, and sometimes to better wafer throughput due to the longer chamber lifetime. The ROI story is more nuanced on non-bottleneck tools because they don't modulate fab revenue, but the ROI is still there; it is just more about chamber life stability.
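
To see why even 5% matters, here is a back-of-the-envelope calculation. All numbers below are hypothetical placeholders chosen for illustration, not figures from the interview.

```python
# Back-of-envelope ROI from a 5% longer chamber life. Every input
# here is an assumed placeholder, not real fab data.
runs_between_cleans = 1000   # baseline runs per wet-clean cycle
clean_cost = 20_000.0        # parts + refurb + cleaning + labor ($)
annual_runs = 50_000         # wafers through the chamber per year

baseline_cleans = annual_runs / runs_between_cleans             # 50 cleans
improved_cleans = annual_runs / (runs_between_cleans * 1.05)    # ~47.6

savings = (baseline_cleans - improved_cleans) * clean_cost
print(f"annual maintenance savings per chamber: ${savings:,.0f}")
# ~ $47,600/year, before counting the value of the extra uptime itself.
```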

SE: Where does this go next?

Herlocker: We also are working with OEMs on next-generation toolsets. Using AI/ML as the core of process control enables equipment makers to control processes that are impossible to implement with existing control strategies and software. For example, imagine on each process step there are a million different parameters that you can control. Further imagine that changing any one parameter has a global effect on all the other parameters, and only by co-varying all the million parameters in just the right way will you get the ideal outcome. And to further complicate things, toss in run-to-run variance, so that the right solution continues to change over time. And then there is the need to do this more than 200 times per hour to support high-volume manufacturing. AI/ML enables this kind of process control, which in turn will enable a step function increase in the ability to produce more complex devices more reliably.

SE: What additional changes do you see from AI-based algorithms?

Herlocker: Machine learning will dramatically improve the agility and productivity of the facility broadly. For example, process engineers will spend less time chasing issues and have more time to implement continuous improvement. Maintenance engineers will have time to do more preventive maintenance. Agility and resiliency — the ability to rapidly adjust to or maintain operations, despite disturbances in the factory or market — will increase. If you look at ML combined with upcoming generative AI capabilities, within a year or two we are going to have agents that effectively will understand many aspects of how equipment or a process works. These agents will make good engineers great, and enable better capture, aggregation, and transfer of manufacturing knowledge. In fact, we have some early examples of this running in our labs. These ML agents capture and ingest knowledge very quickly. So when it comes to implementing the vision of smart factories, machine learning automation will have a massive impact on manufacturing in the future.

