3.5D: The Great Compromise

By Ed Sperling | August 21, 2024, 09:01

The semiconductor industry is converging on 3.5D as the next best option in advanced packaging, a hybrid approach that includes stacking logic chiplets and bonding them separately to a substrate shared by other components.

This assembly model satisfies the need for big increases in performance while sidestepping some of the thorniest issues in heterogeneous integration. It establishes a middle ground between 2.5D, which already is in widespread use inside of data centers, and full 3D-ICs, which the chip industry has been struggling to commercialize for the better part of a decade.

A 3.5D architecture offers several key advantages:

  • It creates enough physical separation to effectively address thermal dissipation and noise.
  • It provides a way to add more SRAM into high-speed designs. SRAM has been the go-to choice for processor cache since the mid-1960s, and remains an essential element for faster processing. But SRAM no longer scales at the same rate as digital transistors, so it is consuming more real estate (in percentage terms) at each new node. And because the size of a reticle is fixed, the best available option is to add area by stacking chiplets vertically.
  • By thinning the interface between processing elements and memory, a 3.5D approach also can shorten the distances that signals need to travel and greatly improve processing speeds well beyond a planar implementation. This is essential with large language models and AI/ML, where the amount of data that needs to be processed quickly is exploding.

Chipmakers still point to fully integrated 3D-ICs as the best performing alternative to a planar SoC, but packing everything into a 3D configuration makes it harder to deal with physical effects. Thermal dissipation is probably the most difficult to contend with. Workloads can vary significantly, creating dynamic thermal gradients and trapping heat in unexpected places, which in turn reduce the lifespan and reliability of chips. On top of that, power and substrate noise become more problematic at each new node, as do concerns about electromagnetic interference.

“What the market has adopted first is high-performance chips, and those produce a lot of heat,” said Marc Swinnen, director of product marketing at Ansys. “They have gone for expensive cooling systems with a huge number of fans and heat sinks, and they have opted for silicon interposers, which arguably are some of the most expensive technologies for connecting chips together. But it also gives the highest performance and is very good for thermal because it matches the coefficient of thermal expansion. Thermal is one of the big reasons that’s been successful. In addition to that, you may want bigger systems with more stuff that you can’t fit on one chip. That’s just a reticle-size limitation. Another is heterogeneous integration, where you want multiple different processes, like an RF process or the I/O, which don’t need to be in 5nm.”

A 3.5D assembly also provides more flexibility to add processor cores, and higher yield because known good die can be manufactured and tested separately, a concept pioneered by Xilinx at 28nm in 2011.

3.5D is a loose amalgamation of all these approaches. It can include two to three chiplets stacked on top of each other, or even multiple stacks laid out horizontally.

“It’s limited vertical, and not just for thermal reasons,” said Bill Chen, fellow and senior technical advisor at ASE Group. “It’s also for performance reasons. But thermal is the limiting factor, and we’ve talked about many different materials to help with that — diamond and graphene — but that limitation is still there.”

This is why the most likely combination, at least initially, will be processors stacked on SRAM, which simplifies the cooling. The heat generated by high utilization of different processing elements can be removed with heat sinks or liquid cooling. And with one or more thinned out substrates, signals will travel shorter distances, which in turn uses less power to move data back and forth between processors and memory.

“Most likely, this is going to be logic over memory on a logic process,” said Javier DeLaCruz, fellow and senior director of Silicon Ops Engineering at Arm. “These are all contained within an SoC normally, but a portion of that is going to be SRAM, which does not scale very well from node to node. So having logic over memory and a logic process is really the winning solution, and that’s one of the better use cases for 3D because that’s what really shortens your connectivity. A processor generally doesn’t talk to another processor. They talk to each other through memory, so having the memory on a different floor with no latency between them is pretty attractive.”

The SRAM doesn’t necessarily have to be at the same advanced node as the processors, which also helps with yield and reliability. At a recent Samsung Foundry event, Taejoong Song, the company’s vice president of foundry business development, showed a roadmap of a 3.5D configuration using a 2nm chiplet stacked on a 4nm chiplet next year, and a 1.4nm chiplet on top of a 2nm chiplet in 2027.


Fig. 1: Samsung’s heterogeneous integration roadmap showing stacked DRAM (HBM), chiplets and co-packaged optics. Source: Samsung Foundry

Intel Foundry’s approach is similar in many ways. “Our 3.5D technology is implemented on a substrate with silicon bridges,” said Kevin O’Buckley, senior vice president and general manager of Foundry Services at Intel. “This is not an incredibly costly, low-yielding, multi-reticle form-factor silicon, or even RDL. We’re using thin silicon slices in a much more cost-efficient fashion to enable that die-to-die connectivity — even stacked die-to-die connectivity — through a silicon bridge. So you get the same advantages of silicon density, the same SI (signal integrity) performance of that bridge without having to put a giant monolithic interposer underneath the whole thing, which is both cost- and capacity-prohibitive. It’s working. It’s in the lab and it’s running.”


Fig. 2: Intel’s 3.5D model. Source: Intel

The strategy here is partly evolutionary — 3.5D has been in R&D for at least several years — and partly revolutionary, because thinning the interconnect layers, handling those thinner layers, and bonding them are still works in progress. There is a potential for warping, cracking, or other latent defects, and dynamically configuring data paths to maximize throughput is an ongoing challenge. But there have been significant advances in thermal management on two- and three-chiplet stacks.

“There will be multiple solutions,” said C.P. Hung, vice president of corporate R&D at ASE. “For example, besides the device itself and an external heat sink, a lot of people will be adding immersion cooling or local liquid cooling. So for the packaging, you can probably also expect to see the implementation of a vapor chamber, which will add a good interface from the device itself to an external heat sink. With all these challenges, we also need to target a different pitch. For example, nowadays you see mass production with a 45 to 40 pitch. That is a typical bumping solution. We expect the industry to move to a 25 to 20 micron bump pitch. Then, to go further, we need hybrid bonding, which is a less than 10 micron pitch.”


Fig. 3: Today’s interposers support more than 100,000 I/Os at a 45µm pitch. Source: ASE
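
To put those pitch numbers in perspective, the back-of-the-envelope sketch below estimates how many die-to-die connections fit in a given interconnect area at each pitch Hung cites. The 200mm² area is an illustrative assumption chosen to land near the roughly 100,000 I/Os shown in the figure for a 45µm pitch; the point is simply that halving the pitch roughly quadruples the achievable density.

```python
# Back-of-the-envelope I/O density vs. interconnect pitch.
# Pitch values follow the ranges quoted above; the 200 mm^2 interconnect
# area is an illustrative assumption, not an ASE figure.

def io_count(area_mm2: float, pitch_um: float) -> int:
    """Approximate pad count on a square grid: one pad per pitch^2."""
    pads_per_mm2 = (1000.0 / pitch_um) ** 2
    return int(area_mm2 * pads_per_mm2)

AREA_MM2 = 200.0  # assumed interconnect area
for pitch_um in (45.0, 25.0, 10.0):  # today's bumps, next-gen bumps, hybrid bonding
    print(f"{pitch_um:4.0f} um pitch -> ~{io_count(AREA_MM2, pitch_um):,} connections")
```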

Hybrid bonding solves another thorny problem, which is co-planarity across thousands of micro-bumps. “People are starting to realize that the densities we’re interconnecting require a level of flatness, which the guys who make traditional things to be bonded are having a hard time meeting with reasonable yield,” said David Fromm, COO at Promex Industries. “That makes it hard to build them, and the thinking is, ‘So maybe we’ve got to do something else.’ You’re starting to see some of that.”

Taming the Hydra
Managing heat remains a challenge, even with all the latest advances and a 3.5D assembly, but the ability to isolate the thermal effects from other components is the best option available today, and possibly well into the future. Still, there are other issues to contend with. Even 2.5D isn’t easy, and a large percentage of the 2.5D implementations have been bespoke designs by large systems companies with very deep pockets.

One of the big remaining challenges is closing timing so that signals arrive at the right place at the right fraction of a second. This becomes harder as more elements are added into chips, and in a 3.5D or 3D-IC, this can be incredibly complex.

“Timing ultimately is the key,” said Sutirtha Kabir, R&D director at Synopsys. “It’s not guaranteed that at whatever your temperature is, you can use the same library for timing. So the question is how much thermal- and IR-aware timing do you have to do? These are big systems. You have to make sure your sign-off is converging. There are two things coming out. There are a bunch of multi-physics effects that are all clumped together. And yes, you could traditionally do one at a time as sign-off, but that isn’t going to work very well. You need to figure out how to solve these problems simultaneously. Ultimately, you’re doing one design. It’s not one for thermal, one for IR, one for timing. The second thing is the data is exploding. How do you efficiently handle the data, because you cannot wait for days and days of runtime and simulation and analysis?”

Physically assembling these devices isn’t easy, either. “The challenge here is really in the thermal, electrical, and mechanical connection of all these various die with different thicknesses and different coefficients of thermal expansion,” said Intel’s O’Buckley. “So with three die, you’ve got the die and an active base, and those are substantially thinned to enable them to come together. And then EMIB is in the substrate. There’s always intense thermal-mechanical qualification work done to manage not just the assembly, but to ensure in the final assembly — the second-level assembly when this is going through system-level card attach — that this thing stays together.”

And depending upon demands for speed, the interconnects and interconnect materials can change. “Hybrid bonding gives you, by far, the best signal and power density,” said Arm’s DeLaCruz. “And it gives you the best thermal conductivity, because you don’t have that underfill that you would otherwise have to put in between the die, which is a pretty significant barrier. This is likely where the industry will go. It’s just a matter of having the production base.”

Hybrid bonding has been used for years for image sensors using wafer-on-wafer connections. “The tricky part is going into the logic space, where you’re moving from wafer-on-wafer to a die-on-wafer process, which is more complex,” DeLaCruz said. “While it currently would cost more, that’s a temporary problem because there’s not much of an installed base to support it and drive down the cost. There’s really no expensive material or equipment costs.”

Toward mass customization
All of this is leading toward the goal of choosing chiplets from a menu and then rapidly connecting them into some sort of architecture that is proven to work. That may not materialize for years. But commercial chiplets will show up in advanced designs over the next couple years, most likely in high-bandwidth memory with a customized processor in the stack, with more following that path in the future.

At least part of this will depend on how standardized the processes for designing, manufacturing, and testing become. “We’re seeing a lot of 2.5D from customers able to secure silicon interposers,” said Ruben Fuentes, vice president for the Design Center at Amkor Technology. “These customers want to place their chiplets on an interposer, then the full module is placed on a flip-chip substrate package. We also have customers who say they either don’t want to use a silicon interposer or cannot secure them. They prefer an RDL interconnect with S-SWIFT or with S-Connect, which serves as an interposer in very dense areas.”

But with at least a third of these leading designs only for internal use, and the remainder confined to large processor vendors, the rest of the market hasn’t caught up yet. Once it does, that will drive economies of scale and open the door to more complete assembly design kits, commercial chiplets, and more options for customization.

“Everybody is generally going in the same direction,” said Fuentes. “But not everything is the same height. HBMs are pre-packaged and are taller than ICs. HBMs could have 12 or 16 ICs stacked inside. It makes a difference from a co-planarity and thermal standpoint, and metal balancing on different layers. So now vendors are having a hard time processing all this data because suddenly you have these huge databases that are a lot bigger than the standard packaging databases. We’re seeing bridges, S-Connect, SWIFT, and then S-SWIFT. This is new territory, and we’re seeing a performance gap in the packaging tools. There’s work that needs to be done here, but software vendors have been very proactive in finding solutions. Additionally, these packages need to be routed. There is limited automated routing, so a good amount of interactive routing is still required, so it takes a lot of time.”


Fig. 4: Packaging roadmap showing bridge and hybrid bonding connections for modules and chiplets, respectively. Source: Amkor Technology

What’s missing
The key challenges ahead for 3.5D are proven reliability and customizability — requirements that are seemingly contradictory, and which are beyond the control of any single company. There are four major pieces to making all of this work.

EDA is the first important piece of the puzzle, and the challenge extends beyond just a single chip. “The IC designers have to think about a lot of things concurrently, like thermal, signal integrity, and power integrity,” said Keith Lanier, technical product management director at Synopsys. “But even beyond that, there’s a new paradigm in terms of how people need to work. Traditional packaging folks and IC designers need to work closely together to make these 3.5D designs successful.”

It’s not just about doing more with the same or fewer people. It’s doing more with different people, too. “It’s understanding the architecture definition, the functional requirements, constraints, and having those well-defined,” Lanier said. “But then it’s also feasibility, which includes partitioning and technology selection, and then prototyping and floor-planning. This is lots and lots of data that is required to be generated, and you need analysis-driven exploration, design, and implementation. And AI will be required to help designers and system design teams manage the sheer complexity of these 3.5D designs.”

Process/assembly design kits are a second critical piece, and this is likely to be split between the foundries and the OSATs. “If the customer wants a silicon interposer for a 2.5D package, it would be up to the foundry that’s going to manufacture the interposer to provide the PDK. We would provide the PDK for all of our products, such as S-SWIFT and S-Connect,” said Amkor’s Fuentes.

Setting realistic parameters is the third piece of the puzzle. While the type of processing elements and some of the analog functions may change — particularly those involving power and communication — most of the components will remain the same. That determines what can be pre-built and pre-tested, and the speed and ease of assembly.

“A lot of the standards that are being deployed, like UCIe interfaces and HBM interfaces are heading to where 20% is customization and 80% is on the shelf,” said Intel’s O’Buckley. “But we aren’t there today. At the scale that our customers are deploying these products, the economics of spending that extra time to optimize an implementation is a decimal point. It’s not leveraging 80/20 standards. We’ll get there. But most of these designs you can count on your fingers and toes because of the cost and scale required to do them. And until the infrastructure for standards-based chiplets gets mature, the barrier of entry for companies that want to do this without that scale is just too high. Still, it is going to happen.”

Ensuring processes are consistent is the fourth piece of the puzzle. The tools and the individual processes don’t need to change. “The customer has a ‘target’ for the outcome they want for a particular tool, which typically is a critical dimension measured by a metrology tool,” said David Park, vice president of marketing at Tignis. “As long as there is some ‘measurement’ that determines the goodness of some outcome, which typically is the result of a process step, we can either predict the bad outcome — and engineers have to take some corrective or preventive action — or we can optimize the recipe of that tool in real time to keep the result in the range they want.”

Park noted there is a recipe that controls the inputs. “The tool does whatever it is supposed to do,” he said. “Then you measure the output to see how far you deviated from the acceptable output.”

The challenge is that inside of a 3.5D system, what is considered acceptable output is still being defined. There are many processes with different tolerances. Defining what is consistent enough will require a broad understanding of how all the pieces work together under specific workloads, and where the potential weaknesses are that need to be adjusted.

“One of the problems here is as these densities get higher and the copper pillars get smaller, the amount of space you need between the pillar and the substrate has to be highly controlled,” said Dick Otte, president and CEO of Promex. “There’s a conflict — not so much with how you fabricate the chip, because it usually has the copper pillars on it — but with the substrate. A lot of the substrate technologies are not inherently flat. It’s the same issue with glass. You’ve got a really nice flat piece of glass. The first thing you’re going to do is put down a layer of metal and you’re going to pattern it. And then you put down a layer of dielectric, and suddenly you’ve got a lump where the conductor goes. And now, where do you put the contact points? So you always have the one plan which is going to be the contact point where all the pillars come in. But what if I only need one layer and I don’t need three?”

Conclusion
For the past decade, the chip industry has been trying to figure out a way to balance faster processing, domain-specific designs, limited reticle size, and the enormous cost of scaling an SoC. After investigating nearly every possible packaging approach, interconnect, power delivery method, substrate and dielectric material, 3.5D has emerged as the front runner — at least for now.

This approach provides the chip industry with a common thread on which to begin developing assembly design kits, commercial chiplets, and to fill in the missing tools and services throughout the supply chain. Whether this ultimately becomes a springboard for full 3D-ICs, or a platform on which to use 3D stacking more effectively, remains to be seen. But for the foreseeable future, large chipmakers have converged on a path forward to provide orders of magnitude performance improvements and a way to contain costs. The rest of the industry will be working to smooth out that path for years to come.

Related Reading
Intel Vs. Samsung Vs. TSMC
Foundry competition heats up in three dimensions and with novel technologies as planar scaling benefits diminish.
3D Metrology Meets Its Match In 3D Chips And Packages
Next-generation tools take on precision challenges in three dimensions.
Design Flow Challenged By 3D-IC Process, Thermal Variation
Rethinking traditional workflows by shifting left can help solve persistent problems caused by process and thermal variations.
Floor-Planning Evolves Into The Chiplet Era
Automatically mitigating thermal issues becomes a top priority in heterogeneous designs.

The post 3.5D: The Great Compromise appeared first on Semiconductor Engineering.

Controlling Warpage In Advanced Packages

By Laura Peters | June 24, 2024, 09:01

Warpage is becoming a serious concern in advanced packaging, where a heterogeneous mix of materials can cause uneven stress points during assembly and packaging, and under real workloads in the field.

Warpage plays a critical role in determining whether an advanced package can be assembled successfully and meet long-term reliability targets. New advances, such as molding compounds with improved thermal properties, advanced modeling techniques, and creative architectures involving two molding steps are enabling greater control over package warpage, while also providing more flexibility to optimize a robust multi-chiplet system.

Warpage is the inevitable result of the mismatch in coefficients of thermal expansion (CTEs) between the silicon chip, molding compound, copper, polyimide, and other materials. It changes throughout the assembly process, and can cause cracking or delamination failures. The most vulnerable spots include low-k cores, which are subject to cracking and shorts, or non-wet failures in micro-bumps.

“One thing that’s very hot these days is the discussion around warpage and stress of the package,” said Kenneth Larsen, senior director of product management at Synopsys. “This is not only when you’re going through the manufacturing process, where you change temperatures. That can cause warpage. But it’s also when the device you’re building needs to be inserted into a socket. You can have issues around warpage there, as well.”

Even when warpage is effectively addressed during assembly and packaging, a device still may warp under heavy usage in the field. This is particularly true with heterogeneous designs, where chiplets are developed using different materials or processes, and where logic is concentrated in specific areas of an asymmetrical package.

The transition to multi-chiplet packaging is accelerating rapidly due to demands for ever-higher processing speeds and low latency, especially in mobile, automotive and high-performance compute/AI applications. Engineers increasingly are turning to modeling and simulation to understand temperature-dependent warpage, which can vary depending on die thickness, mold-to-silicon ratio, and substrate type. Organic substrates are very attractive because they are inexpensive and can be customized to any size, but they are much more flexible and susceptible to warpage than silicon substrates.

All these considerations point to the need for thermal and structural models of complex heterogeneous assemblies and packages. “Advanced modeling allows companies to simulate the behavior of different materials, thermal dynamics, and mechanical stresses during the assembly process,” said Mike Kelly, vice president of chiplets/FCBGA integration at Amkor. “Through this virtual experimentation, one can predict and mitigate potential challenges, ensuring that the final product meets stringent quality and reliability standards.”

How warpage happens
The assembly process includes multiple heating and cooling steps, which induce a certain amount of deformation between adjacent materials with different thermal and mechanical properties. In advanced packaging, warpage in the 100 micron range is not unheard of.

One of the reasons warpage is such a problem today is the large size of chiplets and the very tight process windows for chiplets, redistribution layers (RDLs), substrates, and bumps of various sizes. The relative expansion and contraction of neighboring materials depends on differences in the material’s CTE, which spells out the increase in size with each degree change of temperature (ppm/°C).

“Chiplets are typically relatively large die,” said Dick Otte, CEO of Promex Industries. “In the iPad, it’s 20 x 30 millimeters, with as many as 10,000 I/Os — usually copper pillar. Just simply taking a single die and putting it down on a substrate can be quite a challenge because the pitches are so small. So what’s critical for these assemblies is controlling warpage and planarity. It needs to stay planar through the whole reflow solder process to bridge that gap between the copper pillar and the contact on the circuit board without warping.”

Warpage can either happen upward, bending at the edges (smiling), or downward (crying), depending on the relative CTEs of the materials in the stack. Silicon, for example, is 2.8; copper is 17; FR4 PCB is 14 to 17 ppm/°C. The worst CTE mismatch is between a silicon interposer and an organic substrate.

It helps to envision stacks in packaging as groups of materials. “You have to look at the CTE of the materials and their reaction at temperatures, so you’ve got relatively low expansion copper on the top and solder at the bottom,” Otte said. “They’re kind of equal with a high expansion dielectric in the middle, so that when you heat this thing up, it kind of expands by the same amount. If you just put all the copper on the top, that thing is going to warp toward the copper side when you heat it up. Copper is 15 ppm per degree C. The organics are more like twice that, at 25 to 30 ppm/°C.”

Other key metrics are the modulus, or the elasticity of a material, and the glass transition temperature (Tg), the temperature at which a material begins to flow. These values are related, too. For example, when it comes to the thermal behavior of polymers like epoxy molding compound (EMC), the modulus tends to plummet above its glass transition temperature. That happens because polymer chains tend to slide freely in the liquid state, whereas they are stiffer in a solid form.
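
As a first-order illustration of how those CTE numbers turn into stress, the sketch below computes the free linear expansion (ΔL = α·L·ΔT) of each material over a reflow-scale temperature swing. The 30mm span and 200°C excursion are illustrative assumptions; real warpage depends on the full stack and, as described next, is typically predicted with finite element modeling.

```python
# First-order CTE mismatch: free linear expansion dL = alpha * L * dT.
# CTE values come from the text above; the 30 mm span and 200 degC swing
# are illustrative assumptions. Real warpage prediction requires FEM.

CTE_PPM_PER_C = {
    "silicon": 2.8,
    "copper": 17.0,
    "FR4 substrate": 17.0,
    "organic dielectric": 30.0,
}

SPAN_UM = 30_000.0   # 30 mm lateral span (assumed)
DELTA_T = 200.0      # roughly room temperature to solder reflow (assumed)

expansion_um = {m: cte * 1e-6 * SPAN_UM * DELTA_T for m, cte in CTE_PPM_PER_C.items()}
for material, dl in expansion_um.items():
    print(f"{material:>18}: {dl:6.1f} um free expansion over {DELTA_T:.0f} degC")

mismatch = expansion_um["organic dielectric"] - expansion_um["silicon"]
print(f"Si vs. organic mismatch: ~{mismatch:.0f} um that the bonded stack must absorb")
```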

In addition to solder reflow, warpage tends to occur at the post-molding curing step. Hung-Chun Yang and colleagues at ASE recently determined that die thickness substantially influences warpage levels measured at multiple steps in an existing process for a chip-first fan-out chip-on-substrate package. [1] They noted that “severe wafer warpage occurred after curing, resulting in misalignment and difficulty in handling in the subsequent process.” To reduce package warpage, the team replaced a metal carrier/thin film approach with a glass carrier. The team also determined that a 3D finite element method (FEM) model captured the warpage behavior and agreed well with actual test vehicle data.


Fig. 1: The glass carrier in the improved flow (right) induced less warpage than the original flow. Increasing the die thickness also dramatically reduced warpage. Source: ASE

The chip-first process begins with probing the fabricated wafers, thinning them, and then electroplating copper studs prior to sawing and placement of known good die in two schemes. The initial process used a metal carrier that was removed after molding and replaced with a thin film. The improved process uses a glass carrier that remains in place through molding, curing, mold grinding, RDL, and copper pillar processing, and is then de-bonded.

Warpage reaches its maximum level during post-mold curing, and it changes most dramatically at the curing step and after glass carrier debonding. The glass carrier flow reduces warpage overall. In addition, the ASE engineers determined they can reduce warpage an additional 35% by increasing the wafer thickness from 0.54mm to 0.7mm.

A second strategy for reducing warpage involves using EMCs with different thermal properties, especially when the process calls for two molding steps. Amkor engineers recently evaluated the reliability performance of two high-performance multi-chiplet packages by modeling and fabricating two test vehicles. One used a module approximately the size of one reticle, containing 1 ASIC, 2 HBMs and 2 bridge die (33 x 26mm). The second module was 3 reticles in size, with 2 ASICs, 8 HBMs and 10 bridge dies (54 x 46mm). [2] Heejun Jang and colleagues at Amkor Technology Korea carried out modeling and simulation using the Ansys Parametric Design Language (APDL) version 16.1 simulator and compared results with test vehicles containing dummy dies.

Amkor’s die-last S-Connect process starts with a carrier wafer, on which copper studs for the bridge die and copper pillars are fabricated (see figure 2). The integrated passives and bridge die are embedded in the first mold, which is cured and then ground back. RDL is deposited on the mold along with solder capture pads, and dies are attached to the pads using micro-bumps. Then, the solder is reflowed and underfilled. The second mold around the face-up die is cured and ground back, followed by C4 bumping on the bottom for flip-chip connect to the substrate. The simulation analyzes warpage with 9 combinations of 3 different EMCs with high, medium, and low CTEs (7 to 12 ppm below Tg, 22 to 46 ppm above Tg) and high-to-low glass transition temperatures (145°C to 175°C). [2]


Fig. 2: Process flow for S-Connect Package. Source: Amkor

Warpage as a function of EMC choice showed all materials followed the same smile pattern at room temperature, and cry pattern at high temperature (250°C). The EMCs with the lower CTEs caused less warpage. And in cases where the mold occupies more area relative to chip area, the warpage level is more pronounced. More importantly, the warpage levels were roughly 50% higher for 450µm die relative to 650µm-thick die. Interestingly, the thicker silicon die was 3X more effective in controlling warpage relative to EMC material selection on overall module warpage, so die thickness is the biggest lever in reducing warpage in cases where it can be increased.

Reliability testing is paramount once the package configuration is chosen. Amkor ran its advanced packaging test vehicles through moisture resistance testing, highly accelerated stress testing, thermal cycling condition B, and high temperature storage tests. These are needed to root out infant mortality issues, and cross-sectional analysis can reveal any cracks or latent defects that could precipitate into failures in field use.

While the above example may constitute a large multi-chiplet package today, package sizes are growing larger still, which means even more attention to warpage will be needed. Increasingly, this will drive assembly lines toward digital twins, or virtual representations, that enable process and package optimization.

“By creating virtual representations of the semiconductor assembly line, one can identify potential areas of concern and optimize control strategies,” said Amkor’s Kelly. “Virtual fabrication in package assembly enables companies to assess the impact of design changes on manufacturing processes before physical prototypes are even created. This not only accelerates the product development cycle, but also minimizes the risk of costly errors.”

The early identification of potential bottlenecks further shortens cycle times, and enhances overall efficiency.

Conclusion
Going forward, even greater attention to mechanical and thermal properties will be required of teams of designers and packaging engineers. “Tight tolerances in new packaging design require an accurate analysis of mechanical and electrical tolerances during stack up,” said Curtis Zwenger, vice president of engineering and technical marketing at Amkor. “Increasingly higher levels of process capability are required, with common metrics like CpK. Identification of these critical interactions in the design can be accomplished early in process development with this type of modeling. In turn, these analyses guide the investment of advanced process control to ensure process capability is maintained.”

References

  1. C. Yang, et al, “Investigation of Wafer Warpage Evolution Based on Fan-out Chip-first Process,” 2024 International Conference on Electronics Packaging (ICEP), Toyama, Japan, 2024, pp. 151-152, doi: 10.23919/ICEP61562.2024.10535572.
  2. H. Jang et al., “Reliability Performance of S-Connect Module (Bridge Technology) for Heterogeneous Integration Packaging,” 2023 IEEE 73rd Electronic Components and Technology Conference (ECTC), Orlando, FL, USA, 2023, pp. 1027-1031, doi: 10.1109/ECTC51909.2023.00175.

Related Reading
What Works Best For Chiplets
Not all chiplets are interchangeable, and options will be limited.

The post Controlling Warpage In Advanced Packages appeared first on Semiconductor Engineering.

Predicting And Preventing Process Drift

By Gregory Haley | April 22, 2024, 09:05

Increasingly tight tolerances and rigorous demands for quality are forcing chipmakers and equipment manufacturers to ferret out minor process variances, which can create significant anomalies in device behavior and render a device non-functional.

In the past, many of these variances were ignored. But for a growing number of applications, that’s no longer possible. Even minor fluctuations in deposition rates during a chemical vapor deposition (CVD) process, for example, can lead to inconsistencies in layer uniformity, which can impact the electrical isolation properties essential for reliable circuit operation. Similarly, slight variations in a photolithography step can cause alignment issues between layers, leading to shorts or open circuits in the final device.

Some of these variances can be attributed to process error, but more frequently they stem from process drift — the gradual deviation of process parameters from their set points. Drift can occur in any of the hundreds of process steps involved in manufacturing a single wafer, subtly altering the electrical properties of chips and leading to functional and reliability issues. In highly complex and sensitive ICs, even the slightest deviations can cause defects in the end product.

“All fabs already know drift. They understand drift. They would just like a better way to deal with drift,” said David Park, vice president of marketing at Tignis. “It doesn’t matter whether it’s lithography, CMP (chemical mechanical polishing), CVD or PVD (chemical/physical vapor deposition), they’re all going to have drift. And it’s all going to happen at various rates because they are different process steps.”

At advanced nodes and in dense advanced packages, where a nanometer can be critical, controlling process drift is vital for maintaining high yield and ensuring profitability. By rigorously monitoring and correcting for drift, engineers can ensure that production consistently meets quality standards, thereby maximizing yield and minimizing waste.

“Monitoring and controlling hundreds of thousands of sensors in a typical fab requires the ability to handle petabytes of real-time data from a large variety of tools,” said Vivek Jain, principal product manager, smart manufacturing at Synopsys. “Fabs can only control parameters or behaviors they can measure and analyze. They use statistical analysis and error budget breakdowns to define upper control limits (UCLs) and lower control limits (LCLs) to monitor the stability of measured process parameters and behaviors.”
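
A minimal sketch of what a UCL/LCL check looks like in code, assuming a single monitored parameter and the common ±3σ rule; the sample values are made up, and real fabs derive their limits from error budgets and qualified baseline data, as Jain describes.

```python
# Minimal SPC-style limit check for a single monitored parameter.
# The sample data and the 3-sigma rule are illustrative; production fabs
# derive UCL/LCL from error budgets and qualified baseline runs.
from statistics import mean, stdev

def control_limits(baseline, n_sigma=3.0):
    """Return (lcl, center, ucl) computed from a stable baseline sample."""
    mu, sigma = mean(baseline), stdev(baseline)
    return mu - n_sigma * sigma, mu, mu + n_sigma * sigma

baseline = [50.1, 49.8, 50.0, 50.2, 49.9, 50.1, 50.0, 49.7, 50.3, 50.0]  # nm CD
lcl, center, ucl = control_limits(baseline)

new_readings = [50.1, 50.4, 50.9, 51.2]   # a drifting tool (hypothetical)
for i, x in enumerate(new_readings, 1):
    status = "OK" if lcl <= x <= ucl else "OUT OF CONTROL"
    print(f"run {i}: CD={x:.1f} nm  limits=[{lcl:.2f}, {ucl:.2f}]  {status}")
```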

Dialing in legacy fabs
In legacy fabs — primarily 200mm — most of the chips use 180nm or older process technology, so process drift does not need to be as precisely monitored as in the more advanced 300mm counterparts. Nonetheless, significant divergence can lead to disparities in device performance and reliability, creating a cascade of operational challenges.

Manufacturers operating at older technology nodes might lack the sophisticated, real-time monitoring and control methods that are standard in cutting-edge fabs. While the latter have embraced ML to predict and correct for drift, many legacy operations still rely heavily on periodic manual checks and adjustments. Thus, the management of process drift in these settings is reactive rather than proactive, making changes after problems are detected rather than preventing them.

“There is a separation between 300-millimeter and 200-millimeter fabs,” said Park. “The 300-millimeter guys are all doing some version of machine learning. Sometimes it’s called advanced process control, and sometimes it’s actually AI-powered process control. For some of the 200-millimeter fabs with more mature process nodes, they basically have a recipe they set and a bunch of technicians looking at machines and looking at the CDs. When the drift happens, they go through their process recipe and manually adjust for the out-of-control processes, and that’s just what they’ve always done. It works for them.”

For these older fabs, however, the repercussions of process drift can be substantial. Minor deviations in process parameters, such as temperature or pressure during the deposition or etching phases, gradually can lead to changes in the physical structure of the semiconductor devices. Over time, these minute alterations can compound, resulting in layers of materials that deviate from their intended characteristics. Such deviations affect critical dimensions and ultimately can compromise the electrical performance of the chip, leading to slower processing speeds, higher power consumption, or outright device failure.

The reliability equation is equally impacted by process drift. Chips are expected to operate consistently over extended periods, often under a range of environmental conditions. However, process-induced variability can weaken the device’s resilience, precipitating early wear-out mechanisms and reducing its lifetime. In situations where dependability is non-negotiable, such as in automotive or medical applications, those variations can have dire consequences.

But with hundreds of process steps for a typical IC, eliminating all variability in fabs is simply not feasible.

“Process drift is never going to not happen, because the processes are going to have some sort of side effect,” Park said. “The machines go out of spec and things like pumps and valves and all sorts of things need to be replaced. You’re still going to have preventive maintenance (PM). But if the critical dimensions are being managed correctly, which is typically what triggers the drift, you can go a longer period of time between cleanings or the scheduled PMs and get more capacity.”

Process drift pitfalls
Managing process drift in semiconductor manufacturing presents several complex challenges. Hysteresis, for example, is a phenomenon where the output of a process varies not solely because of current input conditions, but also based on the history of the states through which the process already has passed. This memory effect can significantly complicate precision control, as materials and equipment might not reset to a baseline state after each operational cycle. Consequently, adjustments that were effective in previous cycles may not yield the same outcomes due to accumulated discrepancies.

One common cause of hysteresis is thermal cycling, where repeated heating and cooling create mechanical stresses. Those stresses can be additive, releasing inconsistently based on temperature history.  That, in turn, can lead to permanent changes in the output of a circuit, such as a voltage reference, which affects its precision and stability.

In many field-effect transistors (FETs), hysteresis also can occur due to charge trapping. This happens when charges are captured in ‘trap states’ within the semiconductor material or at the interface with another material, such as an oxide layer. The trapped charges then can modulate the threshold voltage of the device over time and under different electrical biases, potentially leading to operational instability and variability in device performance.

Human factors also play a critical role in process drift, with errors stemming from incorrect settings adjustments, mishandling of materials, misinterpretation of operational data, or delayed responses to process anomalies. Such errors, though often minor, can lead to substantial variations in manufacturing processes, impacting the consistency and reliability of semiconductor devices.

“Once in production, the biggest source of variability is human error or inconsistency during maintenance,” said Russell Dover, general manager of service product line at Lam Research. “Wet clean optimization (WCO) and machine learning through equipment intelligence solutions can help address this.”

The integration of new equipment into existing production lines introduces additional complexities. New machinery often features increased speed, throughput, and tighter tolerances, but it must be integrated thoughtfully to maintain the stringent specifications required by existing product lines. This is primarily because the specifications and performance metrics of legacy chips have been long established and are deeply integrated into various applications with pre-existing datasheets.

“From an equipment supplier perspective, we focus on tool matching,” said Dover. “That includes manufacturing and installing tools to be identical within specification, ensuring they are set up and running identically — and then bringing to bear systems, tooling, software and domain knowledge to ensure they are maintained and remain as identical as possible.”

The inherent variability of new equipment, even those with advanced capabilities, requires careful calibration and standardization.

“Some equipment, like transmission electron microscopes, are incredibly powerful,” said Jian-Min Zuo, a materials science and engineering professor at the University of Illinois’ Grainger College of Engineering. “But they are also very finicky, depending on how you tune the machine. How you set it up under specific conditions may vary slightly every time. So there are a number of things that can be done when you try to standardize those procedures, and also standardize the equipment. One example is to generate a curate, like a certain type of test case, where you can collect data from different settings and make sure you’re taking into account the variability in the instruments.”

Process drift solutions
As semiconductor manufacturers grapple with the complexities of process drift, a diverse array of strategies and tools has emerged to address the problem. Advanced process control (APC) systems equipped with real-time monitoring capabilities can extract patterns and predictive insights from massive data sets gathered from various sensors throughout the manufacturing process.

By understanding the relationships between different process variables, APC can predict potential deviations before they result in defects. This predictive capability enables the system to make autonomous adjustments to process parameters in real-time, ensuring that each process step remains within the defined control limits. Essentially, APC acts as a dynamic feedback mechanism that continuously fine-tunes the production process.
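
For illustration, the run-to-run sketch below shows the flavor of such a feedback loop: an EWMA-style controller that re-centers a recipe setpoint after each run based on the measured deviation from target. The process gain, drift rate, and controller weight are all assumed values, not a description of any specific APC product.

```python
# Illustrative run-to-run EWMA controller: adjust a recipe setpoint each run
# so the measured output tracks its target despite slow tool drift.
# Process gain, drift rate, and controller weight are assumed values.

TARGET = 100.0      # desired output (e.g., film thickness in nm)
GAIN = 1.0          # nm of output per unit of recipe setting (assumed model)
LAMBDA = 0.4        # EWMA weight: how aggressively to correct

setpoint = 100.0
offset_estimate = 0.0
true_drift = 0.0

for run in range(1, 11):
    true_drift += 0.8                      # tool drifts 0.8 nm per run (assumed)
    measured = GAIN * setpoint + true_drift
    # Update the disturbance estimate, then re-center the recipe for the next run.
    offset_estimate = LAMBDA * (measured - GAIN * setpoint) + (1 - LAMBDA) * offset_estimate
    setpoint = (TARGET - offset_estimate) / GAIN
    print(f"run {run:2d}: measured={measured:6.2f}  next setpoint={setpoint:6.2f}")
```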

Fig. 1: Reduced process drift with AI/ML advanced process control. Source: Tignis

While APC proactively manages and optimizes the process to prevent deviations, fault detection and classification (FDC) reacts to deviations by detecting and classifying any faults that still occur.

FDC data serves as an advanced early-warning system. This system monitors the myriad parameters and signals during the chip fabrication process, rapidly detecting any variances that could indicate a malfunction or defect in the production line. The classification component of FDC is particularly crucial, as it does more than just flag potential issues. It categorizes each detected fault based on its characteristics and probable causes, vastly simplifying the trouble-shooting process. This allows engineers to swiftly pinpoint the type of intervention needed, whether it’s recalibrating instruments, altering processing recipes, or conducting maintenance repairs.

Statistical process control (SPC) is primarily focused on monitoring and controlling process variations using statistical methods to ensure the process operates efficiently and produces output that meets quality standards. SPC involves plotting data in real-time against control limits on control charts, which are statistically determined to represent the expected normal process behavior. When process measurements stray outside these control limits, it signals that the process may be out of control due to special causes of variation, requiring investigation and correction. SPC is inherently proactive and preventive, aiming to detect potential problems before they result in product defects.

“Statistical process control (SPC) has been a fundamental methodology for the semiconductor industry almost from its very foundation, as there are two core factors supporting the need,” said Dover. “The first is the need for consistent quality, meaning every product needs to be as near identical as possible, and second, the very high manufacturing volume of chips produced creates an excellent workspace for statistical techniques.”

While SPC, FDC, and APC might seem to serve different purposes, they are deeply interconnected. SPC provides the baseline by monitoring process stability and quality over time, setting the stage for effective process control. FDC complements SPC by providing the tools to quickly detect and address anomalies and faults that occur despite the preventive measures put in place by SPC. APC takes insights from both SPC and FDC to adjust process parameters proactively, not just to correct deviations but also to optimize process performance continually.

Despite their benefits, integrating SPC, FDC and APC systems into existing semiconductor manufacturing environments can pose challenges. These systems require extensive configuration and tuning to adapt to specific manufacturing conditions and to interface effectively with other process control systems. Additionally, the success of these systems depends on the quality and granularity of the data they receive, necessitating high-fidelity sensors and a robust data management infrastructure.

“For SPC to be effective you need tight control limits,” adds Dover. “A common trap in the world of SPC is to keep adding control charts (by adding new signals or statistics) during a process ramp, or maybe inheriting old practices from prior nodes without validating their relevance. The result can be millions of control charts running in parallel. It is not a stretch to state that if you are managing a million control charts you are not really controlling much, as it is humanly impossible to synthesize and react to a million control charts on a daily basis.”

This is where AI/ML becomes invaluable, because it can monitor the performance and sustainability of the new equipment more efficiently than traditional methods. By analyzing data from the new machinery, AI/ML can confirm observations, such as reduced accumulation, allowing for adjustments to preventive maintenance schedules that differ from older equipment. This capability not only helps in maintaining the new equipment more effectively but also in optimizing the manufacturing process to take full advantage of the technological upgrades.

AI/ML also facilitates a smoother transition when integrating new equipment, particularly in scenarios involving ‘copy exact’ processes where the goal is to replicate production conditions across different equipment setups. AI and ML can analyze the specific outputs and performance variations of the new equipment compared to the established systems, reducing the time and effort required to achieve optimal settings while ensuring that the new machinery enhances production without compromising the quality and reliability of the legacy chips being produced.

AI/ML
Being more proactive in identifying drift and adjusting parameters in real-time is a necessity. With a very accurate model of the process, you can tune your recipe to minimize that variability and improve both quality and yield.

“The ability to quickly visualize a month’s worth of data in seconds, and be able to look at windows of time, is a huge cost savings because it’s a lot more involved to get data for the technicians or their process engineers to try and figure out what’s wrong,” said Park. “AI/ML has a twofold effect, where you have fewer false alarms, and just fewer alarms in general. So you’re not wasting time looking at things that you shouldn’t have to look at in the first place. But when you do find issues, AI/ML can help you get to the root cause in the diagnostics associated with that much more quickly.”

When there is a real alert, AI/ML offers the ability to correlate multiple parameters and inputs that are driving that alert.

“Traditional process control systems monitor each parameter separately or perform multivariate analysis for key parameters that require significant effort from fab engineers,” adds Jain. “With the amount of fab data scaling exponentially, it is becoming humanly impossible to extract all the actionable insights from the data. Machine learning and artificial intelligence can handle big data generated within a fab to provide effective process control with minimal oversight.”

AI/ML also can look for other ways of predicting when drift is going to take a process out of specification. Those correlations can be univariate, bivariate, or multivariate. And a machine learning engine that can sift through tremendous amounts of data, and far more variables than any human could, also can turn up some interesting correlations.
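
A deliberately simple univariate version of that kind of prediction is sketched below: fit a linear trend to recent measurements and estimate how many runs remain before the parameter crosses its control limit. The data, limit, and straight-line model are illustrative assumptions; production systems rely on far richer multivariate and ML models.

```python
# Naive drift forecast: fit a linear trend to recent measurements and
# estimate how many runs remain before the parameter crosses its UCL.
# The data and limit are illustrative assumptions.
import numpy as np

measurements = np.array([50.0, 50.1, 50.1, 50.3, 50.4, 50.6, 50.7])  # recent runs
ucl = 51.5                                                            # assumed limit

runs = np.arange(len(measurements))
slope, intercept = np.polyfit(runs, measurements, 1)  # least-squares trend

if slope > 0:
    runs_to_limit = (ucl - measurements[-1]) / slope
    print(f"drift rate ~{slope:.3f}/run; ~{runs_to_limit:.0f} runs until UCL "
          f"-> schedule maintenance before then")
else:
    print("no upward drift detected in this window")
```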

“Another benefit of AI/ML is troubleshooting when something does trigger an alarm or alert,” adds Park. “You’ve got SPC and FDC that people are using, and a lot of them have false positives, or false alerts. In some cases, it’s as high as 40% of the alerts that you get are not relevant for what you’re doing. This is where AI/ML becomes vital. It’s never going to take false alerts to zero, but it can significantly reduce the amount of false alerts that you have.”

Engaging with these modern drift solutions, such as AI/ML-based systems, is not mere adherence to industry trends but an essential step toward sustainable semiconductor production. Beyond mitigating process drift, these technologies empower manufacturers to optimize operations and maintain the consistency of critical dimensions, enabled by intelligent analysis of extensive data and automation of complex control processes.

Conclusion
Monitoring process drift is essential for maintaining the quality of the devices being manufactured, and it also helps ensure that the entire fabrication lifecycle operates at peak efficiency. Detecting and managing process drift is a significant challenge in volume production because these variables can be subtle and may compound over time. This makes identifying the root cause of any drift difficult, particularly when measurements are only taken at the end of the production process.

Combating these challenges requires a vigilant approach to process control, regular equipment servicing, and the implementation of AI/ML algorithms that can assist in predicting and correcting for drift. In addition, fostering a culture of continuous improvement and technological adaptation is crucial. Manufacturers must embrace a mindset that prioritizes not only reactive measures, but also proactive strategies to anticipate and mitigate process drift before it affects the production line. This includes training personnel to handle new technologies effectively and to understand the dynamics of process control deeply. Such education enables staff to better recognize early signs of drift and respond swiftly and accurately.

Moreover, the integration of comprehensive data analytics platforms can revolutionize how fabs monitor and analyze the vast amounts of data they generate. These platforms can aggregate data from multiple sources, providing a holistic view of the manufacturing process that is not possible with isolated measurements. With these insights, engineers can refine their process models, enhance predictive maintenance schedules, and optimize the entire production flow to reduce waste and improve yields.

Related Reading
Tackling Variability With AI-Based Process Control
How AI in advanced process control reduces equipment variability and corrects for process drift.

The post Predicting And Preventing Process Drift appeared first on Semiconductor Engineering.

Electromigration Concerns Grow In Advanced Packages

By Laura Peters | April 18, 2024, 09:09

The incessant demand for more speed in chips requires forcing more energy through ever-smaller devices, increasing current density and threatening long-term chip reliability. While this problem is well understood, it’s becoming more difficult to contain in leading-edge designs.

Of particular concern is electromigration, which is becoming more troublesome in advanced packages with multiple chiplets, where various bonding and interconnect schemes create abrupt changes in materials and geometries. For example, electrons may travel from a copper trace to a solder bump of SAC (tin-silver-copper), then to an underbump metal based on nickel, and finally to an interposer copper pad. That, in turn, can cause atoms to shift, resulting in failures in solder joints or in copper redistribution layers in high-density fan-out packages.

“From an electromigration perspective, advanced packaging causes increased packaging density, reduced packaging size, and the dimensions of interconnects to shrink, so the current density is now in close proximity to the maximum current density limit per EM design rules,” said Dermott Lynch, director of technical product management in Synopsys’ EDA Group.

Any additional stresses the package may be subjected to during assembly and use, whether mechanical or thermal, also can help induce or accelerate electromigration. “Electromigration, in general, gets worse due to temperature and stress, both of which advanced packaging increases,” said Lynch. “Electromigration is also cumulative, so essentially it integrates all the temperature highs and stress over the lifetime until an interconnect breaks down or shorts. Larger processing temperature and operation temperature will make it worse, but it also depends on time under that temperature.”

In fact, managing thermal pathways is perhaps the greatest challenge associated with the movement toward the ultimate package, a 3D-IC. “Electromigration is very temperature-sensitive,” said Marc Swinnen, director of product marketing in Ansys’ Semiconductor Division. “Depending on your thermal map, your power integrity will have to adapt to the local temperature profile that you have. So when you look at a chip, you can calculate how much power the chip is putting out, but you cannot tell how hot the chip will get because ‘it depends.’ Is it sitting on a cold plate or sitting in the sun in the Sahara? System concerns come in, and multi-physics modeling is important to understanding these co-dependent effects.”

Thermal engineering also means moving heat away from the most vulnerable points of failure, such as solder bumps. “Effective thermal management is essential for bump reliability,” said Curtis Zwenger, vice president of engineering and technical marketing at Amkor. “Engineers are incorporating thermal enhancement techniques, such as the use of thermal interface materials and advanced heat dissipation solutions, to ensure that bumps are not subjected to excessive temperature-related stresses.”

Zwenger noted that engineers are looking into new materials, while optimizing the use of existing materials to minimize the possibility of electromigration. “Semiconductor packaging engineers are implementing a range of measures to enhance bump reliability and maximize bump yield. These strategies include new materials for solder bumps and underbump metallization, optimizing bump size, pitch and shape for reliability, advanced process control methods to control variability and maximize yield, and simulating and modeling reliability.”

What is electromigration?
Electromigration is the mass transport of metal atoms caused by the electron wind from current flowing through a conductor, typically copper. When current density is high enough, metal will diffuse in the direction of current flow, creating tiny hillocks downstream and leaving behind vacancies or voids. With enough electromigration, failures occur due to severe line thinning, causing opens, or due to hillocks that bridge adjacent lines, causing short circuits.

Electromigration is a diffusion-controlled mechanism that can take three forms — bulk, grain boundary, or surface diffusion, depending on the metal. Aluminum migrates by grain boundary diffusion whereas copper migrates on the surface or at its grain boundaries.

For most of the semiconductor industry’s history, electromigration was primarily an on-chip concern, and reliability engineers now have on-chip EM largely under control. But with the scaling and rapid developments in advanced packaging (implementing TSVs, fan-out packaging with redistribution layers, and copper pillar bumps), electromigration has emerged as a major threat at the package level. Current flowing through a solder bump causes Joule heating, and heat from other parts of the package may also dissipate through the solder bumps. EM can become an issue for solder joint connections between chip and interposer, or chip and PCB, as well as in RDLs. Solder joint failures typically manifest as voids or cracks.

Fig. 1: Electromigration can create short circuits between two interconnects through the development of hillocks, or an open circuit through the creation of voids in interconnect. Source: Ansys

Electromigration progresses more quickly at higher temperatures, at higher currents, under greater mechanical stress and in the presence of defects or impurities in the metal. Black’s equation describes an interconnect’s mean time-to-failure with respect to its temperature, current density and the activation energy needed to dislodge a metal atom as:

MTTF = (A / Jⁿ) × exp(Ea / kT)

Here J is the current density, n is the current density exponent, k is Boltzmann’s constant, T is temperature, Ea is the activation energy, and A is a constant that depends on the metal’s properties. Black’s equation is useful because it easily shows how shorter, wider interconnects will tend to have longer MTTF. In addition, electromigration time-to-failure depends very strongly on the interconnect’s temperature. That temperature is primarily the result of the chip’s environmental temperature, self-heating of the conductor caused by current flow, the heat from neighboring interconnects or transistors, and the thermal conductivity of the surrounding material.
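
As a rough illustration of how strongly temperature enters the equation, the sketch below evaluates relative MTTF from Black’s equation. The activation energy and current density exponent are assumed, representative values (n is often taken to be about 2), not data for any specific interconnect.

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def relative_mttf(j, temp_k, ea_ev=1.1, n=2.0):
    """Relative MTTF from Black's equation, MTTF ~ J^-n * exp(Ea / kT); ea_ev and n are assumed values."""
    return j ** (-n) * math.exp(ea_ev / (K_B * temp_k))

# Same current density, junction temperature rising from 105°C to 125°C.
ratio = relative_mttf(1.0, 105 + 273.15) / relative_mttf(1.0, 125 + 273.15)
print(f"MTTF drops by roughly {ratio:.1f}x for a 20°C temperature rise")
```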

It is also important to note that electromigration is a runaway process. As voids grow, the conductor’s cross-section shrinks, which raises the local current density and temperature. That, in turn, accelerates electromigration, causing more metal to migrate in a destructive feedback loop.

EM failure modes and allowable current density
In the case of copper redistribution layers in polyimide material, as current flows through the RDL, heat accumulates in the conductor due to Joule heating, which can degrade performance. As the required current density and Joule heating temperature increase in fine-line Cu RDL structures (<5µm lines and spaces), self-heating is considered a key factor in the reliability of high-density fan-out packages.

JiHye Kwon, senior manager of R&D at Amkor, recently used EM testing and Black’s equation to determine the electromigration failure mechanisms for a given RDL stack and high-density fan-out package with 2µm or 10µm wide RDL layers, 1,000µm long. [1]

High density fan-out is an emerging technology, as it features more aggressive scaling than wafer level fan-out packages. The three layers of copper RDL (3µm thick with Ta/Cu seed) were fabricated followed by polyimide fill, copper pillar deposition, die attach, and overmold. Kwon’s team tested both 2 and 10µm RDL at different current densities and temperatures until resistance increased by 100% (EM failure), but the maximum allowed current density corresponded with a 20% resistance increase. The failure modes occurred in two stages, first by void nucleation and growth and second with copper reduction and oxidation. The study yielded Ea and current density exponent values that can be useful in future designs of RDLs.

Meanwhile, a team of researchers from ASE recently demonstrated how susceptibility to electromigration is determined for copper pillar interconnects in flip-chip quad flat no-lead (FCQFN) packages for high-power automotive applications. The multi-layered copper pillar bumps with a Cu/Ni/Sn1.8Ag configuration were bonded to a silver-plated copper leadframe and tested under extreme EM conditions of 10 kA/cm² current density and temperatures of 150°C, 160°C and 180°C, while taking in-situ resistance measurements. [2] The EM failures appeared as rapid rises in electrical resistance, coinciding with the formation of intermetallic compounds and voids at the Cu/solder interfaces. The team built an EM prediction model of the interconnects based on a Black-type EM equation, following the JEDEC standard with five test conditions.

From the statistical analysis of sample lifetimes, the ASE team determined the activation energy of the Cu pillar interconnects in the FCQFN package to be 1.12 ± 0.03 eV. The maximum allowable current for a 10-year lifetime at a 105°C operating temperature and a 0.1% failure rate was greater than 2A for the FCQFN Cu pillar structure. “The FCQFN package has great potential in terms of its excellent anti-EM performance for future high-power applications,” the article said.

Designing/manufacturing for EM resiliency
Building electromigration resilience into advanced devices begins with using only EM-compliant linewidths in circuit designs based on the current density and heat profile that the interconnects will experience during operation over the lifetime of the device. Electromigration mitigation also requires process and materials engineering to ensure durability, for instance, of copper pillar bumps under BGA packages. It also calls for an optimized assembly process window and tight process control to prevent tiny violations of design rules that can later precipitate as EM failures.

As the industry makes its way toward true 3D packages, and eventually 3D-ICs, it seems clear that modeling and simulation will play an increasing role in setting many of the guard rails before manufacturing and assembly even begin. “Reliability modeling and simulation tools are being used to better understand the reliability of bump structures. This proactive approach helps in identifying potential issues before they arise, enabling engineers to implement preventive measures,” said Zwenger.

Modeling and simulation at the system level also will be essential to understanding the complex interplay between reliability mechanisms with thermal and mechanical stress in multi-chiplet systems during operation.

“Electromigration for stacked die is challenging,” said Synopsys’ Lynch. “Localized, die-to-die workloads cause repetitive current flow in specific areas. This generates local heat, increasing EM resulting in wire degradation, while producing even more heat. Reducing the thermal issue becomes critical to ensuring EM reliability.”

As stated previously, solder bumps can become a site for EM reliability failure. “Engineers fine-tune bump design in terms of bump size, pitch, and shape to ensure uniformity and reliability across the entire package. This includes the adoption of innovative Cu bump structures for improved mechanical and electrical properties,” said Amkor’s Zwenger.

In flip-chip BGA and other flip-chip applications, underfill materials — typically thermoset epoxies — are used to reduce the thermal stresses on solder bumps. “Underfill materials play a critical role in providing mechanical support and thermal stability to the bumps,” Zwenger said. “Engineers are investing in the development of advanced underfill formulations with enhanced properties, such as improved adhesion, thermal conductivity, and stress relief.”

Conclusion
Because of its dependence on temperature, electromigration is a failure mechanism to watch and plan for as devices continue to scale and systems integrators continue to cram more and more chiplets of various functions into advanced packages.

“In advanced technologies, the current density is now in close proximity to the maximum density,” said Synopsys’ Lynch. “Anything that causes an increase in temperature poses a threat. Designers of multi-die systems need to understand the impact of temperature and design systems to remove the heat.”

References

  1. JiHye Kwon, “Electromigration Performance Of Fine-Line Cu Redistribution Layer (RDL) For HDFO Packaging,” Semiconductor Engineering, Jan. 18, 2024, https://semiengineering.com/electromigration-performance-of-fine-line-cu-redistribution-layer-rdl-for-hdfo-packaging/
  2. -Y. Tsai, et al., “An Electromigration Study of Cu Pillar Interconnects in Flip-chip QFN Packaging under Extreme Conditions for High-power Applications,” 2023 IEEE 25th Electronics Packaging Technology Conference (EPTC), Singapore, 2023, pp. 326-332, doi: 10.1109/EPTC59621.2023.10457564.

Related Reading
What Can Go Wrong In Heterogeneous Integration
Workflows and tools are disconnected, mechanical stress is ill-defined, and complete co-planarity is nearly impossible. But there are solutions on the horizon.
Thermal Integrity Challenges Grow In 2.5D
Work is underway to map heat flows in interposer-based designs, but there’s much more to be done.
Chiplets: 2023 (EBook)
What chiplets are, what they are being used for today, and what they will be used for in the future.

The post Electromigration Concerns Grow In Advanced Packages appeared first on Semiconductor Engineering.

What Works Best For Chiplets

By Anne Meixner
April 18, 2024, 09:08

The semiconductor industry is preparing for the migration from proprietary chiplet-based systems to a more open chiplet ecosystem, in which chiplets fabricated by different companies of various technologies and device nodes can be integrated in a single package with acceptable yield.

To make this work as expected, the chip industry will have to solve a variety of well-documented technical and business issues, and it will have to rein in some of the grander visions of what’s possible — at least initially. The basic challenge is aligning domain-specific performance demands of end systems, which contain a growing number of chiplets, with the assembly and packaging capabilities and methodologies of IDMs, foundries, and OSATs. This includes the creation of assembly development kits (ADKs) that are roughly the equivalent of process development kits (PDKs), which today are codified with manufacturing specifications.

A PDK provides the appropriate level of detail needed to develop planar chips, marrying design tools with fab processes to achieve a predictable outcome. But making this work for an ADK with heterogeneous chiplets is many times more complex. Design and assembly teams need to manage thermal, mechanical, and electrical co-dependencies that cause electrical and mechanical stress, resulting in warpage, reduced yield, and reliability issues under real-world workloads. Layered on top of this are the business and legal issues related to packaging different devices from different manufacturers.

“Chiplets are a growing trend, especially in the HPC and networking segments, with potential to scale to other applications,” said Gabriela Pereira, technology and market analyst for semiconductor packaging at Yole Intelligence. “The industry has understood that high-end advanced packaging technologies are needed to connect them — but that’s much more complex than it seems. Connecting chiplets requires the design of high-bandwidth interconnections at the package level, which can take different forms — e.g., 2D, 2.5D or 3D — while ensuring that the thermal and power requirements are fulfilled.”

Commercial chiplet-based devices generally are domain-specific, and sometimes developed for a specific workload. So despite a big industry push to create a LEGO-like mix-and-match ecosystem for chiplets — which today includes multiple IP and EDA vendors, foundries, memory suppliers, OSATs, substrate suppliers, etc. — making this work as planned will require time and a massive amount of work.

Fig. 1: System assembly requires tighter coupling between chipmakers and OSATs. Source: ASE

In creating heterogeneous integrated designs, it’s essential to have much tighter collaboration between foundries, IDMs, OSATs, and PCB manufacturers. And because each chiplet-based system will be customized, the number of assembly processes will grow substantially. For example, one OSAT noted that among its ~5,000 customers, there are ~1,000 different assembly processes.

That diversity in products and processes makes it difficult to achieve predictable results by choosing chiplets from a large menu of options.

“We’ve already encountered a lot of limitations including not only the silicon, but also integration and the ecosystem,” said Lihong Cao, senior director at ASE Group, at MEPTEC’s Road to Chiplets forum. She stressed that customers continue to push for a low-cost chiplet assembly process, which is creating constructive tension between developing a sophisticated assembly process and the economic realities of different industry sectors. Computing devices for automotive have a higher cost sensitivity than for data centers, for example, but their chips operate in a harsher environment over a longer lifetime.

What’s needed is a defined set of assembly process recipes — basically, a highly limited menu of choices — that are specific to the end application (HPC, automotive, RF telecommunications) in order to lower the cost of chiplet-based systems. OSATs and foundries already are moving in that direction for high-performance computing. For example, at its 2024 Direct Connect event, Intel shared its six different package processes for chiplets. TSMC and Samsung also offer defined sets of chiplet processes. But the success of these assembly processes requires engineering teams to co-optimize the flows, processes, and materials to best match the system requirements.

Fig. 2: Integrated platform development requires tightly coupled architectural analysis that co-optimizes the system design to architecture to assembly process and packaging material selections. Source: Applied Materials

“Previously, when we designed a system we only had to be worried about the system requirements. Once we start segregating into dies and reassembling them, we have to start looking at other things. We have to worry about putting them together while considering signal integrity between dies, reliability, thermals, etc.,” said Itai Leshniak, director of AI systems solutions at Applied Materials, at the MEPTEC forum. “If we take the case of AI-based computer vision, we can break it down layer by layer. On the hardware side, that means determining which computer vision processors, sensors, and filters are needed at each layer of the architecture. Then we begin to go through how to package all these chiplets, and then which materials to use and how to take advantage of those materials.”

Materials and assembly processes
Conceptually, design engineers will use chiplets to design a system. However, the co-design and integration is far more complicated than assembling a set of LEGO blocks, because the chiplets, interposers, and package substrates come from different design houses and manufacturing facilities. The advanced packaging technologies used to connect chiplets vary with an alphabet soup of names — FOWLP, FOPLP, CoWoS, etc., each of which poses additional design and material choices along with certain process limitations.

Fig. 3: There are a multitude of choices in multi-die packaging from the high-level layout to substrates, materials, bonding methods, and cooling materials. Source: Synopsys

Currently engineering teams determine the tradeoffs among the different packaging options to select materials, derive a process recipe, and determine design rules.

Materials are a good starting point. “Materials are very important because they enable new products and packaging technologies,” said Tanja Braun, deputy group manager at the Fraunhofer Institute for Reliability and Microintegration IZM. “As you move into more advanced packaging, process is getting much more complex because you are putting more things together. In the end, it’s a combination of equipment, materials, and process development.”

There are three thermal parameters that are critical in package assembly processes: coefficients of thermal expansion (CTE), glass transition temperature (Tg), and thermal conductivity. These factors affect how a material behaves during manufacturing and packaging processes, as well as how it behaves in the field.

“Challenges for our materials include temperature limitations of different die,” said Rama Puligadda, CTO at Brewer Science. “We have to ensure that the temperatures used for bonding materials don’t exceed the thermal limitations of any of the chips that are being integrated into the package. Additionally, there may be some subsequent processes like redistribution layer (RDL) formation or molding. Our materials have to survive those processes. They have to survive the chemicals they come in contact with throughout the packaging process scheme. Mechanical stresses in the package add additional challenges for bonding materials.”

Within a stack of chiplets-on-substrate with an optional interposer, the material attributes also affect the thermal-mechanical stresses between neighboring materials. This directly impacts interconnect dimensional control over a large substrate area.

“If you go work the numbers, you will find that the level of tolerance and control required is frightening,” said Dick Otte, CEO of Promex Industries. “You’re talking about controlling dimensions equivalent to the width of a grass blade over the length of a football field, so that’s roughly 1 in 100,000.”

The goal is uniform heating of the structure in reflow in order to attain the best process results and to avoid cracking. “When you’re taking it through a 250 degrees centigrade temperature change, then you need to heat up slowly so that the top doesn’t get hot before the bottom does,” said Otte.

Multi-physics to comprehend co-optimization
Multi-physics modeling has become the go-to method for co-optimizing packaging design and assembly process development. That affects both permanent and temporary materials, as well as the placement of processors, memories, and other components.

“You’re always looking at what the customer needs electrically, because that’s going to help define the material set. The material set is broadly applicable to a bunch of speed ranges. As long as you don’t step outside of those electrical specifications, theoretically you should be okay,” said Mike Kelly, vice president of advanced package and technology integration at Amkor Technology.

To save many iterations of empirically based development, engineers can use physics-based simulations to understand the impact of a material set’s properties on the assembly process, power/thermals, and mechanical vibrations.

Consider that HPC chiplet products can consume ~1,000 watts at peak performance, so the power and thermal interactions need to be fully understood.
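
For a sense of scale, a simple lumped thermal estimate shows why kilowatt-class packages force careful thermal design. The thermal resistance used below is a hypothetical value, not a figure for any real cooling solution.

```python
# Lumped junction-to-ambient estimate: delta_T = P * R_theta.
power_w = 1000.0        # approximate peak power of an HPC chiplet product
r_theta_c_per_w = 0.03  # assumed effective junction-to-ambient thermal resistance, °C/W
ambient_c = 40.0        # assumed ambient/coolant temperature, °C

junction_c = ambient_c + power_w * r_theta_c_per_w
print(f"Estimated junction temperature: {junction_c:.0f} °C")
```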

“We’ve struggled, as everybody has, with this blizzard of complexity in the different techniques. Not only do they vary across different vendors, but they’re also varying over time,” said Marc Swinnen, director of product marketing at Ansys. “Our approach has been to identify the essentials that need to be worked on. We work jointly with customers to develop a simulation flow that actually achieves what is needed now.”

Materials are just one piece of the puzzle. “Then there’s the assembly stresses that need to be modeled to know whether you can correctly assemble this device. The third one is mechanical vibration,” Swinnen said. “Can your device withstand those regular vibrations? Modeling these attributes ties directly into our mechanical analysis tools — acoustic, thermal, vibration, etc. In the end, you’re going to have to do physics simulation. We’re trying to make it accessible to people in many different forms. But the bedrock of our tool offerings is that we have the meshing simulation and analysis. It’s a question of getting the data in the right format in a way that’s practical and usable.”

Evolving assembly design kits
For conventional packages, OSATs provide design rules for each packaging technology. These need to consider electrical, mechanical, and thermal design requirements, as well as manufacturing process limitations. In effect, this is a multi-dimensional bounding box. Suppliers perform iterations with the customer to create a product-specific process recipe.

Rules cover the macro-level attributes. “At a minimum, what you see from design rules is maximum package size, maximum silicon size, and whether silicon can be [mounted] on both sides of the substrate, such that when you follow these constructions the final product will have a lifetime of 1,000 thermal cycles, for example,” said Fraunhofer’s Braun.

In addition, design rules need to describe routing constraints for the interposer and/or redistribution layer, such as RDL line widths and spaces, ball-grid/pillar/pad size and pitches, and the maximum number of interconnections.
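
Below is a hypothetical sketch of how a few such constraints could be captured and checked programmatically. The rule names and numeric limits are invented for illustration and are not taken from any OSAT's actual kit.

```python
# Hypothetical ADK-style rule set for one fan-out RDL layer (values are illustrative only).
ADK_RULES = {
    "rdl_min_line_width_um": 2.0,
    "rdl_min_line_space_um": 2.0,
    "min_bump_pitch_um": 40.0,
    "max_die_to_die_connections": 100_000,
}

def check_layout(layout, rules=ADK_RULES):
    """Return a list of rule violations for a simplified layout description."""
    violations = []
    if layout["rdl_line_width_um"] < rules["rdl_min_line_width_um"]:
        violations.append("RDL line width below minimum")
    if layout["rdl_line_space_um"] < rules["rdl_min_line_space_um"]:
        violations.append("RDL line space below minimum")
    if layout["bump_pitch_um"] < rules["min_bump_pitch_um"]:
        violations.append("bump pitch below minimum")
    if layout["die_to_die_connections"] > rules["max_die_to_die_connections"]:
        violations.append("too many die-to-die connections for this package")
    return violations

print(check_layout({
    "rdl_line_width_um": 2.0,
    "rdl_line_space_um": 1.8,        # violates the assumed spacing rule
    "bump_pitch_um": 45.0,
    "die_to_die_connections": 80_000,
}))
```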

Breaking up a monolithic HPC device into multiple dies shifts some of the semiconductor design/process complexity into the packaging space. That makes things much more complicated. Consider that connecting 10 dies can require on the order of 100,000 traces within the interposer’s or substrate’s redistribution layer.

To cope with the complexity at the chip level, the IC industry has long relied upon process design kits (PDKs) to capture design rules in an electronic file that can be imported into EDA tools. Their counterparts, assembly design kits (ADKs), are relatively immature.

“We call it Smart Package,” said Amkor’s Kelly. “It’s an ADK that we give to every customer who’s doing their own design. It is a set of macros, and a customization of a database tailored to a customer’s particular design. For chiplets, it is a high-density fan-out package technology. And it’s cognizant of the limitations for metal density and metal spacing, etc. This makes it easier for us to do design rule checks (DRCs).”

But right now, with the level of customization still required, how an ADK is derived and what it entails is in flux. Partnerships between EDA tool vendors, OSATs, and semiconductor device providers are required.

“We come from the IC world where everything is very rigid,” said Kenneth Larsen, director of 3D-IC product management in Synopsys‘ EDA Group. “On the OSAT side, and maybe this is because it’s so custom, design rules seem like a data sheet. Then you build and optimize the products over time or in collaboration with the OSAT. It’s not an electronic exchange. In the IC world, this would be totally unheard of. While it is possible to tweak a few things, you have a qualification process. And it seems like that’s not there yet for packaging.”

Materials and associated assembly recipes ultimately drive what’s possible for a chiplet-substrate stack in terms of pillar pitch, RDL line widths and spaces, bonding processes, and chiplet placement tolerances. But within a handful of ADKs, there are many possible interactions to consider.

The current focus is on co-optimizing the system design with the chiplet assembly process, leading to an assembly process development flow (see figure 4). This flow considers the needs of customization of an assembly process, and it creates the necessary design rules to be used by package designers.

Fig. 4: Chip-package hybrid flow. Source: ASE

“First you need to define your structure using chiplets. Are you using substrate RDL, 2.5D RDL, or a bridge? After that you need to consider your structure’s materials. What kind of material do you choose to fulfill your electrical performance and the mechanical stress requirements,” said Cao. “After that, you do pre-analysis to ensure all the structures and materials you use are workable in terms of electrical, warpage and mechanical stress.”

The design planning flow also includes the evaluation of die-to-die interconnects, which is documented for co-design sign-off.

Conclusion
Before chiplet-based designs can be enabled outside the IDM model, the industry needs to complete the ecosystem that bridges the manufacturing and design complexity, because it is essential to co-optimize the system architecture based on materials, process, and integration capabilities. While this would be easier with a set of well-defined products for the chiplet ecosystem to drive forward on, that has not happened yet.

Engineering teams across the design and manufacturing stack will need to collaborate to choose the appropriate materials, architectures, processes, etc., to develop a final chiplet-based product that is designable. As ASE group’s Cao noted, “An integrated design and manufacturing ecosystem is important. It is very critical to have collaboration among IDM, vendors, materials suppliers. Everyone needs to work together to really enable integration for the real applications.”

Related stories
Fan-Out Packaging Gets Competitive
Manufacturability reaches sufficient level to compete with flip-chip BGA and 2.5D.

Inside Panel-Level Fan-Out Technology
Fraunhofer’s panel experts dig into why this approach is needed and where the challenges are to making it work.

Next Steps For Panel-Level Packaging
Where it’s working, and what challenges remain for even broader adoption.

Mini-Consortia Forming Around Chiplets
Commercial chiplet marketplaces are still on the distant horizon, but companies are getting an early start with more limited partnerships.

What Can Go Wrong In Heterogeneous Integration
Workflows and tools are disconnected, mechanical stress is ill-defined, and complete co-planarity is nearly impossible. But there are solutions on the horizon.

Mechanical Challenges Rise With Heterogeneous Integration
But gaps in tools make it difficult to address warpage, structural issues, and new materials in multi-die/multi-chiplet designs.

Chiplets: 2023 (EBook)
What chiplets are, what they are being used for today, and what they will be used for in the future.

The post What Works Best For Chiplets appeared first on Semiconductor Engineering.

Enabling Advanced Devices With Atomic Layer Processes

By Katherine Derbyshire

Atomic layer deposition (ALD) used to be considered too slow to be of practical use in semiconductor manufacturing, but it has emerged as a critical tool for both transistor and interconnect fabrication at the most advanced nodes.

ALD can be speeded up somewhat, but the real shift is the rising value of precise composition and thickness control at the most advanced nodes, which makes the extra time spent on deposition worthwhile.

ALD is a close cousin of chemical vapor deposition, initially introduced in high volume to the semiconductor industry for hafnium oxide (high-k) gate dielectrics. Both CVD and ALD are inherently conformal processes. Deposition occurs on all surfaces exposed to a precursor gas. In ALD, though, the reaction is self-limiting.

The process works like this: First, a precursor gas (A) is introduced into the process chamber, where it adsorbs onto all available substrate sites. No further adsorption occurs once all surface sites are occupied. An inert purge gas, typically nitrogen or argon, flushes out any remaining precursor gas, then a second precursor (B) is introduced. Precursor B reacts with the chemisorbed precursor A to produce the desired film. Once all of the adsorbed molecules are consumed, the reaction stops. After a second purge step, the cycle repeats.

ALD opportunities expand as features shrink
The step-by-step nature of ALD is both its strength and its weakness. Depositing one monolayer at a time gives manufacturers extremely precise thickness control. Using different precursor gases in different ratios can tune the film composition. Unfortunately, the repeated precursor/purge gas cycles take a lot of time. In an interview, CEA-Leti researcher Rémy Gassilloud estimated that in a single wafer process, two minutes per wafer is the maximum cost-effective process time. But two minutes is only enough time to deposit about a 2nm-thick film.

Some process adjustments can improve throughput. Silicon dioxide ALD often uses large furnaces to process many wafers at once. Plasma activation can ionize reagents and accelerate film formation. Still, Gassilloud estimates that 10nm is the maximum practical thickness for ALD films.
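
A back-of-the-envelope sketch of that throughput ceiling, using assumed, typical-order step times and a growth rate of roughly one angstrom per cycle, is shown below.

```python
# Back-of-the-envelope ALD throughput estimate (all values are assumed, typical-order numbers).
GROWTH_PER_CYCLE_NM = 0.1  # ~1 angstrom per A-pulse/purge/B-pulse/purge cycle
STEP_TIMES_S = {"pulse_A": 1.0, "purge_1": 2.0, "pulse_B": 1.0, "purge_2": 2.0}

cycle_time_s = sum(STEP_TIMES_S.values())   # 6 s per full cycle
budget_s = 2 * 60                           # ~2 minutes per wafer cost ceiling
cycles = int(budget_s // cycle_time_s)
thickness_nm = cycles * GROWTH_PER_CYCLE_NM
print(f"{cycles} cycles in {budget_s} s -> ~{thickness_nm:.1f} nm of film")
```

With these assumptions, the two-minute budget yields only about 2nm of film, consistent with the estimate above; batch furnaces and plasma activation raise the ceiling, but not by orders of magnitude.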

As transistors shrink, though, the number of layers in that thickness range is increasing. Transistor structures also are becoming more complex, requiring deposition on vertical surfaces, into deep trenches, and other places not readily accessible by line-of-sight PVD methods. Replacement gates for gate-all-around transistors, for instance, need a process that can fill nanometer-scale cavities.

As noted above, HfO2 was the first successful application of ALD in semiconductor manufacturing. Its precursors, HfCl4 and water, are both chemically simple small molecules, whose by-products are volatile and easily removed. Such simple chemistries are the exception, though. ALD of silicon dioxide typically uses aminosilane precursors.⁠[1] Metal nitrides often have complex metal-organic precursor gases. Gassilloud noted that ligands might be added to a precursor molecule to change its vapor pressure or reactivity, or to facilitate adhesion to the substrate. In selective deposition processes, discussed below, ligands might improve selectivity between growth and non-growth surfaces. These larger molecules can be difficult to insinuate into smaller features, and byproducts can be difficult to remove. Complex byproducts can also become a contamination source.

One of the advantages of ALD is its very low process temperature, typically between 200°C and 300°C. It is thermally compatible with both transistor and interconnect processes in CMOS, as well as with deposition on plastic and other novel substrates. Even so, Aditya Kumar and colleagues at GlobalFoundries showed that precise temperature control is important.[2] TDMAT (tetrakis(dimethylamino)titanium) condensation in a TiN deposition process was a significant source of particle defects. To maintain the desired process temperature, both the precursor and purge gas temperatures matter. Introducing cold purge gas into a warm process chamber can cause rapid condensation.

As ALD has become a mainstream process, the industry has found applications for it beyond core device materials, in a variety of sacrificial and spacer layers. For example, double- and quadruple-patterning schemes often use ALD for “pitch-doubling.” By depositing a spacer material on either side of a patterned “mandrel,” then removing the mandrel, the process can cut the original pitch in half without the need for an additional, more costly lithography step.[3]

Fig. 1: Self-aligned double patterning with ALD spacers. Source: IOPScience / Creative Commons

Depositing a doped oxide on the vertical silicon fins of a finFET device is a less directional and less damaging alternative to ion implantation.[4]

Selective deposition brings lateral control
These last two examples depend on surface characteristics to mediate deposition. A precursor might adhere more readily to a hard mask than to the underlying material. The vertical face of a silicon fin might offer more (or fewer) adsorption sites than the horizontal face. Selective deposition on more complicated structures may require a pre-deposited growth template, functionalizing substrate regions to encourage or discourage growth. Selective deposition is especially important in interconnect applications. In general, though, a comprehensive review by Rong Chen and colleagues at Huazhong University of Science and Technology explained that selective deposition methods need to replenish the template material as the film grows, along with a mechanism to selectively remove the unwanted material.⁠[5]

For example, tungsten preferentially deposits on silicon relative to SiO2, but the selectivity diminishes after only a few cycles. Researchers at North Carolina State University successfully re-passivated the oxide by incorporating hydrogen into the tungsten precursor.[⁠6] Similarly, a group at Eindhoven University of Technology found that SiO2 preferentially deposited on SiO2 relative to other oxides for only 10 to 15 cycles. A so-called ABC-cycle — adding acetylacetone (“Inhibitor A”) as an inhibitor every 5 to 10 cycles — restored selectivity.⁠[7]

Alternatively, or in addition, atomic layer etching (ALE) might be used to remove unwanted material. ALE operates in the same step-by-step manner as ALD. The first half of a cycle reacts with the existing surface, weakening the bond to the underlying material. Then, a second step — typically ion bombardment — removes the weakened layer. For example, in ALE etching of silicon, chlorine gas reacts with the surface to form various SiClx compounds. The chlorination process weakens the inter-silicon bonds between the surface and the bulk, and the chlorinated layer is easily sputtered away. The layer-by-layer nature of ALE depends on preferential removal of the surface material relative to the bulk (SiClx vs. Si in this case). The “ALE window” is the combination of energy and temperature at which the surface layer is completely removed without damaging the underlying material.
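
The idea of an ALE window can be expressed as a simple predicate over ion energy, as in the sketch below. The energy thresholds are invented for illustration; real windows depend on the chemistry, the material, and temperature.

```python
# Illustrative ALE window check. The threshold energies are invented for illustration only.
# Below e_min the modified (e.g., chlorinated) surface layer is not fully removed;
# above e_max the ions begin to sputter the underlying bulk material.

def in_ale_window(ion_energy_ev, e_min_ev=40.0, e_max_ev=70.0):
    """True if removal is self-limiting at this ion energy, under the assumed thresholds."""
    return e_min_ev <= ion_energy_ev <= e_max_ev

for energy in (25, 50, 90):
    status = "inside window" if in_ale_window(energy) else "outside window"
    print(f"{energy} eV: {status}")
```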

Somewhat counter-intuitively, Keren Kanarik and colleagues at Lam Research found that higher ion energies actually expanded the ALE window for silicon etching. High ion energies with short exposure times delayed the onset of silicon sputtering relative to conventional RIE.[8]

Adding and subtracting, one atomic layer at a time
For a long time, the semiconductor industry has been looking for alternatives to process schemes that deposit material, pattern it, then etch most of it away. Wouldn’t it be simpler to only deposit the material we will ultimately need? Meanwhile, atomic layer deposition has been filling the spaces under nanosheets and inside cavities. Bulk deposition and etch tools are still with us, and will be for the foreseeable future. In more and more cases, though, those tools provide the frame while ALD and ALE processes fill in the details.

Correction: Corrected attribution of the work on ABC cycles and selective deposition of SiO2.

References

  1. Wenling Li, et al., “Impact of aminosilane and silanol precursor structure on atomic layer deposition process,” Applied Surface Science, Vol. 621, 2023, 156869, https://doi.org/10.1016/j.apsusc.2023.156869.
  2. Kumar, et al., “ALD TiN Surface Defect Reduction for 12nm and Beyond Technologies,” 2020 31st Annual SEMI Advanced Semiconductor Manufacturing Conference (ASMC), Saratoga Springs, NY, USA, 2020, pp. 1-4, doi: 10.1109/ASMC49169.2020.9185271.
  3. Shohei Yamauchi, et al., “Extendibility of self-aligned type multiple patterning for further scaling”, Proc. SPIE 8682, Advances in Resist Materials and Processing Technology XXX, 86821D (29 March 2013); https://doi.org/10.1117/12.2011953
  4. Kalkofen, et al., “Atomic layer deposition of phosphorus oxide films as solid sources for doping of semiconductor structures,” 2018 IEEE 18th International Conference on Nanotechnology (IEEE-NANO), Cork, Ireland, 2018, pp. 1-4, doi: 10.1109/NANO.2018.8626235.
  5. Rong Chen et al., “Atomic level deposition to extend Moore’s law and beyond,” 2020 Int. J. Extrem. Manuf. 2 022002 DOI 10.1088/2631-7990/ab83e0
  6. B Kalanyan, et al., “Using hydrogen to expand the inherent substrate selectivity window during tungsten atomic layer deposition,” 2016 Chem. Mater. 28 117–26 https://doi.org/10.1021/acs.chemmater.5b03319
  7. Alfredo Mameli et al., “Area-Selective Atomic Layer Deposition of SiO2 Using Acetylacetone as a Chemoselective Inhibitor in an ABC-Type Cycle” ACS Nano 2017, 11, 9, 9303–9311. https://doi.org/10.1021/acsnano.7b04701
  8. Keren J. Kanarik, et al., “Universal scaling relationship for atomic layer etching,” J. Vac. Sci. Technol. A 39, 010401 (2021); doi: 10.1116/6.0000762

The post Enabling Advanced Devices With Atomic Layer Processes appeared first on Semiconductor Engineering.

Powering The Automotive Revolution: Advanced Packaging For Next-Generation Vehicle Computing

By Prasad Dhond
April 18, 2024, 09:06

Automotive processors are rapidly adopting advanced process nodes. NXP announced the development of 5 nm automotive processors in 2020 [1], Mobileye announced EyeQ Ultra using 5 nm technology during CES 2022 [2], and TSMC announced its “Auto Early” 3 nm processes in 2023 [3]. In the past, the automotive industry was slow to adopt the latest semiconductor technologies due to reliability concerns and lack of a compelling need. Not anymore.

The use of advanced processes necessitates the use of advanced packaging as seen in high performance computing (HPC) and mobile applications because [4][5]:

  1. While transistor density has skyrocketed, I/O density has not increased proportionally and is holding back chip size reductions.
  2. Processors have heterogeneous, specialized blocks to support today’s workloads.
  3. Maximum chip sizes are limited by the slowdown of transistor scaling, photo reticle limits and lower yields.
  4. Cost per transistor improvements have slowed down with advanced nodes.
  5. Off-package dynamic random-access memory (DRAM) throttles memory bandwidth.

These have been drivers for the use of advanced packages like fan-out in mobile and 2.5D/3D in HPC. In addition, these drivers are slowly but surely showing up in automotive compute units in a variety of automotive architectures as well (see figure 1).

Fig. 1: Vehicle E/E architectures. (Image courtesy of Amkor Technology)

Vehicle electrical/electronic (E/E) architectures have evolved from 100+ distributed electronic control units (ECUs) to 10+ domain control units (DCUs) [6]. The most recent architecture introduces zonal or zone ECUs that are clustered in physical locations in cars and connect to powerful central computing units for processing. These newer architectures improve scalability, cost, and reliability of software-defined vehicles (SDVs) [7]. The processors in each of these architectures are more complex than those in the previous generation.

Multiple cameras, radar, lidar and ultrasonic sensors and more feed data into the compute units. Processing and inferencing this data require specialized functional blocks on the processor. For example, the Tesla Full Self-Driving (FSD) HW 3.0 system on chip (SoC) has central processing units (CPUs), graphic processing units (GPUs), neural network processing units, Low-Power Double Data Rate 4 (LPDDR4) controllers and other functional blocks – all integrated on a single piece of silicon [8]. Similarly, Mobileye EyeQ6 has functional blocks of CPU clusters, accelerator clusters, GPUs and an LPDDR5 interface [9]. As more functional blocks are introduced, the chip size and complexity will continue to increase. Instead of a single, monolithic silicon chip, a chiplet approach with separate functional blocks allows intellectual property (IP) reuse along with optimal process nodes for each functional block [10]. Additionally, large, monolithic pieces of silicon built on advanced processes tend to have yield challenges, which can also be overcome using chiplets.

Current advanced driver-assistance systems (ADAS) applications require a DRAM bandwidth of less than 60GB/s, which can be supported with standard double data rate (DDR) and LPDDR solutions. However, ADAS Level 4 and Level 5 will need up to 1024 GB/s memory bandwidth, which will require the use of solutions such as Graphic DDR (GDDR) or High Bandwidth Memory (HBM) [11][12].
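
The gap between those two bandwidth targets is easier to see with some rough arithmetic. The per-channel and per-stack figures below are approximate, assumed peak values rather than datasheet numbers.

```python
import math

TARGET_GB_S = 1024  # approximate L4/L5 memory bandwidth target from above

# Approximate peak bandwidths per memory building block (assumed values).
options_gb_s = {
    "LPDDR5 x16 channel (6400 MT/s)": 12.8,
    "GDDR6 x32 device (16 Gbps/pin)": 64.0,
    "HBM3 stack (1024-bit, 6.4 Gbps/pin)": 819.2,
}

for name, bw in options_gb_s.items():
    print(f"{name}: ~{math.ceil(TARGET_GB_S / bw)} needed to reach {TARGET_GB_S} GB/s")
```

Even with generous assumptions, standard LPDDR channels alone cannot practically reach that target, which is why GDDR or HBM enters the picture.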

Fig. 2: Automotive compute package roadmap. (Image courtesy of Amkor Technology)

Automotive processors have been using Flip Chip BGA (FCBGA) packages since 2010. FCBGA has become the mainstay of several automotive SoCs, such as EyeQ from Mobileye, Tesla FSD and NVIDIA Drive. Consumer applications of FCBGA packaging started around 1995 [13], so it took more than 15 years for this package to be adopted by the automotive industry. Computing units in the form of multichip modules (MCMs) or System-in-Package (SiP) have also been in automotive use since the early 2010s for infotainment processors. The use of MCMs is likely to increase in automotive compute to enable components like the SoC, DRAM and power management integrated circuit (PMIC) to communicate with each other without sending signals off-package.

As cars move to a central computing architecture, the SoCs will become more complex and run into size and cost challenges. Splitting these SoCs into chiplets becomes a logical solution and packaging these chiplets using fan-out or 2.5D packages becomes necessary. Just as FCBGA and MCMs transitioned into automotive from non-automotive applications, so will fan-out and 2.5D packaging for automotive compute processors (see figure 2). The automotive industry is cautious but the abovementioned architecture changes are pushing faster adoption of advanced packages. Materials, processes, and factory controls are key considerations for successful qualification of these packages in automotive compute applications.

In summary, the automotive industry is adopting advanced semiconductor technologies, such as 5 nm and 3 nm processes, which require the use of advanced packaging due to limitations in I/O density, chip size reductions, and memory bandwidth. Processors in the latest vehicle E/E architectures are more complex and require specialized functional blocks to process data from multiple sensors. As cars move to the central computing architecture, the SoCs will become more complex and run into size and cost challenges. Splitting these SoCs into chiplets becomes a logical solution and packaging these chiplets using fan-out or 2.5D technology becomes necessary.

Sources

  1. NXP. “NXP Selects TSMC 5nm Process for Next-Generation High-Performance Automotive Platform.” NXP, https://www.nxp.com/company/about-nxp/nxp-selects-tsmc-5nm-process-for-next-generation-high-performance-automotive-platform:NW-TSMC-5NM-HIGH-PERFORMANCE.
  2. Mobileye. “Mobileye at CES 2022.” Mobileye, https://www.mobileye.com/news/mobileye-ces-2022-tech-news/.
  3. Business Wire. “TSMC Showcases New Technology Developments at 2023 Technology Symposium.” Business Wire, https://www.businesswire.com/news/home/20230426005359/en/TSMC-Showcases-New-Technology-Developments-at-2023-Technology-Symposium.
  4. Swaminathan, Raja. “Advanced Packaging: Enabling Moore’s Law’s Next Frontier Through Heterogeneous Integration.” HotChips33, https://hc33.hotchips.org/assets/program/tutorials/2021%20Hot%20Chips%20AMD%20Advanced%20Packaging%20Swaminathan%20Final%20%2020210820.pdf
  5. SemiAnalysis. “Advanced Packaging Part 1” SemiAnalysis, https://www.semianalysis.com/p/advanced-packaging-part-1-pad-limited?utm_source=%2Fsearch%2Fadvanced%2520packaging&utm_medium=reader2.
  6. McKinsey & Company. “Getting Ready for Next-Generation EE Architecture with Zonal Compute.” McKinsey & Company, https://www.mckinsey.com/industries/semiconductors/our-insights/getting-ready-for-next-generation-ee-architecture-with-zonal-compute.
  7. NXP. “How Zonal E/E Architectures with Ethernet are Enabling Software-Defined Vehicles.” NXP, https://www.nxp.com/company/blog/how-zonal-e-e-architectures-with-ethernet-are-enabling-software-defined-vehicles:BL-HOW-ZONAL-EE-ARCHITECTURES.
  8. WikiChip. “Tesla (Car Company)/FSD Chip.” WikiChip, https://en.wikichip.org/wiki/tesla_(car_company)/fsd_chip.
  9. Mobileye. “EyeQ Chip.” Mobileye, https://www.mobileye.com/technology/eyeq-chip/.
  10. Ziadeh, Bassam. “Driving Adoption of Advanced IC Packaging in Automotive Applications.” Presentation at IMAPS DPC, March 2023. General Motors, Fountain Hills AZ, March 16, 2023.
  11. K Matthias Jung and Norbert Wehn. “Driving Against the Memory Wall: The Role of Memory for Autonomous Driving.” Fraunhofer IESE, Kaiserslautern, Germany, and Microelectronic Systems Design Research Group, University of Kaiserslautern, Kaiserslautern, Germany. Kluedo, https://kluedo.ub.rptu.de/frontdoor/deliver/index/docId/5286/file/_memory.pdf.
  12. Micron. “Cinco de Play: Memory – Is That Critical to Autonomous Driving?” Micron, https://www.micron.com/about/blog/2017/october/cinco-play-memory-is-that-critical-to-autonomous-driving.
  13. McKinsey & Company. “Advanced Chip Packaging: How Manufacturers Can Play to Win.” McKinsey & Company, https://www.mckinsey.com/industries/semiconductors/our-insights/advanced-chip-packaging-how-manufacturers-can-play-to-win.

The post Powering The Automotive Revolution: Advanced Packaging For Next-Generation Vehicle Computing appeared first on Semiconductor Engineering.

Enabling New Applications With SiC IGBT And GaN HEMT For Power Module Design

By Shela Aboud
April 18, 2024, 09:05

The need to mitigate climate change is driving a need to electrify our infrastructure, vehicles, and appliances, which can then be charged and powered by renewable energy sources. The most visible and impactful electrification is now under way for electric vehicles (EVs). Beyond the transition to electric engines, several new features and technologies are driving the electrification of vehicles. The number of sensors in a vehicle is skyrocketing, driven by autonomous driving and other safety features, while a modern software-defined vehicle (SDV) is electrifying everything from air-conditioned seats to self-parking technology.

An important technology for EVs and SDVs is power modules. These are super high-voltage devices that convert one form of electricity to another (e.g., AC to DC), which is necessary to convert the vehicle battery energy to a current that can run the vehicle’s electrical system, including the drive train. These modules handle the highest power loads and are rated at thousands of volts. The design of power devices, which are the fundamental electronic components of the power modules, is crucial, as a bad design can lead to catastrophic events.

Power devices, much more than other types of electrical devices, are designed for specific applications. In comparison, logic transistors can be used in everything from toasters to smartphones. Not only does the architecture of power devices change at higher voltages, different power ratings, or higher switching frequencies as needed, but the material can change as well.

New power requirements need wide-band gap materials

To meet new and future power demands for EVs, electric infrastructure, and other novel electrical systems, wide-band gap (WBG) materials are being developed and introduced. Silicon carbide (SiC) IGBTs are now available and being deployed, while gallium nitride (GaN) HEMTs are a promising technology that is in the development stage.

Power density vs. switching frequency of power devices based on different materials.

Continuing with our EV example, SiC inverters can generally increase the potential range by approximately 10%, even after accounting for other design considerations. In addition, increasing the drive train voltage from the 400V range to 800V can cut charging times roughly in half, since doubling the voltage doubles the power delivered at the same charging current. These voltages are only practical to realize with wide-band gap materials like SiC-based power devices. Tesla introduced SiC MOSFETs in the Model 3 back in 2018. Since then, numerous automotive manufacturers have also adopted SiC in their EVs, including Hyundai and BMW, for example.

GaN still has many design hurdles to overcome to improve reliability and decrease cost. But if it can be made affordable, perhaps the next generation of EVs will allow for charging in seconds with ranges of thousands of miles.

Simulating power devices

Because of the huge number of design parameters, simulation is important in the design of power devices. One crucial part of device design is the calculation of the breakdown voltage, the voltage at which the device can essentially melt or catch fire and will never operate again. These simulations need to be highly physics-based and capture the mechanisms by which electrons can be released or absorbed by the crystal lattice of these materials. The wider band gaps of WBG materials like SiC and GaN increase the breakdown voltage. In addition, these materials have a smaller effective electron mass (the effective mass of an electron in a material dictates how fast it will move in an electric field), which enables higher switching frequencies in devices based on these WBG materials.
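
A simplified one-dimensional estimate illustrates why the wider band gap matters for breakdown voltage. The critical-field values below are textbook-order assumptions, and the triangular-field formula V_br ≈ E_crit × W / 2 ignores real design details such as doping optimization and edge termination.

```python
# Simplified 1-D breakdown estimate for a fixed drift-layer thickness (triangular field profile).
# Critical electric fields are textbook-order assumptions, not device data.
E_CRIT_MV_PER_CM = {"Si": 0.3, "4H-SiC": 2.5, "GaN": 3.3}

drift_um = 10.0
for material, e_crit_mv_cm in E_CRIT_MV_PER_CM.items():
    w_cm = drift_um * 1e-4                      # µm to cm
    v_br = e_crit_mv_cm * 1e6 * w_cm / 2.0      # MV/cm -> V/cm, then volts
    print(f"{material}: ~{v_br:.0f} V for a {drift_um:.0f} µm drift layer")
```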

A critical area of all electronics design is variability and reliability. Device performance needs to be stable and last a long time. A key factor for variability and reliability is defects in the crystal lattice. These defects, or traps, act as charge centers that can drastically impact how well a device works. Simulation can also help to identify the types of traps, providing a mechanistic understanding of how the traps will impact the device physics. Recently, Synopsys issued a paper using first-principles quantum solutions to characterize specific traps in SiC with QuantumATK.

Going forward, wind energy, solar, home appliances, and even the electric grid itself are going to need new devices with different structures and materials. The future is extremely exciting for power devices, which can be found in our EVs and will soon power a huge range of applications across our society.

The post Enabling New Applications With SiC IGBT And GaN HEMT For Power Module Design appeared first on Semiconductor Engineering.

Exploring Process Scenarios To Improve DRAM Device Performance

By Yu De Chen
April 18, 2024, 09:04

In the world of advanced semiconductor fabrication, creating precise device profiles (edge shapes) is an important step in achieving targeted on-chip electrical performance. For example, saddle fin profiles in a DRAM memory device must be precisely fabricated during process development in order to avoid memory performance issues. Saddle fins were introduced in DRAM devices to increase channel length, prevent short channel effects, and increase data retention times. Critical process equipment settings like etch selectivity, or the gas ratio of the etch process, can significantly impact the shape of fabricated saddle fin profiles. These process and profile changes have significant impact on DRAM device performance. It can be challenging to explore all possible saddle fin profile combinations using traditional silicon testing, since wafer-based testing is time-consuming and expensive. To address this issue, virtual fabrication software (SEMulator3D) can be used to test different saddle fin profile shapes without the time and cost of wafer-based development. In this article, we will review an example of using virtual fabrication for DRAM saddle fin profile development. We will also assess DRAM device performance under different saddle fin profile conditions. This methodology can be used to guide process and integration teams in the development of process recipes and specifications for DRAM devices.

The challenge of exploring different profiles

Imagine that you are a DRAM process engineer, and have received nominal process conditions, device specifications and a target saddle fin profile for a new DRAM design. You would like to explore some different process options and saddle fin profiles to improve the performance of your DRAM device. What should you do? This is a common situation for integration and process engineers during the early R&D stages of DRAM process development.

Traditional methods of exploring saddle fin profiles are difficult and sometimes impractical. These methods involve the creation of a series of unique saddle fin profiles on silicon wafers. The process is time-consuming, expensive, and in many cases impractical, due to the large number of scenarios that must be tested.

One solution to these challenges is to use virtual fabrication. SEMulator3D allows us to create and analyze saddle fin profiles within a virtual environment and to subsequently extract and compare device characteristics of these different profiles. The strength of this approach is its ability to accurately simulate the real-world performance of these devices while doing so faster and less expensively than wafer-based testing.

Methodology

Let’s dive into the methodology behind our approach:

Creating saddle fin profiles in a virtual environment

First, we input the design data and process flow (or process steps) for our device in SEMulator3D. The software can then generate a “virtual” 3D DRAM structure and provide a visualization of saddle fin profiles (figure 1). In figure 1(a), a full 3D DRAM structure including the entire simulation domain is displayed. To enable detailed device study, we have cropped a small portion of the simulation domain from this large 3D area. In figure 1(b), we have extracted a cross sectional view of the saddle fin structure, which can be modified by varying a set of multi-etch steps in the process model. The section of the saddle fin that we would like to modify is identified as the “AA” (active area). We can finely tune the etch taper angle, AA/fin CD, fin height, taper angle and additional nominal device parameters to modify the AA profile.

Fig. 1: Process flow set up by SEMulator3D: (a) DRAM structure and (b) Cross section view of saddle fin along with key specifications of the saddle fin profile.

Using the structures that we have built in SEMulator3D, we can next assign dopants and ports to the simulated structure and perform electrical performance evaluation. Accurately assigning dopant species, and defining dopant concentrations within the structure, is critical to ensuring the accuracy of our simulation. In figure 2(a), we display a dopant concentration distribution generated in SEMulator3D.

Ports are contact points in the model which are used to apply or extract electrical signals during a device study. Proper assignment of the ports is very important. Figure 2(b) provides an example of port assignment in our test DRAM structure. By accurately assigning the ports and dopants, we can extract the device’s electrical characteristics under different process scenarios.

Fig. 2: (a) Dopant concentration distribution generated in SEMulator3D and (b) port assignments (in blue) at the drain, source, and gate of the device.

Manufacturability validation

It is important to ensure that our simulation models match real world results. We can validate our model against cross-sectional images (SEM or TEM images) from an actual fabricated device. To ensure that our simulated device matches the behavior of an actual manufactured chip, we can create real silicon test wafers containing DRAM structures with different saddle fin profiles. To study different saddle fin profiles, we will use different etch recipes on an etch machine to vary the DRAM wordline etch step. This allows us to create specific saddle fin profiles in silicon that can be compared to our simulated profiles. A process engineer can change etch recipes and easily create silicon-based etch profiles that match simulated cross section images, as shown in figure 3. In this case, the engineer created a nominal (Process of Record) profile, a “round” profile (with a rounded top), and a triangular shaped profile (with a triangular top). This wafer-based data is not only used to test electrical performance of the DRAM under different saddle fin profile conditions, but can also be fed back into the virtual model to calibrate the model and ensure that it is accurate during future use.

Fig. 3: Cross section images vs. models: (a) Nominal condition (Process of Record), (b) Round profile and (c) Triangle profile.

Device simulation and validation

In the final stage of our study, we will review the electrical simulation results for different saddle fin profile shapes. Figure 4 displays simulated electrical performance results for the round profile and triangular saddle fin profile. For each of the two profiles, the value of the transistor Subthreshold Swing (SS), On Current (Ion), and Threshold Voltage (Vt) are displayed, with the differences shown. Process integration engineers can use this type of simulation to compare device performance using different process approaches. The same electrical performance differences (trend) were seen on actual fabricated devices, validating the accuracy and reliability of our simulation approach.
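
The article reports these metrics as relative differences rather than raw data, but as a generic illustration of how such figures of merit are extracted from a simulated Id–Vg sweep (this is not SEMulator3D's interface, and the synthetic curve below is an assumption), a minimal Python sketch might look like this:

import numpy as np

# Synthetic Id-Vg sweep: ~75 mV/dec subthreshold slope with a capped on-current.
vg = np.linspace(0.0, 1.2, 121)                        # gate voltage, V
id_a = np.minimum(1e-12 * 10**(vg / 0.075), 1e-4)      # drain current, A

# Subthreshold swing: average dVg / dlog10(Id) over the subthreshold decades.
sub = (id_a > 1e-11) & (id_a < 1e-8)
ss_mv_per_dec = 1e3 * np.polyfit(np.log10(id_a[sub]), vg[sub], 1)[0]

# Constant-current threshold voltage: first Vg where Id reaches 100 nA.
vt = vg[np.searchsorted(id_a, 1e-7)]

# On current at Vg = Vdd.
ion = id_a[-1]

print(f"SS = {ss_mv_per_dec:.0f} mV/dec, Vt = {vt:.2f} V, Ion = {ion:.1e} A")

Comparing these three figures of merit between the round and triangular profiles is exactly the kind of trend check the virtual experiments above provide.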

Fig. 4: Device electrical simulation results: the transistor performance difference between the Round and Triangular Saddle Fin profile is shown for Subthreshold Swing (SS), On Current (Ion) and Threshold Voltage (Vt).

Conclusions

SEMulator3D provides numerous benefits for the semiconductor manufacturing industry. It allows process integration teams to understand device performance under different process scenarios, and lets them easily explore new processes and architectural opportunities. In this article, we reviewed an example of how virtual fabrication can be used to assess DRAM device performance under different saddle fin profile conditions. Figure 5 displays a summary of the virtual fabrication process, and how we used it to understand, optimize and validate different process scenarios.

Fig. 5: Summary of the virtual fabrication process used in this study: model setup, exploration of process conditions, electrical analysis, and final silicon verification, repeated as a loop as new information is collected.

Virtual fabrication can be used to guide process and integration teams in the development of process recipes and specifications for any new memory or logic device, and to do so at greater speed and lower cost than silicon-based experimentation.

The post Exploring Process Scenarios To Improve DRAM Device Performance appeared first on Semiconductor Engineering.

Advanced Packaging Design For Heterogeneous Integration

By: CP Hung
April 18, 2024 at 09:03

As device scaling slows down, a key system functional integration technology is emerging: heterogeneous integration (HI). It leverages advanced packaging technology to achieve higher functional density and lower cost per function. With the continuous development of major semiconductor applications such as AI HPC, edge AI, and autonomous electric vehicles, traditional chips are transforming into smaller, well-partitioned chiplets that require chip-to-chip interconnections to be denser, faster, and more reliable. This boosts the demand for heterogeneous integration and, with it, for innovative advanced packaging technologies.

HI uses advanced packaging to integrate chiplets with heterogeneous designs and process nodes into a single package. This allows enterprises to choose optimum process nodes for specific system demands, such as 3nm for computing chiplets or 7nm for radio frequency chiplets, or to quickly produce super chips with specific functions in a cost-effective manner. HI not only aims for higher interconnection density, but also integrates the various functional components, such as logic chips, sensors, memory, and others, that are needed to complete the whole system in one package. Overall energy efficiency and performance are greatly improved, while package size can be significantly reduced.

Advanced packaging solutions for AI HPC

The typical high-density advanced package size for AI cloud computing processors is 55mm x 55mm or more, and contains a 5-2-5 (top 5 layers, middle 2 layers, bottom 5 layers) advanced substrate, or even up to 11-2-11 wiring layers. Chiplets can be interconnected by fan-out technology with a silicon bridge, or by 2.5D with a Si interposer as the integration platform. Through these techniques, the industry aims to gain more computing power within the same space.

ASE provides high-density packaging solutions, including Flip Chip Ball Grid Array (FCBGA), Fan Out Chip-on-Substrate (FOCoS), FOCoS-Bridge, and 2.5D. The chip-to-chip interconnections in FCBGA are accomplished through the BGA substrate, and its minimum L/S (line width/line spacing) is only about 10μm/10μm. The very popular and in-demand CoWoS (Chip on Wafer on Substrate) is a 2.5D packaging technology that uses an RDL (redistribution layer) on a Si interposer to connect chiplets, and its L/S can be significantly reduced to 0.5μm/0.5μm.

In the Si interposer of a 2.5D package, all the chiplets are connected in a side-by-side arrangement. As the required number of chiplets increases, the interposer area becomes larger and larger, resulting in fewer and fewer Si interposer chips that can be made from each 12-inch wafer (generally less than 50). This significantly increases the manufacturing cost of 2.5D packaging. However, not all applications require 0.5μm/0.5μm L/S, so ASE came up with FOCoS, which uses fan-out technology’s RDL to integrate different chiplets, and its L/S can reach 2μm/2μm. This gives the market alternative solutions with lower costs. In addition, ASE’s FOCoS-Bridge technology uses a silicon bridge to provide high-density routing for interconnecting different chips (such as logic chips and memory) in areas that require high-speed transmission, and uses fan-out RDL to integrate the other areas. As such, it delivers flexibility of both 0.5μm/0.5μm and 2μm/2μm L/S design, while achieving a significant increase in packaging density and bandwidth.
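
A rough dies-per-wafer estimate shows why large interposers get expensive so quickly. The interposer sizes below and the standard gross-die approximation are illustrative assumptions, not figures from ASE:

import math

def gross_dies_per_wafer(die_w_mm: float, die_h_mm: float, wafer_d_mm: float = 300.0) -> int:
    """Common gross die-per-wafer approximation (ignores scribe lanes, edge exclusion, yield)."""
    die_area = die_w_mm * die_h_mm
    return int(math.pi * (wafer_d_mm / 2) ** 2 / die_area
               - math.pi * wafer_d_mm / math.sqrt(2 * die_area))

# Illustrative 2.5D interposer sizes on a 12-inch (300 mm) wafer.
for w, h in [(25, 25), (40, 40), (55, 55)]:
    print(f"{w} x {h} mm interposer: ~{gross_dies_per_wafer(w, h)} gross dies per wafer")

Even before any yield loss, a large interposer yields only a few tens of gross candidates per wafer, consistent with the "generally less than 50" figure above.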

High performance chip-package-system co-design

To achieve the aforementioned high bandwidth, the chip, package, and entire system must be designed together to achieve holistic design optimization, instead of just considering the individual parts. When using electronic design automation (EDA) for design optimization, consideration must be given to the overall signal behavior along the entire transmission path, including the Cu pillar, RDL fine line, TSV, μbump, etc. Eye diagrams can then be used to analyze the SerDes link’s electrical performance. When designing differential pairs for high-speed signals, it is necessary to reduce return and insertion loss, especially in the operating frequency band. From chip to package to the entire system, Taiwan’s manufacturing advantage lies in the ability to accomplish the turnkey design process, from beginning to end.
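
For readers less familiar with the loss metrics mentioned above, a tiny, generic Python sketch (the S-parameter magnitudes and frequencies are made-up illustrative values, not data from any ASE design) shows how return and insertion loss are read from channel S-parameters:

import numpy as np

# Illustrative (assumed) S-parameter magnitudes for a chip-to-chip channel.
freq_ghz = np.array([4.0, 8.0, 16.0])
s11 = np.array([0.05, 0.08, 0.12])   # reflection at the input port
s21 = np.array([0.95, 0.85, 0.70])   # transmission through the channel

return_loss_db = -20 * np.log10(s11)     # larger value = less energy reflected
insertion_loss_db = -20 * np.log10(s21)  # smaller value = less signal lost

for f, rl, il in zip(freq_ghz, return_loss_db, insertion_loss_db):
    print(f"{f:5.1f} GHz: return loss {rl:4.1f} dB, insertion loss {il:3.1f} dB")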

Providing more computing power with less energy

The industry is currently focused on optimizing energy efficiency. One of the key questions being asked is whether the power regulation and decoupling components, which were previously located on the system board, can be moved closer to the package or processor chip. There is even talk of redesigning the on-chip power delivery network (PDN), including supplying power directly from the backside of the chip (Backside PDN).

Power integrity design for power delivery network (PDN)

Optimizing power integrity and minimizing noise can be achieved by strategically positioning the decoupling capacitors. Ideally, a capacitor should be placed as close to the chip as possible, but this depends on the capacitor’s size and the manufacturing process, both of which can impact cost and performance. Traditional surface-mount technology (SMT) capacitors are relatively large, but chip-level silicon capacitors (Si-Caps) are now available that offer decent capacitance values.
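
One common way to reason about where that capacitance must sit is the target-impedance view of the PDN. The supply voltage, ripple budget, current step, and loop inductances in the Python sketch below are illustrative assumptions, not values from the article:

import math

vdd = 0.75            # core supply voltage, V (assumed)
ripple = 0.05         # allowed ripple as a fraction of Vdd (assumed)
i_step = 100.0        # worst-case transient current step, A (assumed for a large HPC die)

z_target_ohm = vdd * ripple / i_step   # impedance the PDN must stay below
print(f"Target PDN impedance: {z_target_ohm * 1e3:.2f} mOhm")

# The loop inductance between a decoupling capacitor and the die sets the frequency
# above which that capacitor stops helping (2*pi*f*L exceeds the target impedance).
for loop_nh in (1.0, 0.1, 0.01):       # roughly board-level, package-level, near-die loops
    f_max_hz = z_target_ohm / (2 * math.pi * loop_nh * 1e-9)
    print(f"{loop_nh:5.2f} nH loop: effective up to ~{f_max_hz / 1e6:5.2f} MHz")

Shrinking that loop inductance is the quantitative argument for moving capacitance from the board into the package and, with Si-Caps, closer still to the die.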

UCIe (Universal Chiplet Interconnect Express) Consortium

Traditionally, there are many standard communication protocols (such as Block-to-Block, Memory Bus, or Interconnection Interface Protocols) at the chip level and the board level for system designers. Industry protocols that specify package-level integration are growing, especially given the need for a universal interface for chiplet integration using 2.5D and FOCoS packaging technologies.

In March 2022, Intel invited upstream and downstream manufacturers in the semiconductor industry chain to form the UCIe Consortium, and a standardized data transmission architecture for chiplet integration was introduced to reduce the cost of advanced packaging design. ASE is proud to be a founding member (Promoter member).

ASE offers a diverse range of advanced packaging types. We have developed packaging design specifications that can be integrated with foundry solutions specifications as well as the system requirements of original equipment manufacturers (OEMs) and cloud service providers to create a comprehensive UCIe package standard. The standard can assist in realizing ubiquitous chiplet heterogeneous integration for HPC applications using various advanced packaging technology architectures, such as 2.5D, 3D, FOCoS, Fan-out, EMIB, CoWoS, etc. Headquartered in Taiwan, ASE is enthusiastically participating in the formulation of international standards and relentlessly providing integrated solutions to the global industry.

Heterogeneous integration has been in development for many years. It can be used to integrate not only homogeneous and heterogeneous chiplets but also other passive and active components including connectors, into a single package. Achieving this requires not only advanced packaging technologies but also design and testing coordination. ASE offers a comprehensive one-stop service solution that includes system design, packaging, and testing to help customers shorten chip design cycles and accelerate product innovation.

The post Advanced Packaging Design For Heterogeneous Integration appeared first on Semiconductor Engineering.

Electromigration Concerns Grow In Advanced Packages

By: Laura Peters
April 18, 2024 at 09:09

The incessant demand for more speed in chips requires forcing more energy through ever-smaller devices, increasing current density and threatening long-term chip reliability. While this problem is well understood, it’s becoming more difficult to contain in leading-edge designs.

Of particular concern is electromigration, which is becoming more troublesome in advanced packages with multiple chiplets, where various bonding and interconnect schemes create abrupt changes in materials and geometries. For example, electrons may travel from a copper trace to a solder bump of SAC (tin-silver-copper), then to an underbump metal based on nickel, and finally to an interposer copper pad. That, in turn, can cause atoms to shift, resulting in failures in solder joints or in copper redistribution layers in high-density fan-out packages.

“From an electromigration perspective, advanced packaging causes increased packaging density, reduced packaging size, and the dimensions of interconnects to shrink, so the current density is now in close proximity to the maximum current density limit per EM design rules,” said Dermott Lynch, director of technical product management in Synopsys‘ EDA Group.

Any additional stresses the package may be subjected to during assembly and use, whether mechanical or thermal, also can help induce or accelerate electromigration. “Electromigration, in general, gets worse due to temperature and stress, both of which advanced packaging increases,” said Lynch. “Electromigration is also cumulative, so essentially it integrates all the temperature highs and stress over the lifetime until an interconnect breaks down or shorts. Larger processing temperature and operation temperature will make it worse, but it also depends on time under that temperature.”

In fact, managing thermal pathways is perhaps the greatest challenge associated with the movement toward the ultimate package, a 3D-IC. “Electromigration is very temperature-sensitive,” said Marc Swinnen, director of product marketing in Ansys’ Semiconductor Division. “Depending on your thermal map, your power integrity will have to adapt to the local temperature profile that you have. So when you look at a chip, you can calculate how much power the chip is putting out, but you cannot tell how hot the chip will get because ‘it depends.’ Is it sitting on a cold plate or sitting in the sun in the Sahara? System concerns come in, and multi-physics modeling is important to understanding these co-dependent effects.”

Thermal engineering also means moving heat away from the most vulnerable points of failure, such as solder bumps. “Effective thermal management is essential for bump reliability,” said Curtis Zwenger, vice president of engineering and technical marketing at Amkor. “Engineers are incorporating thermal enhancement techniques, such as the use of thermal interface materials and advanced heat dissipation solutions, to ensure that bumps are not subjected to excessive temperature-related stresses.”

Zwenger noted that engineers are looking into new materials, while optimizing the use of existing materials to minimize the possibility of electromigration. “Semiconductor packaging engineers are implementing a range of measures to enhance bump reliability and maximize bump yield. These strategies include new materials for solder bumps and underbump metallization, optimizing bump size, pitch and shape for reliability, advanced process control methods to control variability and maximize yield, and simulating and modeling reliability.”

What is electromigration?
Electromigration is the mass transport of metal atoms caused by the electron wind from current flowing through a conductor, typically copper. When current density is high enough, metal will diffuse in the direction of current flow, creating tiny hillocks downstream and leaving behind vacancies or voids. With enough electromigration, failures occur due to severe line thinning, causing opens, or due to hillocks that bridge adjacent lines, causing short circuits.

Electromigration is a diffusion-controlled mechanism that can take three forms — bulk, grain boundary, or surface diffusion, depending on the metal. Aluminum migrates by grain boundary diffusion whereas copper migrates on the surface or at its grain boundaries.

For most of the semiconductor industry’s history, electromigration was primarily an on-chip concern, and on-chip EM is now largely under control by reliability engineers. But with the scaling and rapid developments in advanced packaging — implementing TSVs, fan-out packaging with redistribution layers, and copper pillar bumps — electromigration has emerged as a major threat at the package level. Current flowing through the solder bump causes joule heating, and heat from other parts of the package may also dissipate through the solder bumps. EM can become an issue for solder joint connections between chip and interposer, or chip and PCB, as well as in RDLs. Solder joint failures typically manifest as voids or cracks.

Fig. 1: Electromigration can create short circuits between two interconnects through the development of hillocks, or an open circuit through the creation of voids in interconnect. Source: Ansys

Electromigration progresses more quickly at higher temperatures, at higher currents, under greater mechanical stress and in the presence of defects or impurities in the metal. Black’s equation describes an interconnect’s mean time-to-failure with respect to its temperature, current density and the activation energy needed to dislodge a metal atom as:

MTTF = A × J^(-N) × exp(Ea / (kT))

Here A is a constant that depends on the interconnect’s geometry and material, J is the current density, k is Boltzmann’s constant, T is temperature, Ea is the activation energy, and N is a scaling exponent that depends on the metal’s properties. Black’s equation is useful because it easily shows how shorter, wider interconnects will tend to have longer MTTF. In addition, electromigration time-to-failure very strongly depends on the interconnect’s temperature. That temperature is primarily the result of the chip’s environmental temperature, self-heating of the conductor caused by current flow, the heat from neighboring interconnects or transistors, and the thermal conductivity of the surrounding material.
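
As a quick numerical illustration of that temperature sensitivity, the hedged Python sketch below evaluates Black's equation at two junction temperatures. The prefactor, current-density exponent, and activation energy are generic assumed values chosen for illustration, not parameters from the studies cited in this article:

import math

K_BOLTZMANN_EV = 8.617e-5   # Boltzmann constant, eV/K

def black_mttf(j_a_per_cm2: float, temp_c: float,
               a: float = 1.0, n: float = 2.0, ea_ev: float = 0.9) -> float:
    """Relative MTTF from Black's equation, MTTF = A * J^(-N) * exp(Ea / kT).
    A is arbitrary here, so only ratios of the returned values are meaningful."""
    t_k = temp_c + 273.15
    return a * j_a_per_cm2 ** (-n) * math.exp(ea_ev / (K_BOLTZMANN_EV * t_k))

# Same current density, two interconnect temperatures: the ratio is the lifetime penalty.
ratio = black_mttf(5e5, 105.0) / black_mttf(5e5, 125.0)
print(f"Raising the interconnect from 105 C to 125 C shortens MTTF by ~{ratio:.1f}x")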

It is also important to note that electromigration is a runaway process. As current density and/or temperature increases, electromigration increases, which raises current density, causing more metal to migrate in a destructive feedback loop.

EM failure modes and allowable current density
In the case of copper redistribution layers in polyimide material, as current flows through the RDL, heat accumulates in the conductor due to Joule heating, which can degrade performance. As the required current density and Joule heating temperature increase in fine-line Cu RDL structures (<5µm lines and spaces), self-heating is considered a key factor in the reliability of high-density fan-out packages.

JiHye Kwon, senior manager of R&D at Amkor, recently used EM testing and Black’s equation to determine the electromigration failure mechanisms for a given RDL stack and high-density fan-out package with 2µm or 10µm wide RDL layers, 1,000µm long. [1]

High density fan-out is an emerging technology, as it features more aggressive scaling than wafer level fan-out packages. The three layers of copper RDL (3µm thick with Ta/Cu seed) were fabricated followed by polyimide fill, copper pillar deposition, die attach, and overmold. Kwon’s team tested both 2 and 10µm RDL at different current densities and temperatures until resistance increased by 100% (EM failure), but the maximum allowed current density corresponded with a 20% resistance increase. The failure modes occurred in two stages, first by void nucleation and growth and second with copper reduction and oxidation. The study yielded Ea and current density exponent values that can be useful in future designs of RDLs.

Meanwhile, a team of researchers from ASE recently demonstrated how susceptibility to electromigration is determined for copper pillar interconnects in flip chip quad flat no-lead (FCQFN) packages for high-power automotive applications. The multi-layered copper pillar bumps, with a Cu/Ni/Sn1.8Ag configuration, were bonded to a silver-plated copper leadframe and tested under extreme EM conditions of 10 kA/cm2 current density and temperatures of 150°C, 160°C and 180°C, while taking in-situ resistance measurements. [2] The EM failures appeared as rapid rises in electrical resistance that corresponded with the formation of intermetallic compounds and voids at the Cu/solder interfaces. The team built an EM prediction model of the interconnects based on a Black-type EM equation, following the JEDEC standard with five test conditions.

After statistical analysis of the sample lifetimes, the ASE team determined the activation energy of the Cu pillar interconnects in the FCQFN package to be 1.12 ± 0.03 eV. The maximum allowable current for the Cu pillar interconnects to last 10 years at a 105°C operating temperature with a 0.1% failure rate was greater than 2A. “The FCQFN package has great potential in terms of its excellent anti-EM performance for future high-power applications,” the article said.
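
That activation energy is what lets a short, hot stress test stand in for a 10-year field life. A minimal Arrhenius acceleration-factor sketch is shown below; the 1.12 eV value and the 105°C/150–180°C temperatures come from the study described above, while the formula considers temperature only (Black's full equation also scales with current density):

import math

K_BOLTZMANN_EV = 8.617e-5   # Boltzmann constant, eV/K

def arrhenius_acceleration(ea_ev: float, t_use_c: float, t_stress_c: float) -> float:
    """Temperature-only acceleration factor between use and stress conditions."""
    t_use_k, t_stress_k = t_use_c + 273.15, t_stress_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t_use_k - 1.0 / t_stress_k))

# Ea = 1.12 eV from the ASE study; 105 C use temperature vs. the 150/160/180 C stress legs.
for t_stress in (150.0, 160.0, 180.0):
    af = arrhenius_acceleration(1.12, 105.0, t_stress)
    print(f"Stress at {t_stress:.0f} C ages the joint ~{af:5.0f}x faster than use at 105 C")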

Designing/manufacturing for EM resiliency
Building electromigration resilience into advanced devices begins with using only EM-compliant linewidths in circuit designs based on the current density and heat profile that the interconnects will experience during operation over the lifetime of the device. Electromigration mitigation also requires process and materials engineering to ensure durability, for instance, of copper pillar bumps under BGA packages. It also calls for an optimized assembly process window and tight process control to prevent tiny violations of design rules that can later precipitate as EM failures.

As the industry makes its way toward true 3D packages, and eventually 3D-ICs, it seems clear that modeling and simulation will play an increasing role in determining many of the guard rails for manufacturing and assembly before manufacturing and assembly even begins. “Reliability modeling and simulation tools are being used to better understand the reliability of bump structures. This proactive approach helps in identifying potential issues before they arise, enabling engineers to implement preventive measures,” said Zwenger.

Modeling and simulation at the system level also will be essential to understanding the complex interplay between reliability mechanisms with thermal and mechanical stress in multi-chiplet systems during operation.

“Electromigration for stacked die is challenging,” said Synopsys’ Lynch. “Localized, die-to-die workloads cause repetitive current flow in specific areas. This generates local heat, increasing EM resulting in wire degradation, while producing even more heat. Reducing the thermal issue becomes critical to ensuring EM reliability.”

As stated previously, solder bumps can become a site for EM reliability failure. “Engineers fine-tune bump design in terms of bump size, pitch, and shape to ensure uniformity and reliability across the entire package. This includes the adoption of innovative Cu bump structures for improved mechanical and electrical properties,” said Amkor’s Zwenger.

In flip-chip BGA and other flip-chip applications, underfill materials — typically thermoset epoxies — are used to reduce the thermal stresses on solder bumps. “Underfill materials play a critical role in providing mechanical support and thermal stability to the bumps,” Zwenger said. “Engineers are investing in the development of advanced underfill formulations with enhanced properties, such as improved adhesion, thermal conductivity, and stress relief.”

Conclusion
Because of its dependence on temperature, electromigration is a failure mechanism to watch and plan for as devices continue to scale and systems integrators continue to cram more and more chiplets of various functions into advanced packages.

“In advanced technologies, the current density is now in close proximity to the maximum density,” said Synopsys’ Lynch. “Anything that causes an increase in temperature poses a threat. Designers of multi-die systems need to understand the impact of temperature and design systems to remove the heat.”

References

  1. JiHye Kwon, “Electromigration Performance Of Fine-Line Cu Redistribution Layer (RDL) For HDFO Packaging,” Semiconductor Engineering, Jan. 18, 2024, https://semiengineering.com/electromigration-performance-of-fine-line-cu-redistribution-layer-rdl-for-hdfo-packaging/
  2. -Y. Tsai, et al., “An Electromigration Study of Cu Pillar Interconnects in Flip-chip QFN Packaging under Extreme Conditions for High-power Applications,” 2023 IEEE 25th Electronics Packaging Technology Conference (EPTC), Singapore, 2023, pp. 326-332, doi: 10.1109/EPTC59621.2023.10457564.

Related Reading
What Can Go Wrong In Heterogeneous Integration
Workflows and tools are disconnected, mechanical stress is ill-defined, and complete co-planarity is nearly impossible. But there are solutions on the horizon.
Thermal Integrity Challenges Grow In 2.5D
Work is underway to map heat flows in interposer-based designs, but there’s much more to be done.
Chiplets: 2023 (EBook)
What chiplets are, what they are being used for today, and what they will be used for in the future.

The post Electromigration Concerns Grow In Advanced Packages appeared first on Semiconductor Engineering.

What Works Best For Chiplets

By: Anne Meixner
April 18, 2024 at 09:08

The semiconductor industry is preparing for the migration from proprietary chiplet-based systems to a more open chiplet ecosystem, in which chiplets fabricated by different companies of various technologies and device nodes can be integrated in a single package with acceptable yield.

To make this work as expected, the chip industry will have to solve a variety of well-documented technical and business issues, and it will have to rein in some of the grander visions of what’s possible — at least initially. The basic challenge is aligning domain-specific performance demands of end systems, which contain a growing number of chiplets, with the assembly and packaging capabilities and methodologies of IDMs, foundries, and OSATs. This includes the creation of assembly design kits (ADKs) that are roughly the equivalent of process development kits (PDKs), which today are codified with manufacturing specifications.

A PDK provides the appropriate level of detail needed to develop planar chips, marrying design tools with fab processes to achieve a predictable outcome. But making this work for an ADK with heterogeneous chiplets is many times more complex. Design and assembly teams need to manage thermal, mechanical, and electrical co-dependencies that cause electrical and mechanical stress, resulting in warpage, reduced yield, and reliability issues under real-world workloads. Layered on top of this are the business and legal issues related to packaging different devices from different manufacturers.

“Chiplets are a growing trend, especially in the HPC and networking segments, with potential to scale to other applications,” said Gabriela Pereira, technology and market analyst for semiconductor packaging at Yole Intelligence. “The industry has understood that high-end advanced packaging technologies are needed to connect them — but that’s much more complex than it seems. Connecting chiplets requires the design of high-bandwidth interconnections at the package level, which can take different forms — e.g., 2D, 2.5D or 3D — while ensuring that the thermal and power requirements are fulfilled.”

Commercial chiplet-based devices generally are domain-specific, and sometimes developed for a specific workload. So despite a big industry push to create a LEGO-like mix-and-match ecosystem for chiplets — which today includes multiple IP and EDA vendors, foundries, memory suppliers, OSATs, substrate suppliers, etc. — making this work as planned will require time and a massive amount of work.

Fig. 1: System assembly requires tighter coupling between chipmakers and OSATs. Source: ASE

In creating heterogeneous integrated designs, it’s essential to have much tighter collaboration between foundries, IDMs, OSATs, and PCB manufacturers. And because each chiplet-based system will be customized, the number of assembly processes will grow substantially. For example, one OSAT noted that among its ~5,000 customers, there are ~1,000 different assembly processes.

That diversity in products and processes makes it difficult to achieve predictable results by choosing chiplets from a large menu of options.

“We’ve already encountered a lot of limitations including not only the silicon, but also integration and the ecosystem,” said Lihong Cao, senior director at ASE Group, at MEPTEC’s Road to Chiplets forum. She stressed that customers continue to push for a low-cost chiplet assembly process, which is creating constructive tension between developing a sophisticated assembly process and the economic realities of different industry sectors. Computing devices for automotive have a higher cost sensitivity than for data centers, for example, but their chips operate in a harsher environment over a longer lifetime.

What’s needed is a defined set of assembly process recipes — basically, a highly limited menu of choices — that are specific to the end application (HPC, automotive, RF telecommunications) in order to lower the cost of chiplet-based systems. OSATs and foundries already are moving in that direction for high-performance computing. For example, at its 2024 Direct Connect event, Intel shared its six different package processes for chiplets. TSMC and Samsung also offer defined sets of chiplet processes. But the success of these assembly processes requires engineering teams to co-optimize the flows, processes, and materials to best match the system requirements.

Fig. 2: Integrated platform development requires tightly coupled architectural analysis that co-optimizes the system design to architecture to assembly process and packaging material selections. Source: Applied Materials

“Previously, when we designed a system we only had to be worried about the system requirements. Once we start segregating into dies and reassembling them, we have to start looking at other things. We have to worry about putting them together while considering signal integrity between dies, reliability, thermals, etc.,” said Itai Leshniak, director of AI systems solutions at Applied Materials, at the MEPTEC forum. “If we take the case of AI-based computer vision, we can break it down layer by layer — on the hardware side, determining which computer vision processors, sensors, filters are needed to break it down into the architecture at layer. Then we begin to go through how to package all these chiplets, and then which materials to use and how to take advantage of those materials.”

Materials and assembly processes
Conceptually, design engineers will use chiplets to design a system. However, the co-design and integration is far more complicated than assembling a set of LEGO blocks, because the chiplets, interposers, and package substrates come from different design houses and manufacturing facilities. The advanced packaging technologies used to connect chiplets vary with an alphabet soup of names — FOWLP, FOPLP, CoWoS, etc., each of which poses additional design and material choices along with certain process limitations.

Fig. 3: There are a multitude of choices in multi-die packaging from the high-level layout to substrates, materials, bonding methods, and cooling materials. Source: Synopsys

Currently engineering teams determine the tradeoffs among the different packaging options to select materials, derive a process recipe, and determine design rules.

Materials are a good starting point. “Materials are very important because they enable new products and packaging technologies,” said Tanja Braun, deputy group manager at the Fraunhofer Institute for Reliability and Microintegration IZM. “As you move into more advanced packaging, process is getting much more complex because you are putting more things together. In the end, it’s a combination of equipment, materials, and process development.”

There are three thermal parameters that are critical in package assembly processes — coefficients of thermal expansion (CTE), glass transition temperature (Tg), and thermal conductivity. These factors affect how a material behaves in manufacturing and packaging processes, as well as how it behaves in the field.

“Challenges for our materials include temperature limitations of different die,” said Rama Puligadda, CTO at Brewer Science. “We have to ensure that the temperatures used for bonding materials don’t exceed the thermal limitations of any of the chips that are being integrated into the package. Additionally, there may be some subsequent processes like redistribution layer (RDL) formation or molding. Our materials have to survive those processes. They have to survive the chemicals they come in contact with throughout the packaging process scheme. Mechanical stresses in the package add additional challenges for bonding materials.”

Within a stack of chiplets-on-substrate with an optional interposer, the attributes of each material affect the thermal-mechanical stresses between neighboring materials, as well. This directly impacts interconnect dimensional control over a large substrate area.

“If you go work the numbers, you will find that the level of tolerance and control required is frightening,” said Dick Otte, CEO of Promex Industries. “You’re talking about controlling dimensions equivalent to the width of a grass blade over the length of a football field, so that’s roughly 1 in 100,000.”

The goal is uniform heating of the structure in reflow in order to attain the best process results and to avoid cracking. “When you’re taking it through a 250 degrees centigrade temperature change, then you need to heat up slowly so that the top doesn’t get hot before the bottom does,” said Otte.
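
A back-of-the-envelope CTE-mismatch calculation shows why that slow, uniform heating and the tolerances above matter. The material values and dimensions in the Python sketch below are typical textbook numbers chosen for illustration, not data from the article:

# Differential thermal expansion between a silicon die and an organic substrate (assumed values).
cte_si_ppm_per_k = 2.6          # silicon, typical
cte_substrate_ppm_per_k = 17.0  # organic build-up substrate, typical
span_mm = 50.0                  # lateral span of interest across the package, assumed
delta_t_k = 230.0               # roughly room temperature to solder reflow peak, assumed

mismatch_um = (cte_substrate_ppm_per_k - cte_si_ppm_per_k) * 1e-6 * delta_t_k * span_mm * 1e3
print(f"Unconstrained differential expansion over {span_mm:.0f} mm: ~{mismatch_um:.0f} um")

# Compare with the 1-in-100,000 dimensional control quoted above over the same span.
tolerance_um = span_mm * 1e3 / 100_000
print(f"1-in-100,000 control over {span_mm:.0f} mm is ~{tolerance_um:.1f} um")

The large gap between free expansion and the required control is, roughly speaking, what the bonded stack and the carefully staged reflow profile have to absorb.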

Multi-physics to comprehend co-optimization
Multi-physics modeling has become the go-to method for co-optimizing packaging design and assembly process development. That affects both permanent and temporary materials, as well as the placement of processors, memories, and other components.

“You’re always looking at what the customer needs electrically, because that’s going to help define the material set. The material set is broadly applicable to a bunch of speed ranges. As long as you don’t step outside of those electrical specifications, theoretically you should be okay,” said Mike Kelly, vice president of advanced package and technology integration at Amkor Technology.

To save many iterations of empirically based development, engineers can use physics-based simulations to understand the impact of a material set’s properties on the assembly process, power/thermals, and mechanical vibrations.

Consider that HPC chiplet products can consume ~1,000 watts at peak performance, so the power and thermal interactions need to be fully understood.

“We’ve struggled, as everybody has, with this blizzard of complexity in the different techniques. Not only do they vary across different vendors, but they’re also varying over time,” said Marc Swinnen, director of product marketing at Ansys. “Our approach has been to identify the essentials that need to be worked on. We work jointly with customers to develop a simulation flow that actually achieves what is needed now.”

Materials are just one piece of the puzzle. “Then there’s the assembly stresses that need to be modeled to know whether you can correctly assemble this device. The third one is mechanical vibration,” Swinnen said. “Can your device withstand those regular vibrations? Modeling these attributes ties directly into our mechanical analysis tools — acoustic, thermal, vibration, etc. In the end, you’re going to have to do physics simulation. We’re trying to make it accessible to people in many different forms. But the bedrock of our tool offerings is that we have the meshing simulation and analysis. It’s a question of getting the data in the right format in a way that’s practical and usable.”

Evolving assembly design kits
For conventional packages, OSATs provide design rules for each packaging technology. These need to consider electrical, mechanical, and thermal design requirements and manufacturing process limitations. In effect, this is a multi-dimensional bounding box. Suppliers perform iterations with the customer to create a product-specific process recipe.

Rules cover the macro-level attributes. “At a minimum, what you see from design rules is maximum package size, maximum silicon size, and whether silicon can be [mounted] on both sides of the substrate, such that when you follow these constructions the final product will have a lifetime of 1,000 thermal cycles, for example,” said Fraunhofer’s Braun.

In addition, design rules need to describe routing constraints for the interposer and/or redistribution layer, such as RDL line widths and spaces, ball-grid/pillar/pad size and pitches, and the maximum number of interconnections.

Breaking up a monolithic HPC device into multiple dies shifts some of the semiconductor design/process complexity into the packaging space. That makes things much more complicated. Consider that connecting 10 dies requires on the order of 100,000 traces within the interposer’s or substrate’s redistribution layer.

To cope with the complexity at the chip level, the IC industry has long relied upon process design kits (PDKs) to capture design rules in an electronic file that can be imported into EDA tools. Their counterparts, assembly design kits (ADKs), are relatively immature.

“We call it Smart Package,” said Amkor’s Kelly. “It’s an ADK that we give to every customer who’s doing their own design. It is a set of macros, and a customization of a database tailored to a customer’s particular design. For chiplets, it is a high-density fan-out package technology. And it’s cognizant of the limitations for metal density and metal spacing, etc. This makes it easier for us to do design rule checks (DRCs).”

But right now, with the level of customization still required, how an ADK is derived and what it entails is in flux. Partnerships between EDA tool vendors, OSATs, and semiconductor device providers are required.

“We come from the IC world where everything is very rigid,” said Kenneth Larsen, director of 3D-IC product management in Synopsys‘ EDA Group. “On the OSAT side, and maybe this is because it’s so custom, design rules seem like a data sheet. Then you build and optimize the products over time or in collaboration with the OSAT. It’s not an electronic exchange. In the IC world, this would be totally unheard of. While it is possible to tweak a few things, you have a qualification process. And it seems like that’s not there yet for packaging.”

Materials and associated assembly recipes ultimately drive what’s possible for a chiplet-substrate stack in terms of pillar pitch, RDL line widths and spaces, bonding processes, and chiplet placement tolerances. But within a handful of ADKs, there are many possible interactions to consider.

The current focus is on co-optimizing the system design with the chiplet assembly process, leading to an assembly process development flow (see figure 4). This flow accounts for the customization each assembly process needs, and it creates the necessary design rules to be used by package designers.

Fig. 4: Chip-package hybrid flow. Source: ASE

“First you need to define your structure using chiplets. Are you using substrate RDL, 2.5D RDL, or a bridge? After that you need to consider your structure’s materials. What kind of material do you choose to fulfill your electrical performance and the mechanical stress requirements,” said Cao. “After that, you do pre-analysis to ensure all the structures and materials you use are workable in terms of electrical, warpage and mechanical stress.”

The design planning flow also includes the evaluation of die-to-die interconnects through the co-design sign-off documents.

Conclusion
Before chiplet-based designs can be enabled outside the IDM model, the industry needs to complete the ecosystem that bridges the manufacturing and design complexity, because the system architecture must be co-optimized with materials, process, and integration capabilities. While this would be easier with a set of well-defined products for the chiplet ecosystem to drive forward on, that has not happened yet.

Engineering teams across the design and manufacturing stack will need to collaborate to choose the appropriate materials, architectures, processes, etc., to develop a final chiplet-based product that is designable. As ASE group’s Cao noted, “An integrated design and manufacturing ecosystem is important. It is very critical to have collaboration among IDM, vendors, materials suppliers. Everyone needs to work together to really enable integration for the real applications.”

Related stories
Fan-Out Packaging Gets Competitive
Manufacturability reaches sufficient level to compete with flip-chip BGA and 2.5D.

Inside Panel-Level Fan-Out Technology
Fraunhofer’s panel experts dig into why this approach is needed and where the challenges are to making it work.

Next Steps For Panel-Level Packaging
Where it’s working, and what challenges remain for even broader adoption.

Mini-Consortia Forming Around Chiplets
Commercial chiplet marketplaces are still on the distant horizon, but companies are getting an early start with more limited partnerships.

What Can Go Wrong In Heterogeneous Integration
Workflows and tools are disconnected, mechanical stress is ill-defined, and complete co-planarity is nearly impossible. But there are solutions on the horizon.

Mechanical Challenges Rise With Heterogeneous Integration
But gaps in tools make it difficult to address warpage, structural issues, and new materials in multi-die/multi-chiplet designs.

The post What Works Best For Chiplets appeared first on Semiconductor Engineering.

Enabling Advanced Devices With Atomic Layer Processes

By: Katherine Derbyshire

Atomic layer deposition (ALD) used to be considered too slow to be of practical use in semiconductor manufacturing, but it has emerged as a critical tool for both transistor and interconnect fabrication at the most advanced nodes.

ALD can be speeded up somewhat, but the real shift is the rising value of precise composition and thickness control at the most advanced nodes, which makes the extra time spent on deposition worthwhile.

ALD is a close cousin of chemical vapor deposition, initially introduced in high volume to the semiconductor industry for hafnium oxide (high-k) gate dielectrics. Both CVD and ALD are inherently conformal processes. Deposition occurs on all surfaces exposed to a precursor gas. In ALD, though, the reaction is self-limiting.

The process works like this: First, a precursor gas (A) is introduced into the process chamber, where it adsorbs onto all available substrate sites. No further adsorption occurs once all surface sites are occupied. An inert purge gas, typically nitrogen or argon, flushes out any remaining precursor gas, then a second precursor (B) is introduced. Precursor B reacts with the chemisorbed precursor A to produce the desired film. Once all of the adsorbed molecules are consumed, the reaction stops. After a second purge step, the cycle repeats.

ALD opportunities expand as features shrink
The step-by-step nature of ALD is both its strength and its weakness. Depositing one monolayer at a time gives manufacturers extremely precise thickness control. Using different precursor gases in different ratios can tune the film composition. Unfortunately, the repeated precursor/purge gas cycles take a lot of time. In an interview, CEA-Leti researcher Rémy Gassilloud estimated that in a single wafer process, two minutes per wafer is the maximum cost-effective process time. But two minutes is only enough time to deposit about a 2nm-thick film.
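
Those numbers follow directly from the per-cycle bookkeeping. The step times and growth-per-cycle in the Python sketch below are generic illustrative assumptions chosen to be consistent with the two-minute, ~2nm estimate, not values from Gassilloud:

# Rough ALD throughput sketch (illustrative step times and growth per cycle, assumed).
growth_per_cycle_nm = 0.1      # typical order of magnitude for many ALD chemistries
step_times_s = {               # one full cycle: A dose, purge, B dose, purge
    "precursor A dose": 1.0,
    "purge 1": 2.0,
    "precursor B dose": 1.0,
    "purge 2": 2.0,
}
cycle_time_s = sum(step_times_s.values())

budget_s = 120.0               # ~2 minutes per wafer, the cost-effectiveness ceiling cited above
cycles = int(budget_s // cycle_time_s)
print(f"{cycles} cycles in {budget_s:.0f} s -> ~{cycles * growth_per_cycle_nm:.1f} nm of film")

Batch furnaces and plasma activation, mentioned below, attack the same arithmetic from the wafers-per-cycle and cycle-time sides, respectively.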

Some process adjustments can improve throughput. Silicon dioxide ALD often uses large furnaces to process many wafers at once. Plasma activation can ionize reagents and accelerate film formation. Still, Gassilloud estimates that 10nm is the maximum practical thickness for ALD films.

As transistors shrink, though, the number of layers in that thickness range is increasing. Transistor structures also are becoming more complex, requiring deposition on vertical surfaces, into deep trenches, and other places not readily accessible by line-of-sight PVD methods. Replacement gates for gate-all-around transistors, for instance, need a process that can fill nanometer-scale cavities.

As noted above, HfO2 was the first successful application of ALD in semiconductor manufacturing. Its precursors, HfCl4 and water, are both chemically simple small molecules, whose by-products are volatile and easily removed. Such simple chemistries are the exception, though. ALD of silicon dioxide typically uses aminosilane precursors.⁠[1] Metal nitrides often have complex metal-organic precursor gases. Gassilloud noted that ligands might be added to a precursor molecule to change its vapor pressure or reactivity, or to facilitate adhesion to the substrate. In selective deposition processes, discussed below, ligands might improve selectivity between growth and non-growth surfaces. These larger molecules can be difficult to insinuate into smaller features, and byproducts can be difficult to remove. Complex byproducts can also become a contamination source.

One of the advantages of ALD is its very low process temperature, typically between 200°C and 300°C. It is thermally compatible with both transistor and interconnect processes in CMOS, as well as with deposition on plastic and other novel substrates. Even so, Aditya Kumar and colleagues at GlobalFoundries showed that precise temperature control is important.[2] TDMAT (tetrakis(dimethylamino)titanium) condensation in a TiN deposition process was a significant source of particle defects. To maintain the desired process temperature, both the precursor and purge gas temperatures matter. Introducing cold purge gas into a warm process chamber can cause rapid condensation.

As ALD has become a mainstream process, the industry has found applications for it beyond core device materials, in a variety of sacrificial and spacer layers. For example, double- and quadruple-patterning schemes often use ALD for “pitch-doubling.” By depositing a spacer material on either side of a patterned “mandrel,” then removing the mandrel, the process can cut the original pitch in half without the need for an additional, more costly lithography step.[3]

Fig. 1: Self-aligned double patterning with ALD spacers. Source: Creative Commons

Depositing a doped oxide on the vertical silicon fins of a finFET device is a less directional and less damaging alternative to ion implantation.[4]

Selective deposition brings lateral control
These last two examples depend on surface characteristics to mediate deposition. A precursor might adhere more readily to a hard mask than to the underlying material. The vertical face of a silicon fin might offer more (or fewer) adsorption sites than the horizontal face. Selective deposition on more complicated structures may require a pre-deposited growth template, functionalizing substrate regions to encourage or discourage growth. Selective deposition is especially important in interconnect applications. In general, though, a comprehensive review by Rong Chen and colleagues at Huazhong University of Science and Technology explained that selective deposition methods need to replenish the template material as the film grows, while also providing a mechanism to selectively remove the unwanted material.[5]

For example, tungsten preferentially deposits on silicon relative to SiO2, but the selectivity diminishes after only a few cycles. Researchers at North Carolina State University successfully re-passivated the oxide by incorporating hydrogen into the tungsten precursor.[6] Similarly, a group at Argonne National Laboratory found that SiO2 preferentially deposited on SiO2 relative to other oxides for only 10 to 15 cycles. Adding acetylacetone (“Precursor C”) as an inhibitor every 5 to 10 cycles restored selectivity.[7]

Alternatively, or in addition, atomic layer etching (ALE) might be used to remove unwanted material. ALE operates in the same step-by-step manner as ALD. The first half of a cycle reacts with the existing surface, weakening the bond to the underlying material. Then, a second step — typically ion bombardment — removes the weakened layer. For example, in ALE etching of silicon, chlorine gas reacts with the surface to form various SiClx compounds. The chlorination process weakens the inter-silicon bonds between the surface and the bulk, and the chlorinated layer is easily sputtered away. The layer-by-layer nature of ALE depends on preferential removal of the surface material relative to the bulk (SiClx vs. Si in this case). The “ALE window” is the combination of energy and temperature at which the surface layer is completely removed without damaging the underlying material.

Somewhat counter-intuitively, Keren Kanarik and colleagues at Lam Research found that higher ion energies actually expanded the ALE window for silicon etching. High ion energies with short exposure times delayed the onset of silicon sputtering relative to conventional RIE.[8]

Adding and subtracting, one atomic layer at a time
For a long time, the semiconductor industry has been looking for alternatives to process schemes that deposit material, pattern it, then etch most of it away. Wouldn’t it be simpler to only deposit the material we will ultimately need? Meanwhile, atomic layer deposition has been filling the spaces under nanosheets and inside cavities. Bulk deposition and etch tools are still with us, and will be for the foreseeable future. In more and more cases, though, those tools provide the frame while ALD and ALE processes fill in the details.

References

  1. Wenling Li, et al., “Impact of aminosilane and silanol precursor structure on atomic layer deposition process,” Applied Surface Science, Vol. 621, 2023, 156869. https://doi.org/10.1016/j.apsusc.2023.156869
  2. Kumar, et al., “ALD TiN Surface Defect Reduction for 12nm and Beyond Technologies,” 2020 31st Annual SEMI Advanced Semiconductor Manufacturing Conference (ASMC), Saratoga Springs, NY, USA, 2020, pp. 1-4. doi: 10.1109/ASMC49169.2020.9185271
  3. Shohei Yamauchi, et al., “Extendibility of self-aligned type multiple patterning for further scaling,” Proc. SPIE 8682, Advances in Resist Materials and Processing Technology XXX, 86821D (29 March 2013). https://doi.org/10.1117/12.2011953
  4. Kalkofen, et al., “Atomic layer deposition of phosphorus oxide films as solid sources for doping of semiconductor structures,” 2018 IEEE 18th International Conference on Nanotechnology (IEEE-NANO), Cork, Ireland, 2018, pp. 1-4. doi: 10.1109/NANO.2018.8626235
  5. Rong Chen, et al., “Atomic level deposition to extend Moore’s law and beyond,” Int. J. Extrem. Manuf. 2, 022002 (2020). doi: 10.1088/2631-7990/ab83e0
  6. B. Kalanyan, et al., “Using hydrogen to expand the inherent substrate selectivity window during tungsten atomic layer deposition,” Chem. Mater. 28, 117-126 (2016). https://doi.org/10.1021/acs.chemmater.5b03319
  7. A. Yanguas-Gil, J. A. Libera, and J. W. Elam, “Modulation of the growth per cycle in atomic layer deposition using reversible surface functionalization,” Chem. Mater. 25, 4849-4860 (2013). https://doi.org/10.1021/cm4029098
  8. Keren J. Kanarik, et al., “Universal scaling relationship for atomic layer etching,” J. Vac. Sci. Technol. A 39, 010401 (2021). doi: 10.1116/6.0000762

The post Enabling Advanced Devices With Atomic Layer Processes appeared first on Semiconductor Engineering.

  • ✇Semiconductor Engineering
  • Powering The Automotive Revolution: Advanced Packaging For Next-Generation Vehicle ComputingPrasad Dhond
    Automotive processors are rapidly adopting advanced process nodes. NXP announced the development of 5 nm automotive processors in 2020 [1], Mobileye announced EyeQ Ultra using 5 nm technology during CES 2022 [2], and TSMC announced its “Auto Early” 3 nm processes in 2023 [3]. In the past, the automotive industry was slow to adopt the latest semiconductor technologies due to reliability concerns and lack of a compelling need. Not anymore. The use of advanced processes necessitates the use of adva
     

Powering The Automotive Revolution: Advanced Packaging For Next-Generation Vehicle Computing

April 18, 2024, 09:06

Automotive processors are rapidly adopting advanced process nodes. NXP announced the development of 5 nm automotive processors in 2020 [1], Mobileye announced EyeQ Ultra using 5 nm technology during CES 2022 [2], and TSMC announced its “Auto Early” 3 nm processes in 2023 [3]. In the past, the automotive industry was slow to adopt the latest semiconductor technologies due to reliability concerns and lack of a compelling need. Not anymore.

The use of advanced processes necessitates the use of advanced packaging as seen in high performance computing (HPC) and mobile applications because [4][5]:

  1. While transistor density has skyrocketed, I/O density has not increased proportionally and is holding back chip size reductions.
  2. Processors have heterogeneous, specialized blocks to support today’s workloads.
  3. Maximum chip sizes are limited by the slowdown of transistor scaling, photo reticle limits and lower yields.
  4. Cost per transistor improvements have slowed down with advanced nodes.
  5. Off-package dynamic random-access memory (DRAM) throttles memory bandwidth.

These have been drivers for the use of advanced packages like fan-out in mobile and 2.5D/3D in HPC. The same drivers are now slowly but surely showing up in automotive compute units across a variety of vehicle architectures (see figure 1).

Fig. 1: Vehicle E/E architectures. (Image courtesy of Amkor Technology)

Vehicle electrical/electronic (E/E) architectures have evolved from 100+ distributed electronic control units (ECUs) to 10+ domain control units (DCUs) [6]. The most recent architecture introduces zonal or zone ECUs that are clustered in physical locations in cars and connect to powerful central computing units for processing. These newer architectures improve scalability, cost, and reliability of software-defined vehicles (SDVs) [7]. The processors in each of these architectures are more complex than those in the previous generation.

Multiple cameras, radar, lidar and ultrasonic sensors and more feed data into the compute units. Processing and inferencing this data require specialized functional blocks on the processor. For example, the Tesla Full Self-Driving (FSD) HW 3.0 system on chip (SoC) has central processing units (CPUs), graphics processing units (GPUs), neural network processing units, Low-Power Double Data Rate 4 (LPDDR4) controllers and other functional blocks – all integrated on a single piece of silicon [8]. Similarly, Mobileye EyeQ6 has functional blocks of CPU clusters, accelerator clusters, GPUs and an LPDDR5 interface [9]. As more functional blocks are introduced, the chip size and complexity will continue to increase. Instead of a single, monolithic silicon chip, a chiplet approach with separate functional blocks allows intellectual property (IP) reuse along with optimal process nodes for each functional block [10]. Additionally, large, monolithic pieces of silicon built on advanced processes tend to have yield challenges, which can also be overcome using chiplets.
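A back-of-the-envelope yield calculation shows why. The sketch below uses a simple Poisson yield model with an assumed defect density and assumed die areas; the exact numbers are illustrative only.

    # Back-of-the-envelope die yield comparison using a simple Poisson model,
    # Y = exp(-A * D0). Defect density and die areas are assumptions.
    import math

    def poisson_yield(area_mm2, d0_per_cm2):
        return math.exp(-(area_mm2 / 100.0) * d0_per_cm2)   # 100 mm^2 = 1 cm^2

    d0 = 0.1                                   # defects per cm^2 (assumed)
    big, small = 600.0, 150.0                  # monolithic SoC vs. one of four chiplets, mm^2
    y_big, y_small = poisson_yield(big, d0), poisson_yield(small, d0)

    print(f"{big:.0f} mm^2 monolithic die yield: {y_big:.1%}")    # ~54.9%
    print(f"{small:.0f} mm^2 chiplet die yield:  {y_small:.1%}")  # ~86.1%
    # With known-good-die testing, a defect scraps one small chiplet instead of
    # the whole 600 mm^2 SoC, so far less good silicon is thrown away per defect.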

Current advanced driver-assistance systems (ADAS) applications require a DRAM bandwidth of less than 60 GB/s, which can be supported with standard double data rate (DDR) and LPDDR solutions. However, ADAS Level 4 and Level 5 will need up to 1024 GB/s memory bandwidth, which will require the use of solutions such as Graphics DDR (GDDR) or High Bandwidth Memory (HBM) [11][12].
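The arithmetic behind that jump is straightforward. The sketch below uses nominal per-channel and per-stack figures (assumed here for illustration) to show how many interfaces each memory technology needs to reach roughly 1,024 GB/s.

    # Rough arithmetic behind the L4/L5 memory-bandwidth requirement quoted above.
    # Per-channel and per-stack figures are nominal data-sheet values (assumed).
    lpddr5_channel = 6.4 * 16 / 8     # 6400 MT/s x 16-bit channel = 12.8 GB/s
    hbm3_stack     = 6.4 * 1024 / 8   # 6.4 Gb/s/pin x 1024-bit stack = 819.2 GB/s

    target_gb_s = 1024                # L4/L5 target from the text
    print(f"LPDDR5 16-bit channels needed: {target_gb_s / lpddr5_channel:.0f}")  # 80
    print(f"HBM3 stacks needed: {target_gb_s / hbm3_stack:.2f}")                 # ~1.25
    # Dozens of LPDDR channels are impractical to route and power, which is why
    # GDDR or HBM becomes attractive at these bandwidths.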

Fig. 2: Automotive compute package roadmap. (Image courtesy of Amkor Technology)

Automotive processors have been using Flip Chip BGA (FCBGA) packages since 2010. FCBGA has become the mainstay of several automotive SoCs, such as EyeQ from Mobileye, Tesla FSD and NVIDIA Drive. Consumer applications of FCBGA packaging started around 1995 [13], so it took more than 15 years for this package to be adopted by the automotive industry. Computing units in the form of multichip modules (MCMs) or System-in-Package (SiP) have also been in automotive use since the early 2010s for infotainment processors. The use of MCMs is likely to increase in automotive compute to enable components like the SoC, DRAM and power management integrated circuit (PMIC) to communicate with each other without sending signals off-package.

As cars move to a central computing architecture, the SoCs will become more complex and run into size and cost challenges. Splitting these SoCs into chiplets becomes a logical solution, and packaging these chiplets using fan-out or 2.5D packages becomes necessary. Just as FCBGA and MCMs transitioned into automotive from non-automotive applications, so will fan-out and 2.5D packaging for automotive compute processors (see figure 2). The automotive industry is cautious, but the architecture changes described above are pushing faster adoption of advanced packages. Materials, processes, and factory controls are key considerations for successful qualification of these packages in automotive compute applications.

In summary, the automotive industry is adopting advanced semiconductor technologies, such as 5 nm and 3 nm processes, which require the use of advanced packaging due to limitations in I/O density, chip size reductions, and memory bandwidth. Processors in the latest vehicle E/E architectures are more complex and require specialized functional blocks to process data from multiple sensors. As cars move to the central computing architecture, the SoCs will become more complex and run into size and cost challenges. Splitting these SoCs into chiplets becomes a logical solution and packaging these chiplets using fan-out or 2.5D technology becomes necessary.

Sources

  1. NXP. “NXP Selects TSMC 5nm Process for Next-Generation High-Performance Automotive Platform.” NXP, https://www.nxp.com/company/about-nxp/nxp-selects-tsmc-5nm-process-for-next-generation-high-performance-automotive-platform:NW-TSMC-5NM-HIGH-PERFORMANCE.
  2. Mobileye. “Mobileye at CES 2022.” Mobileye, https://www.mobileye.com/news/mobileye-ces-2022-tech-news/.
  3. Business Wire. “TSMC Showcases New Technology Developments at 2023 Technology Symposium.” Business Wire, https://www.businesswire.com/news/home/20230426005359/en/TSMC-Showcases-New-Technology-Developments-at-2023-Technology-Symposium.
  4. Swaminathan, Raja. “Advanced Packaging: Enabling Moore’s Law’s Next Frontier Through Heterogeneous Integration.” HotChips33, https://hc33.hotchips.org/assets/program/tutorials/2021%20Hot%20Chips%20AMD%20Advanced%20Packaging%20Swaminathan%20Final%20%2020210820.pdf
  5. SemiAnalysis. “Advanced Packaging Part 1,” SemiAnalysis, https://www.semianalysis.com/p/advanced-packaging-part-1-pad-limited?utm_source=%2Fsearch%2Fadvanced%2520packaging&utm_medium=reader2.
  6. McKinsey & Company. “Getting Ready for Next-Generation EE Architecture with Zonal Compute.” McKinsey & Company, https://www.mckinsey.com/industries/semiconductors/our-insights/getting-ready-for-next-generation-ee-architecture-with-zonal-compute.
  7. NXP. “How Zonal E/E Architectures with Ethernet are Enabling Software-Defined Vehicles.” NXP, https://www.nxp.com/company/blog/how-zonal-e-e-architectures-with-ethernet-are-enabling-software-defined-vehicles:BL-HOW-ZONAL-EE-ARCHITECTURES.
  8. WikiChip. “Tesla (Car Company)/FSD Chip.” WikiChip, https://en.wikichip.org/wiki/tesla_(car_company)/fsd_chip.
  9. Mobileye. “EyeQ Chip.” Mobileye, https://www.mobileye.com/technology/eyeq-chip/.
  10. Ziadeh, Bassam. “Driving Adoption of Advanced IC Packaging in Automotive Applications.” Presentation at IMAPS DPC, March 2023. General Motors, Fountain Hills AZ, March 16, 2023.
  11. Matthias Jung and Norbert Wehn. “Driving Against the Memory Wall: The Role of Memory for Autonomous Driving.” Fraunhofer IESE, Kaiserslautern, Germany, and Microelectronic Systems Design Research Group, University of Kaiserslautern, Kaiserslautern, Germany. Kluedo, https://kluedo.ub.rptu.de/frontdoor/deliver/index/docId/5286/file/_memory.pdf.
  12. Micron. “Cinco de Play: Memory – Is That Critical to Autonomous Driving?” Micron, https://www.micron.com/about/blog/2017/october/cinco-play-memory-is-that-critical-to-autonomous-driving.
  13. McKinsey & Company. “Advanced Chip Packaging: How Manufacturers Can Play to Win.” McKinsey & Company, https://www.mckinsey.com/industries/semiconductors/our-insights/advanced-chip-packaging-how-manufacturers-can-play-to-win.

The post Powering The Automotive Revolution: Advanced Packaging For Next-Generation Vehicle Computing appeared first on Semiconductor Engineering.

  • ✇Semiconductor Engineering
  • Enabling New Applications With SiC IGBT And GaN HEMT For Power Module DesignShela Aboud
    The need to mitigate climate change is driving a need to electrify our infrastructure, vehicles, and appliances, which can then be charged and powered by renewable energy sources. The most visible and impactful electrification is now under way for electric vehicles (EVs). Beyond the transition to electric engines, several new features and technologies are driving the electrification of vehicles. The number of sensors in a vehicle is skyrocketing, driven by autonomous driving and other safety fea
     

Enabling New Applications With SiC IGBT And GaN HEMT For Power Module Design

April 18, 2024, 09:05

The need to mitigate climate change is driving a need to electrify our infrastructure, vehicles, and appliances, which can then be charged and powered by renewable energy sources. The most visible and impactful electrification is now under way for electric vehicles (EVs). Beyond the transition to electric engines, several new features and technologies are driving the electrification of vehicles. The number of sensors in a vehicle is skyrocketing, driven by autonomous driving and other safety features, while a modern software-defined vehicle (SDV) is electrifying everything from air-conditioned seats to self-parking technology.

An important technology for EVs and SDVs is power modules. These are high-voltage devices that convert one form of electricity to another (e.g., AC to DC), which is necessary to convert the vehicle battery energy into a current that can run the vehicle’s electrical system, including the drive train. These modules handle the highest power loads in the vehicle and are rated at thousands of volts. The design of power devices, which are the fundamental electronic components of the power modules, is therefore crucial, because a bad design can lead to catastrophic events.

Power devices, much more than other types of electrical devices, are designed for specific applications. In comparison, logic transistors can be used in everything from toasters to smartphones. Not only does the architecture of power devices change with higher voltages, different power ratings, or higher switching frequencies, but the material can change as well.

New power requirements need wide-band gap materials

To meet new and future power demands for EVs, electric infrastructure, and other novel electrical systems, wide-band gap (WBG) materials are being developed and introduced. Silicon carbide (SiC) IGBTs are now available and being deployed, while gallium nitride (GaN) HEMTs are a promising technology that is still in the development stage.

Fig. 1: Power density vs. switching frequency of power devices based on different materials.

Continuing with our EV example, SiC inverters can generally increase the potential range by approximately 10%, even after accounting for other design considerations. In addition, increasing the drive train voltage from the 400V range to 800V can cut charging times roughly in half. These voltages are only practical with wide-band gap materials like SiC-based power devices. Tesla introduced SiC MOSFETs into the Model 3 back in 2018. Since then, numerous automotive manufacturers have also adopted SiC in their EVs, including Hyundai and BMW.
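The charging-time claim follows from simple power arithmetic: for a given cable and connector current limit, doubling the pack voltage doubles the charging power. The numbers below are illustrative, not tied to any particular vehicle or charger.

    # Why an 800 V drive train roughly halves DC fast-charge time: for the same
    # charging-cable current limit, power scales with voltage. Numbers are
    # illustrative assumptions, and real charging curves taper near full charge.
    cable_current_a = 500            # assumed thermal limit of the cable/connector
    battery_kwh = 100                # assumed usable pack energy

    for pack_v in (400, 800):
        power_kw = pack_v * cable_current_a / 1000
        minutes = battery_kwh / power_kw * 60
        print(f"{pack_v} V pack: {power_kw:.0f} kW -> ~{minutes:.0f} min for a full charge")
    # 400 V pack: 200 kW -> ~30 min ; 800 V pack: 400 kW -> ~15 min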

GaN still has many design hurdles to overcome to improve reliability and decrease cost, but if it can be made affordable, perhaps the next generation of EVs will allow for charging in seconds with ranges of thousands of miles.

Simulating power devices

Because of the huge number of design parameters, simulation is important in the design of power devices. One crucial part of device design is the calculation of the breakdown voltage, the voltage at which the device fails destructively (it can essentially melt or catch fire) and will never operate again. These simulations need to be highly physics-based and capture the mechanisms by which electrons can be released or absorbed by the crystal lattice of these materials. The larger band gaps of WBG materials like SiC and GaN increase the breakdown voltage. In addition, these materials have a smaller effective electron mass (the effective mass of an electron in a material dictates how fast it will move in an electric field), which makes the switching frequency of devices based on these WBG materials faster.
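A first-order, one-dimensional estimate shows how strongly breakdown voltage depends on the critical electric field, and therefore on the band gap. The sketch below uses rough textbook material constants and an assumed drift-region doping; real device simulation accounts for field-dependent ionization, junction termination, traps, and more.

    # Simple 1D estimate of avalanche breakdown voltage for a non-punch-through,
    # one-sided abrupt junction: BV = eps * Ec^2 / (2 * q * Nd). Critical fields
    # are rough textbook values; doping is an assumption for illustration.
    Q = 1.602e-19        # C
    EPS0 = 8.854e-14     # F/cm

    materials = {        # (relative permittivity, critical field V/cm, band gap eV)
        "Si":     (11.7, 3.0e5, 1.12),
        "4H-SiC": (9.7,  2.2e6, 3.26),
        "GaN":    (9.0,  3.3e6, 3.40),
    }

    nd = 1e16            # drift-region doping, cm^-3 (assumed)
    for name, (eps_r, ec, eg) in materials.items():
        bv = eps_r * EPS0 * ec**2 / (2 * Q * nd)
        print(f"{name:6s} Eg={eg:4.2f} eV  Ec={ec:.1e} V/cm  BV ~ {bv:7.0f} V")
    # The roughly 10x higher critical field of the wide-band-gap materials gives
    # a 50-100x higher breakdown voltage at the same doping, since BV scales with Ec^2.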

A critical area of all electronics design is variability and reliability. Device performance needs to be stable and last a long time. A key factor for variability and reliability is defects in the crystal lattice. These defects, or traps, act as charge centers that can drastically impact how well a device works. Simulation can also help to identify the types of traps, providing a mechanistic understanding of how the traps will impact the device physics. Recently, Synopsys issued a paper using first-principles quantum solutions to characterize specific traps in SiC with QuantumATK.

Going forward, wind energy, solar, home appliances, and even the electric grid itself are going to need new devices with different structures and materials. The future is extremely exciting for power devices, which can be found in our EVs and will soon power a huge range of applications across our society.

The post Enabling New Applications With SiC IGBT And GaN HEMT For Power Module Design appeared first on Semiconductor Engineering.

  • ✇Semiconductor Engineering
  • Exploring Process Scenarios To Improve DRAM Device PerformanceYu De Chen
    In the world of advanced semiconductor fabrication, creating precise device profiles (edge shapes) is an important step in achieving targeted on-chip electrical performance. For example, saddle fin profiles in a DRAM memory device must be precisely fabricated during process development in order to avoid memory performance issues. Saddle fins were introduced in DRAM devices to increase channel length, prevent short channel effects, and increase data retention times. Critical process equipment set
     

Exploring Process Scenarios To Improve DRAM Device Performance

April 18, 2024, 09:04

In the world of advanced semiconductor fabrication, creating precise device profiles (edge shapes) is an important step in achieving targeted on-chip electrical performance. For example, saddle fin profiles in a DRAM memory device must be precisely fabricated during process development in order to avoid memory performance issues. Saddle fins were introduced in DRAM devices to increase channel length, prevent short channel effects, and increase data retention times. Critical process equipment settings like etch selectivity, or the gas ratio of the etch process, can significantly impact the shape of fabricated saddle fin profiles. These process and profile changes have significant impact on DRAM device performance. It can be challenging to explore all possible saddle fin profile combinations using traditional silicon testing, since wafer-based testing is time-consuming and expensive. To address this issue, virtual fabrication software (SEMulator3D) can be used to test different saddle fin profile shapes without the time and cost of wafer-based development. In this article, we will review an example of using virtual fabrication for DRAM saddle fin profile development. We will also assess DRAM device performance under different saddle fin profile conditions. This methodology can be used to guide process and integration teams in the development of process recipes and specifications for DRAM devices.

The challenge of exploring different profiles

Imagine that you are a DRAM process engineer, and have received nominal process conditions, device specifications and a target saddle fin profile for a new DRAM design. You would like to explore some different process options and saddle fin profiles to improve the performance of your DRAM device. What should you do? This is a common situation for integration and process engineers during the early R&D stages of DRAM process development.

Traditional methods of exploring saddle fin profiles are difficult and sometimes impractical. These methods involve the creation of a series of unique saddle fin profiles on silicon wafers. The process is time-consuming, expensive, and in many cases impractical, due to the large number of scenarios that must be tested.

One solution to these challenges is to use virtual fabrication. SEMulator3D allows us to create and analyze saddle fin profiles within a virtual environment and to subsequently extract and compare device characteristics of these different profiles. The strength of this approach is its ability to accurately simulate the real-world performance of these devices while doing so faster and less expensively than wafer-based testing.

Methodology

Let’s dive into the methodology behind our approach:

Creating saddle fin profiles in a virtual environment

First, we input the design data and process flow (or process steps) for our device in SEMulator3D. The software can then generate a “virtual” 3D DRAM structure and provide a visualization of saddle fin profiles (figure 1). In figure 1(a), a full 3D DRAM structure including the entire simulation domain is displayed. To enable detailed device study, we have cropped a small portion of the simulation domain from this large 3D area. In figure 1(b), we have extracted a cross sectional view of the saddle fin structure, which can be modified by varying a set of multi-etch steps in the process model. The section of the saddle fin that we would like to modify is identified as the “AA” (active area). We can finely tune the etch taper angle, AA/fin CD, fin height, taper angle and additional nominal device parameters to modify the AA profile.

Fig. 1: Process flow set up by SEMulator3D: (a) DRAM structure and (b) Cross section view of saddle fin along with key specifications of the saddle fin profile.

Using the structures that we have built in SEMulator3D, we can next assign dopants and ports to the simulated structure and perform electrical performance evaluation. Accurately assigning dopant species, and defining dopant concentrations within the structure, is critical to ensuring the accuracy of our simulation. In figure 2(a), we display a dopant concentration distribution generated in SEMulator3D.

Ports are contact points in the model which are used to apply or extract electrical signals during a device study. Proper assignment of the ports is very important. Figure 2(b) provides an example of port assignment in our test DRAM structure. By accurately assigning the ports and dopants, we can extract the device’s electrical characteristics under different process scenarios.

Fig. 2: (a) Dopant concentration and (b) port assignments at the drain, source and gate (in blue).

Manufacturability validation

It is important to ensure that our simulation models match real world results. We can validate our model against cross-sectional images (SEM or TEM images) from an actual fabricated device. To ensure that our simulated device matches the behavior of an actual manufactured chip, we can create real silicon test wafers containing DRAM structures with different saddle fin profiles. To study different saddle fin profiles, we will use different etch recipes on an etch machine to vary the DRAM wordline etch step. This allows us to create specific saddle fin profiles in silicon that can be compared to our simulated profiles. A process engineer can change etch recipes and easily create silicon-based etch profiles that match simulated cross section images, as shown in figure 3. In this case, the engineer created a nominal (Process of Record) profile, a “round” profile (with a rounded top), and a triangular shaped profile (with a triangular top). This wafer-based data is not only used to test electrical performance of the DRAM under different saddle fin profile conditions, but can also be fed back into the virtual model to calibrate the model and ensure that it is accurate during future use.

Fig. 3: Cross section images vs. models: (a) Nominal condition (Process of Record), (b) Round profile and (c) Triangle profile.

Device simulation and validation

In the final stage of our study, we will review the electrical simulation results for different saddle fin profile shapes. Figure 4 displays simulated electrical performance results for the round profile and triangular saddle fin profile. For each of the two profiles, the value of the transistor Subthreshold Swing (SS), On Current (Ion), and Threshold Voltage (Vt) are displayed, with the differences shown. Process integration engineers can use this type of simulation to compare device performance using different process approaches. The same electrical performance differences (trend) were seen on actual fabricated devices, validating the accuracy and reliability of our simulation approach.

Fig. 4: Device electrical simulation results: the transistor performance difference between the Round and Triangular Saddle Fin profile is shown for Subthreshold Swing (SS), On Current (Ion) and Threshold Voltage (Vt).

Conclusions

SEMulator3D provides numerous benefits for the semiconductor manufacturing industry. It allows process integration teams to understand device performance under different process scenarios, and lets them easily explore new processes and architectural opportunities. In this article, we reviewed an example of how virtual fabrication can be used to assess DRAM device performance under different saddle fin profile conditions. Figure 5 displays a summary of the virtual fabrication process, and how we used it to understand, optimize and validate different process scenarios.

Fig. 5: Summary of virtual fabrication process.

Virtual fabrication can be used to guide process and integration teams in the development of process recipes and specifications for any new memory or logic device, and to do so at greater speed and lower cost than silicon-based experimentation.

The post Exploring Process Scenarios To Improve DRAM Device Performance appeared first on Semiconductor Engineering.

  • ✇Semiconductor Engineering
  • Advanced Packaging Design For Heterogeneous IntegrationCP Hung
    As device scaling slows down, a key system functional integration technology is emerging: heterogeneous integration (HI). It leverages advanced packaging technology to achieve higher functional density and lower cost per function. With the continuous development of major semiconductor applications such as AI HPC, edge AI and autonomous electrical vehicles, traditional chips are transforming into smaller, well-partitioned chiplets that require chip-to-chip interconnections to be denser, faster an
     

Advanced Packaging Design For Heterogeneous Integration

By: CP Hung
April 18, 2024, 09:03

As device scaling slows down, a key system functional integration technology is emerging: heterogeneous integration (HI). It leverages advanced packaging technology to achieve higher functional density and lower cost per function. With the continuous development of major semiconductor applications such as AI HPC, edge AI and autonomous electrical vehicles, traditional chips are transforming into smaller, well-partitioned chiplets that require chip-to-chip interconnections to be denser, faster and more reliable. This boosts the demand for heterogeneous integration, elevating demand for innovative advanced packaging technologies.

HI uses advanced packaging to integrate chiplets with heterogeneous designs and process nodes into a single package. This allows enterprises to choose optimum process nodes for specific system demands, such as 3nm for computing chiplets, 7nm for radio frequency chiplets, or to quickly produce super chips with specific functions in a cost-effective manner. HI not only aims for higher interconnection density, but also integrates various functional components, such as logic chips, sensors, memory, and others, which are needed to complete the whole system in one package. Overall energy efficiency and performance is greatly improved, while package size can be significantly reduced.

Advanced packaging solutions for AI HPC

The typical high-density advanced package size for AI cloud computing processors is 55mm x 55mm or more, and contains a 5-2-5 (top 5 layers, middle 2 layers, bottom 5 layers) advanced substrate, or even up to 11-2-11 wiring layers. Chiplets can be interconnected by fan-out technology with silicon bridge or 2.5D with Si Interposer as the integration platform. Through this technique, industry aims to gain more computing power within the same space.

ASE provides high-density packaging solutions, including Flip Chip Ball Grid Array (FCBGA), Fan Out Chip-on-Substrate (FOCoS), FOCoS-Bridge and 2.5D. The chip-to-chip interconnections in FCBGA are accomplished through the BGA substrate, and its minimum L/S (line width/line spacing) is only about 10μm/10μm. The very popular and in-demand CoWoS (Chip on Wafer on Substrate) is a 2.5D packaging technology that uses RDL (redistribution layer) on a Si interposer to connect chiplets, and its L/S can be significantly reduced to 0.5μm/0.5μm.

In the Si interposer of a 2.5D package, all the chiplets are connected in a side-by-side arrangement. As the required number of chiplets increases, the interposer area becomes larger and larger, resulting in fewer and fewer Si interposer chips that can be made from each 12-inch wafer (generally less than 50). This significantly increases the manufacturing cost of 2.5D packaging. However, not all applications require 0.5μm/0.5μm L/S, so ASE came up with FOCoS, which uses fan-out technology’s RDL to integrate different chiplets with an L/S down to 2μm/2μm. This gives the market a lower-cost alternative. In addition, ASE’s FOCoS-Bridge technology uses a silicon bridge to provide high-density routing for interconnecting different chips (such as logic chips and memory) in areas that require high-speed transmission, and uses fan-out RDL to integrate the other areas. As such, it delivers the flexibility of both 0.5μm/0.5μm and 2μm/2μm L/S design, while achieving a significant increase in packaging density and bandwidth.
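A quick gross-die-per-wafer estimate illustrates the economics. The interposer sizes below are assumptions, and the standard edge-loss approximation is used.

    # Rough gross-die-per-wafer estimate showing why large 2.5D Si interposers
    # get expensive. Interposer sizes are assumptions; the usual edge-loss
    # correction is used: DPW ~ pi*d^2/(4*A) - pi*d/sqrt(2*A).
    import math

    def gross_dies_per_wafer(die_area_mm2, wafer_d_mm=300):
        return (math.pi * wafer_d_mm**2 / (4 * die_area_mm2)
                - math.pi * wafer_d_mm / math.sqrt(2 * die_area_mm2))

    for side in (30, 40, 50):                  # square interposer edge length, mm
        area = side * side
        print(f"{side} x {side} mm interposer: ~{gross_dies_per_wafer(area):.0f} per 300 mm wafer")
    # 30x30: ~56, 40x40: ~28, 50x50: ~15 -- consistent with "generally less than 50"
    # once the interposer grows well past a single reticle field.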

High performance chip-package-system co-design

To achieve the aforementioned high bandwidth, the chip, package, and entire system must be designed together for holistic optimization, instead of just considering the individual parts. When using electronic design automation (EDA) for design optimization, consideration must be given to how the signal changes along the entire transmission path, including Cu pillars, RDL fine lines, TSVs, μbumps, etc. Eye diagrams can then be used to analyze the SerDes link’s electrical performance. When designing differential pairs for high-speed signals, it is necessary to reduce return and insertion loss, especially in the operating frequency band. From chip to package to the entire system, Taiwan’s manufacturing advantage lies in the ability to accomplish the turnkey design process from beginning to end.

Providing more computing power with less energy

The industry is currently focused on optimizing energy efficiency. One of the key questions being asked is whether the power regulation and decoupling components, which were previously located on the system board, can be moved closer to the package or processor chip. There is even talk of redesigning the on-chip power delivery network (PDN), including supplying power directly from the backside of the chip (Backside PDN).

Power integrity design for power delivery network (PDN)

Optimizing power integrity and minimizing noise can be achieved by strategically positioning the capacitor. Ideally, the capacitor should be placed as close to the chip as possible, but this is dependent on the capacitor’s size and the manufacturing process, both of which can impact cost and performance. Traditional surface-mount technology (SMT) capacitors are relatively large, but chip-level silicon capacitors (Si-Cap) are now available that offer decent capacitance values.
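A common first-order way to budget this is a target impedance for the whole power delivery path. The numbers in the sketch below are assumptions chosen only to show the scale of the problem.

    # First-order power-integrity budget: the PDN (VRM + board + package +
    # Si-Cap/on-die capacitance) should stay below a target impedance so the
    # worst-case transient current keeps supply ripple within spec.
    vdd = 0.75             # core supply, V (assumed)
    ripple_pct = 0.05      # allowed ripple, 5% of Vdd (assumed)
    i_transient = 80.0     # worst-case current step, A (assumed for a large AI die)

    z_target = vdd * ripple_pct / i_transient
    print(f"Target PDN impedance: {z_target * 1000:.3f} mOhm up to the relevant frequency")
    # ~0.47 mOhm -- hitting this at high frequency is what pushes decoupling
    # capacitance from the board into the package and, with Si-Caps, next to the die.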

UCIe (Universal Chiplet Interconnect Express) Consortium

Traditionally, there are many standard communication protocols (such as Block-to-Block, Memory Bus, or Interconnection Interface Protocols) at the chip level and the board level for system designers. Industry protocols that specify package-level integration are growing, especially given the need for a universal interface for chiplet integration using 2.5D and FOCoS packaging technologies.

In March 2022, Intel invited upstream and downstream manufacturers in the semiconductor industry chain to form the UCIe Consortium, and a standardized data transmission architecture for chiplet integration was introduced to reduce the cost of advanced packaging design. ASE is proud to be a founding member (Promoter member).

ASE offers a diverse range of advanced packaging types. We have developed packaging design specifications that can be integrated with foundry solutions specifications as well as the system requirements of original equipment manufacturers (OEMs) and cloud service providers to create a comprehensive UCIe package standard. The standard can assist in realizing ubiquitous chiplet heterogeneous integration for HPC applications using various advanced packaging technology architectures, such as 2.5D, 3D, FOCoS, Fan-out, EMIB, CoWoS, etc. Headquartered in Taiwan, ASE is enthusiastically participating in the formulation of international standards and relentlessly providing integrated solutions to the global industry.

Heterogeneous integration has been in development for many years. It can be used to integrate not only homogeneous and heterogeneous chiplets but also other passive and active components including connectors, into a single package. Achieving this requires not only advanced packaging technologies but also design and testing coordination. ASE offers a comprehensive one-stop service solution that includes system design, packaging, and testing to help customers shorten chip design cycles and accelerate product innovation.

The post Advanced Packaging Design For Heterogeneous Integration appeared first on Semiconductor Engineering.

  • ✇Semiconductor Engineering
  • eBeam Initiative Marks Major Milestones Over 15 Years Of Photomasks And LithographyJan Willis
    The eBeam initiative celebrated its 15th anniversary at the recent SPIE Advanced Lithography + Patterning Conference. 130 members of the mask and lithography community attended the annual lunch to mark the milestone. The eBeam Initiative welcomed its 53rd member, FUJIFILM Corporation, having grown from 20 members and advisors at its launch. FUJIFILM is the first company from the chemical supply chain and recognizes the defacto role resist plays to the eBeam community. Two of our founding members
     

eBeam Initiative Marks Major Milestones Over 15 Years Of Photomasks And Lithography

April 18, 2024, 09:02

The eBeam Initiative celebrated its 15th anniversary at the recent SPIE Advanced Lithography + Patterning Conference. 130 members of the mask and lithography community attended the annual lunch to mark the milestone. The eBeam Initiative welcomed its 53rd member, FUJIFILM Corporation, having grown from 20 members and advisors at its launch. FUJIFILM is the first company from the chemical supply chain to join, recognizing the de facto role resist plays in the eBeam community. Two of our founding members, Matthias Slodowski of Vistec Electron Beam and Aki Fujimura, CEO of D2S and co-founder of the eBeam Initiative, made presentations that illustrated the evolution of eBeam and lithography technologies over the past 15 years. In fact, at the end of Aki’s recap of eBeam innovation, there was a standing ovation recognizing how much the mask and lithography segments have achieved in that time.

Fig. 1: Aki Fujimura, CEO of D2S and co-founder of the eBeam Initiative, highlights major eBeam technology innovations over the past 15 years of the eBeam Initiative.

Aki recognized over 80 individuals who have contributed educational content over the past 15 years, whether it be a presentation, video, article or panel discussion. His timeline presentation highlighted key lithography and photomask developments over this period. In the first 5-6 years, the focus was on alternative lithography solutions while the world waited to see if EUV became real. In particular, eBeam direct write was a hot topic. It’s nice to see that 15 years later, that approach has found new applications in advanced packaging, chip security, and high-performance computing to name a few, as Matthias highlighted in a video of his talk (figure 2).

Fig. 2: Matthias Slodowski of Vistec Electron Beam, a founding member of the eBeam Initiative, recaps how cell projection has evolved and describes optical and photonics mastering applications. 

In 2012, the annual eBeam Initiative survey of industry luminaries was launched, asking for their opinions about key trends and requirements in the future. While many of them couldn’t answer questions publicly about what they think, they could answer anonymously to project future trends. In 2015, the survey reflected the turning point for EUV, several years before it became a reality.

Another important development happening at the same time as EUV was the pivot to develop multi-beam mask writers, which offer constant write time no matter what shapes are written. This was a contributing factor to the next major trend, emerging from 2019 onward: curvilinear masks. In addition to multi-beam mask writing providing the speed to write curvilinear masks, you need fast full-chip curvilinear ILT software (think ILT in a day) to derive the mask patterns from the shapes you actually get on wafers. As of the 2023 Luminaries survey, 87% of respondents thought leading photomask manufacturers could meet the demand for curvilinear masks.

On the day of the lunch, the eBeam Initiative published the fourth annual Deep Learning survey of members’ products and applications using deep learning (DL) in the photomask-to-wafer manufacturing flow. Two new members reported this year, DNP and Synopsys, bringing the total to 15 members reporting. Tom Cecil of Synopsys describes in a new eBeam Initiative video how three key areas of AI-driven EDA – optimization, analytics, and generative AI – can be useful across the design-to-manufacturing space, including within the mask synthesis domain (figure 3). Looking at the DL report, it’s clear that speed is the key benefit of applying DL, and many applications involve SEM images. But DL is also used for accuracy, as some members like ASML have pointed out at past lunch events.

Fig. 3: Tom Cecil from Synopsys on AI-Driven EDA.

If the past 15 years is any predictor, the eBeam community will continue to evolve solutions in ways we haven’t thought of yet. In the mid-term, the capability of manufacturing to deliver curvilinear masks is enabling the design community to pursue curvilinear design. Thanks to the diversity of the solutions on the design side, like chiplets, there will be new opportunities for the manufacturing chain. Some of the luminaries in the eBeam community spoke about this and other projections for the next 15 years in a new video (figure 4).

Fig. 4: Luminaries share their perspective on the eBeam Initiative past and future.

15 years ago, the eBeam Initiative focused on the message that eBeam writes all chips. That’s still true today and will be true in 15 years.

The post eBeam Initiative Marks Major Milestones Over 15 Years Of Photomasks And Lithography appeared first on Semiconductor Engineering.

  • ✇Semiconductor Engineering
  • Intel, And Others, InsideEd Sperling
    Intel this week made a strong case for how it will regain global process technology leadership, unfurling an aggressive technology and business roadmap that includes everything from several more process node shrinks that ultimately could scale into the single-digit angstrom range to a broad shift in how it approaches the market. Both will be essential for processing the huge amount of data for AI everywhere, and to win back some of the market share that NVIDIA currently wields. Intel’s strategy
     

Intel, And Others, Inside

February 22, 2024, 09:30

Intel this week made a strong case for how it will regain global process technology leadership, unfurling an aggressive technology and business roadmap that includes everything from several more process node shrinks that ultimately could scale into the single-digit angstrom range to a broad shift in how it approaches the market. Both will be essential for processing the huge amount of data for AI everywhere, and to win back some of the market share that NVIDIA currently wields.

Intel’s strategy is many layers thick. It includes a long list of innovations, from backside power delivery, which significantly reduces congestion and noise in more than a dozen metal layers, to 2.5D and 3D-ICs using a standard architecture for internally developed and third-party chiplets and IP. And it encompasses everything from a broad and open ecosystem — a sharp departure from its past own-it-all strategy, which once prompted anti-competitive charges — to a willingness to work more closely with customers, competitors, and governments to achieve its goals.

Whether Intel delivers on that promise remains to be seen. But the breadth of its vision, particularly for Intel Foundry, and the ambitious delivery schedule represent a sharp change in direction and culture for the 56-year-old chipmaker.

Of particular note is how the company’s internal silos are being deconstructed. Intel CEO Pat Gelsinger said there will be a clean line between Intel products and Intel Foundry, but noted that the foundry will offer developments and innovations from the product teams. “The à la carte menu is wide open for the industry,” he said during a Q&A session. “Clearwater Forest, which I showed today, is a construct that was innovated by my Xeon team — how to do hybrid bonding, Intel three base dies, 18A top die, being able to solve a lot of the CoWoS/Foveros problems using EMIB and hybrid bonding. That will become a set of collaterals that will benefit the foundry. They’re going to sell that constructional opportunity as a better way to build AI chips. So clearly, I’m taking product group intellectual property and leveraging it on the foundry side.”

This may sound like business as usual for a foundry, but Intel for years waffled over its commitment to a foundry model. In fact, it was viewed as an IDM that only began selling wafers to customers as the cost of maintaining its own fabs began to skyrocket. Creating separation between the two is a fundamental shift, and it’s an essential one for building trust in a complex world of sometimes partners, sometimes competitors. In the past, the company closely guarded its IP, confining it to its own processors rather than unbundling it and selling it to potential competitors. With this new division, Intel potentially can generate profits both from customized chips that leverage technologies it develops internally for other applications, and from its own chip business.

Gelsinger is adamant about this being a leading-edge technology company, not a supplier of all components required in systems. “I’m not going to solve 200mm supply chain issues,” he said. “And, by the way, there are not going to be more 200mm factories built, for the most part, outside of specialty like SiC. There are some crazy discussions, like when some Europeans say, ‘I don’t need a leading-edge factory in Europe. Give me a 40nm node.’ What a stupid statement. It takes 5 years to build a new 40nm node, which is already 20 years old, and to make it economically viable, we’re going to have to run it 30 more years. Move the designs to more modern nodes as opposed to expecting to build old factories that are already out of date.”

Put in perspective, Intel is basically taking the Apple approach to chipmaking. How it fares against the other two leading-edge foundries isn’t clear at this point. Until 22nm, which was 16/14nm for Samsung and TSMC, Intel was the front runner in process technology. Several nodes later, it was trailing both Samsung and TSMC. The company is on a mission to regain its leadership.

“Great industries have two or three strong players,” said Gelsinger. “TSMC is a great company, and we’re going to build a great foundry, as well. And we’re going to challenge each other to further greatness. They are the best company in terms of customer support, bar none, in the industry, and they do not have a legacy of leadership technologies. They implement technologies with great customer support. We have a deep legacy of leadership technologies across domains that we created…At our innovation conference in September, I showed three companies participating in chiplet standardization — Synopsys, Intel, and TSMC. With our test chips — with our test chips. The world wants chiplets across a range of suppliers. I think we’ll be doing some of that. I think they’re going to be doing some of that. And I want to make sure that our mutual customers have great choice and technology benefits.”

If successful, Intel Foundry may not be the lowest-cost chipmaker, as TSMC founder Morris Chang has intimated. But if it can win back some key customers, and attract a lot of new ones with better performance, higher energy efficiency, and more customization options, that may not matter. What does matter is execution on the roadmap, and the number of companies rallying around Intel appears to indicate that significant changes are afoot, and that the competitive landscape could be a lot more fluid than it was several years ago.

The post Intel, And Others, Inside appeared first on Semiconductor Engineering.

  • ✇Semiconductor Engineering
  • UCIe Goes Back To The Drawing BoardGregory Haley
    The integration of multiple dies within a single package increasingly is viewed as the next evolution for extending Moore’s Law, but it also presents myriad challenges — particularly in achieving a universally accepted standard integrating plug-and-play chiplets from different vendors. “In some respects, people are already doing this,” says Debendra Das Sharma, Intel senior fellow and chair of the UCIe Consortium. “They’re putting multiple dies on the same package, and we’ve been doing it for de
     

UCIe Goes Back To The Drawing Board

February 22, 2024, 09:08

The integration of multiple dies within a single package increasingly is viewed as the next evolution for extending Moore’s Law, but it also presents myriad challenges — particularly in achieving a universally accepted standard integrating plug-and-play chiplets from different vendors.

“In some respects, people are already doing this,” says Debendra Das Sharma, Intel senior fellow and chair of the UCIe Consortium. “They’re putting multiple dies on the same package, and we’ve been doing it for decades back to what was multi-chip modules (MCMs). And if you look in our mainstream CPUs today, they’re all multiple chips on the same package.”

Combining more than one chip in a package becomes a lot more complicated, however, when those chips have different functions or come from different vendors or foundries. That’s where a standard like UCIe becomes necessary.

“For most of the multi-chip products that are in the market, the same company is designing and providing the multiple dies, so they know exactly how they talk to each other and how to divide or partition the chip,” says Vik Chaudhry, senior director of product marketing and business development at Amkor. “That makes it a little easier to understand how one part talks to the other. What UCIe is trying to do is standardize that interconnect between multiple vendors.”

While other protocols like Bunch of Wires (BoW) have made significant strides in recent years and are still being developed, UCIe stands out for its backing by many of the largest chip manufacturers and its support for all major packaging technologies, including organic substrates, silicon interposers, and RDL fan-outs.

Fig. 1: Chiplet diagram with UCIe interconnect highlighted. Source: Keysight

But the move toward UCIe compatibility necessitates more than a mere afterthought in the chip creation process. It requires a foundational shift back to the drawing board, where compatibility must be conceived as an integral component of the chip, not retrofitted as an expedient solution. As this standard evolves, it has become increasingly apparent that for chiplets to truly embrace UCIe, the blueprint for their design must be reimagined from the ground up.

“UCIe is a layout,” says Chaudhry. “It’s designed. But keep in mind, these chiplets can be from different fab nodes. One could be 5nm, another could be 3nm, and a third could be 14nm. Somehow you have to connect these dies together. You need to be compatible in terms of how much space you have to run the routes, and that’s what UCIe is addressing.”

The transition to UCIe is not merely about different vendors adapting to a new standard. It requires a willingness among manufacturers throughout the industry to align their design and production processes with a common protocol that is still, in many respects, a work in progress.

While it is commonly assumed that chiplets plus advanced packaging represent the next evolution for extending Moore’s Law, the lack of a fully defined standard, coupled with the uncertainty surrounding the integration with existing technologies, means investing in new designs for UCIe is currently limited to the largest players in the market.

“Anytime you put multiple dies on a substrate or interposer, it’s challenging,” adds Chaudhry. “As we are seeing AI come into the picture, we are seeing a lot of vendors putting multiple dies on a chip, and not just 3 or 4, but 8, 10, or 12 dies. The complexity exponentially grows as you have more and more dies on the same interposer or substrate. You also have to test everything in between, and that increases the complexity and cost. That’s a huge challenge for anybody, and right now only a few companies in the world are capable of committing those kinds of resources and those kinds of expenses to put a line together.”

Moreover, the adoption of UCIe still must overcome significant hurdles in terms of scalability, compatibility with existing systems, and ensuring that the cost implications do not outweigh the benefits.

The chiplet evolution
Large chipmakers have been constrained by the size of the reticle field for at least the last several process nodes, which sharply limited the number of features that could be crammed onto a planar SoC. Today, with node shrinks becoming more costly and challenging, the best solution available is to decompose the SoC into individual blocks, or chiplets.

“Once the dies become really big, you’re up against the reticle limit,” says Intel’s Das Sharma. “That’s where you will see a lot of people deploying chiplets. You’re basically having multiple sets of chips being packaged together to deliver a certain set of functionality.”

Take, for instance, the leap to 50 Tb per second switches that are challenging the limits of reticle size. There’s a growing need to dissect and distribute the functionality of these chips across multiple components. Whether it’s the I/O, memory, or SRAM, the key lies in strategically breaking down the SoC into smaller units. This not only makes the manufacturing process more feasible, but also opens doors to more innovative and efficient design architectures.

It also provides some immediate benefits. Smaller dies yield better than larger ones, which is why in 2012 Xilinx split its 28nm FPGA into four different dies, connected through an interposer. It also provides room to grow, because the individual chiplets are still well below the reticle limit.

But all of the early implementations were homogeneous. They were all developed by the same vendor using the same process technology. A big benefit of advanced packaging is the ability to combine heterogeneous chiplets in the same package, allowing analog circuits and less-critical features to be developed at whatever process node makes sense. This is the challenge facing large chipmakers, foundries, and OSATs today, and it’s one that has not yet been fully solved.

Nevertheless, the chip industry agrees on one thing. There needs to be a common way to connect all of these chiplets together, and this is where UCIe fits into the picture.

The UCIe standard
Achieving a consensus on the electrical characteristics that underpin the UCIe is akin to orchestrating a symphony with a multitude of instruments, each with its own acoustic signature. Ensuring that chiplets from different corners of the industry can connect and communicate efficiently necessitates bridging gaps in voltage levels, signal timing, and power distribution.

In March 2022, the UCIe Consortium released UCIe 1.0, which included specifications for a standardized physical die-to-die interface designed to facilitate seamless communication between chiplets, regardless of where they were manufactured or by whom. The specifications encompassed key aspects such as electrical properties, physical dimensions, and the protocols necessary for ensuring compatibility and efficient data transfer between diverse chip components.

“On advanced packages at 45 microns, the numbers are pretty stellar,” says Das Sharma. “You have 188 gigabytes per second per square millimeter as a starting point, up to 1.35 terabytes per second per square millimeter. People will have a hard time even absorbing that kind of bandwidth and processing it.”
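Those density figures start from the raw module bandwidth. As a rough sketch, using the lane counts and signaling rates publicly described for UCIe 1.0 modules (16 data lanes for a standard-package module, 64 for an advanced-package module, at 4 GT/s to 32 GT/s), the per-module bandwidth works out as follows; the per-square-millimeter densities quoted above additionally depend on bump pitch and PHY footprint.

    # Rough raw-bandwidth arithmetic for one UCIe module, using the lane counts
    # and data rates publicly summarized for the 1.0 spec. Area and shoreline
    # are not modeled here, so this is not the density figure itself.
    def module_bandwidth_gbytes(lanes, gt_per_s):
        return lanes * gt_per_s / 8.0          # GB/s per direction, raw

    for pkg, lanes in (("standard (x16)", 16), ("advanced (x64)", 64)):
        lo, hi = module_bandwidth_gbytes(lanes, 4), module_bandwidth_gbytes(lanes, 32)
        print(f"{pkg}: {lo:.0f} - {hi:.0f} GB/s per module per direction")
    # standard: 8 - 64 GB/s ; advanced: 32 - 256 GB/s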

UCIe 1.0 uses a layered protocol approach. The physical layer underpins the protocol stack, dedicated to defining and managing electrical signaling, such as clock synchronization and link training, while also incorporating sideband communication channels essential for non-data interactions between chiplets.

At the heart of UCIe’s mechanics is the Die-to-Die (D2D) adapter. This interface acts as the gatekeeper, managing link state and facilitating parameter negotiation between chiplets, which is crucial for establishing reliable chiplet communication. It optionally extends a safeguard for data integrity through mechanisms like cyclic redundancy check (CRC) and link-level retry capabilities. This not only ensures accuracy in high-speed data transfer, but also aligns different chiplet protocols by providing an arbitration system that enables multiple chips to interact efficiently.

“UCIe is pretty flexible in that way,” says Chaudhry. “It supports your PCIe protocols, CXL protocol, or streaming, so you can decide which protocol you want to support. And it has different data rates that it supports. It’s the lowest common denominator that everybody will support. If you’re on a 3nm process, you can support a much higher data rate, but if the other chiplet is at a different process node, then both the parts will support the basic lowest common denominator of the spec, and then you can talk on that.”
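Conceptually, that negotiation looks like the minimal sketch below. It is not the actual UCIe link-training flow, just an illustration of how two dies advertising different capabilities settle on a common protocol and the highest mutually supported data rate.

    # Minimal sketch (not the actual UCIe link-training flow) of the "lowest
    # common denominator" idea: two dies advertise supported protocols and data
    # rates, and the link comes up with a protocol both sides support at the
    # fastest common rate.
    def negotiate(local, remote):
        protocols = [p for p in local["protocols"] if p in remote["protocols"]]
        rates = sorted(set(local["rates_gt_s"]) & set(remote["rates_gt_s"]))
        if not protocols or not rates:
            raise RuntimeError("no common protocol/data rate -- link cannot train")
        return protocols[0], rates[-1]       # preferred protocol, highest common rate

    die_3nm  = {"protocols": ["CXL", "PCIe", "streaming"], "rates_gt_s": [4, 8, 12, 16, 24, 32]}
    die_14nm = {"protocols": ["PCIe", "streaming"],        "rates_gt_s": [4, 8, 12, 16]}

    print(negotiate(die_3nm, die_14nm))      # ('PCIe', 16)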

UCIe also incorporates strategies to mitigate interconnect defects, such as stuck-at faults and signal discontinuities. Stipulations within UCIe include the implementation of auxiliary pathways, furnishing a means to maintain connectivity if the primary lanes fail. This redundancy helps sustain system functionality by providing avenues for fault tolerance and repair.
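The repair concept can be pictured as remapping traffic from failed lanes onto spares. The sketch below is a generic illustration of that idea, with arbitrary lane and spare counts rather than UCIe’s actual module widths:

    # Generic lane-repair illustration: route around lanes that fail training
    # by remapping them onto spare lanes. Lane counts here are arbitrary examples.
    def build_lane_map(num_lanes, spare_lanes, failed_lanes):
        """Return {logical_lane: physical_lane}, or None if repair is impossible."""
        spares = [lane for lane in spare_lanes if lane not in failed_lanes]
        mapping = {}
        for logical in range(num_lanes):
            if logical not in failed_lanes:
                mapping[logical] = logical
            elif spares:
                mapping[logical] = spares.pop(0)   # steer traffic to a spare
            else:
                return None                        # more failures than spares
        return mapping

    print(build_lane_map(num_lanes=16, spare_lanes=[16, 17], failed_lanes={3, 9}))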

UCIe also embraces existing standards such as PCI Express (PCIe) and Compute Express Link (CXL) natively, ensuring a broad resonance across the industry by capitalizing on these well-established protocols. The layered approach of UCIe also encompasses comprehensive usage models.

In August 2023, the consortium published UCIe version 1.1, extending reliability mechanisms to more protocols and supporting additional usage models. These enhancements are not merely incremental. They are geared towards pivotal segments such as automotive, which is gravitating toward chiplets.

One key area where the evolution from UCIe 1.0 to 1.1 becomes evident is the standard’s preventive monitoring features. UCIe 1.1 adds new registers designed to capture detailed eye margin information, covering both width and height, which provides standardized reporting formats and proactive link health monitoring. Rather than reinventing the wheel, UCIe 1.1 leverages the existing periodic parity Flit injection mechanism from version 1.0, enhancing error detection and reporting capabilities through a new error log register. That, in turn, allows for a better assessment of when link repair is necessary. UCIe 1.1 also offers enhancements for compliance testing.

Another notable aspect is the advent of new and emerging usages, particularly streaming protocols. Whereas UCIe 1.0’s support for such protocols was restricted to Raw Mode, UCIe 1.1 extends the utility of the die-to-die (D2D) adapter on the FDI interface to streaming protocols. This extension enables CRC/retry and power management features for those protocols and facilitates the coexistence of multiple protocols.

UCIe 1.1 also considers cost optimization for advanced packaging solutions in anticipation of shrinking bump pitches and the advent of 3D integration. The introduction of additional column arrangements in UCIe 1.1 creates broader opportunities for mix-and-match dies.

“In a chiplet environment, the dies are very close to each other and your shoreline is very limited,” says Chaudhry. “You have limited space to connect the dies, and how the pins are connected, facing each other, becomes critical. That is one thing that UCIe is addressing. What should the pin locations be? Whether it’s 6-, 8-, or 16-column, how do you arrange it so that a vendor with an 8-column configuration can talk to one with a 12-column configuration and connect to it physically, not just in terms of pins, but also connectivity and shoreline compatibility?”

Designing for interoperability
There are a number of technical hurdles that still stand in the way of widespread adoption of UCIe. These include the need for precise electrical conformity, predictable signaling behavior, and systematic physical interconnects that cater to a variety of nodes and manufacturing processes.

“You can also have HBM in there, which can be very tall compared to a single ASIC,” says Amkor’s Chaudhry. “How do you address those height differences? A lot of different issues come into play when you’re putting different dies and different chiplets together.”

Thermal management is also a key element for high-density packaging. Disparate process nodes inevitably present distinct power profiles and heat dissipation characteristics. Bridging these gaps necessitates innovative heat distribution methodologies and sophisticated warpage control to ensure structural integrity and reliable function in complex modules.

“There are a lot of challenges in thermal,” adds Chaudhry. “When you have two dies from different process nodes, how do you make sure that you have a way to dissipate the power equally? Those are some of the challenges as we go along, and there’s no general solution to that yet. Those are the kinds of things the consortium is looking at right now.”

Continued evolution
Another goal of the UCIe consortium is to ensure that anyone developing a chiplet today will still be able to use that design five years from now, despite progress in the standard during that time.

“It will absolutely evolve,” adds Chaudhry. “PCI did the same thing. They are on Gen 5 or Gen 6 now. USB is the same way with USB 4.0 coming soon. CXL is at 3.1. We expect the same thing to happen to UCIe. It will continuously improve and come up with new and more flexible solutions that our members can adopt.”

“The more people get involved, the more they’re going to start tweaking things,” adds Das Sharma. “Some of them are not going to work out, and some of them are going to work out really well. This is a multi-decade journey, and the key is to learn and adapt and keep moving on.”

Conclusion
The UCIe initiative aims to revolutionize chip package interconnectivity by emulating the success of Peripheral Component Interconnect Express (PCIe) at the PCB level. By facilitating direct inter-die connections within the chip package, UCIe endeavors to drastically cut power usage, enhance bandwidth efficiency, and, ultimately, reduce production costs.

“The good thing about UCIe is that it’s an open standard,” says Chaudhry. “In all, there are about 120 members, and all of them are working together. There are six different working groups that range from mechanical to electrical to security to software and marketing, where they are bringing up new things as they are developing their chiplet-based designs. A lot of things that have happened between UCIe 1.0 and 1.1 are basically due to their input.”

—Ed Sperling contributed to this report.

The post UCIe Goes Back To The Drawing Board appeared first on Semiconductor Engineering.

Building CFETs With Monolithic And Sequential 3D

By: Katherine Derbyshire

Successive versions of vertical transistors are emerging as the likely successor to finFETs, combining lower leakage with significant area reduction.

A stacked nanosheet transistor, introduced at N3, uses multiple channel layers to maintain the overall channel length and necessary drive current while continuing to reduce the standard cell footprint. The follow-on technology, the CFET, pushes further up the z axis, stacking n-channel and p-channel transistors on top of each other, rather than side by side.

In work presented at December’s IEEE International Electron Devices Meeting (IEDM), researchers at TSMC estimated that CFETs give a 1.5X to 2X overall size reduction at constant gate dimensions. [1] Those are significant area benefits for any digital logic, but manufacturing these new transistor structures will be a challenge.

Monolithic 3D integration is the simplest integration scheme, and the one likely to see production first. In monolithic 3D integration, the entire structure is assembled on a single piece of silicon. This approach can also be used to fabricate compute-in-memory designs where memory devices are fabricated as part of the metallization layers for a conventional CMOS circuit. While individual layers in monolithic 3D designs can incorporate new technologies — the integration of ReRAM devices, for example — the overall CMOS flow is preserved. All of the materials and processes used must be compatible with that rubric.

Adding more nanosheets for complementary devices
The overall process in this kind of scheme is similar to a stacked nanosheet transistor flow. It starts with a stack of eight or more alternating silicon and silicon germanium layers (four pairs), compared to a stacked nanosheet NFET or PFET, which might have only four such layers (two pairs). In a CFET flow, however, a middle dielectric layer is inserted halfway through the stack.

This layer, separating the n-type and p-type transistors, is probably the most important difference from a standard nanosheet transistor flow. To minimize parasitic capacitance, the middle dielectric layer should be as thin as possible, said imec’s Naoto Horiguchi. If it’s too thin, though, edge placement errors can cause isolation failures, landing contacts for the top devices onto bottom devices. [2]

In TSMC’s process, the Si/SiGe superlattice includes a high-germanium SiGe layer as a placeholder for the middle dielectric. After the source/drain etch, a highly selective etch removes this layer, and the silicon on either side of it is then oxidized to form the middle dielectric.

The inner spacer recess etch, which follows middle dielectric formation in the TSMC process, indents the SiGe layers relative to the silicon nanosheets, defining the gate length and junction overlap.

While TSMC emphasized it has not yet made fully metallized integrated CFET circuits, it did report that more than 90% of the transistors survived.

Fig. 1: TSMC used monolithic integration to stack NFET and PFET devices. [1]

Depositing the nanosheet stack is straightforward. Etching it with the precision required is not. A less-than-vertical etch profile will change the relative channel lengths of the top and bottom devices, leading to asymmetric switching characteristics.

Stacking wafers for more flexibility
The alternative, sequential 3D integration, is a bit more flexible. While monolithic 3D integration uses a single device layer, sequential 3D integration bonds an additional tier on top of the first. Sequential 3D integration is different from three-dimensional wafer-level packaging and chip stacking, though. In WLP, the component devices are finished, passivated, and tested. The component chips are fully functional as independent circuits. In sequential 3D integration, the two tiers are part of a single integrated circuit.

Often, though not always, the second tier is an unprocessed bare wafer with no devices at all. Ionut Radu, director of research and external collaborations at Soitec, said his company used its SmartCut process to transfer sub-micron silicon layers. [3] One of the advantages of sequential integration, though, is that it opens the door to other possibilities. For example, the second layer could use a different silicon lattice orientation to facilitate stress engineering for improved carrier mobility. It also could use an alternative channel material, such as GaAs or a two-dimensional semiconductor. And up until the transfer occurs, processing of the second wafer has no effect on the thermal budget of the first.

After bonding, the second tier’s process temperature generally must remain below 500°C. Tadeu Mota-Frutuoso, process integration engineer at CEA-Leti, said researchers were able to achieve this benchmark in a conventional CMOS process by using laser annealing for the source/drain activation steps. [4]

While sequential 3D integration can be used to realize CFET devices, the top layers also can contain independent circuitry. Still, as in monolithic integration, forming the dielectric layer between the two circuit tiers is a critical process step. Researchers at KAIST found that reducing the thickness of the interlayer dielectric improves heat dissipation. It also facilitates the use of a bottom gate to control the top tier devices. On the other hand, the dielectric layer lies at the interface between the original wafer and the transferred layer. Thickness control depends on the polishing step used to prepare the transfer surface. Such precise control is extremely challenging for CMP. [5]

Re-driving wafers without contamination
While the second circuit tier can be added at any point in the process flow, the insertion point constrains not only the first and second tier devices, but also the fab as a whole. When the second layer does not yet contain devices, alignment to the first layer is relatively easy. In contrast, Horiguchi said, aligning one device wafer on top of another imposes an area penalty to accommodate potential overlay error. The total device thickness of sequential 3D structures tends to be greater, as well.

Returning a first-tier wafer with contacts and other metallization to FEOL tools for fabrication of a second transistor layer poses a substantial cross contamination risk. Even if the top surface is well encapsulated, Mota-Frutuoso explained in an interview that the sidewalls and bevels of the bottom tier can still expose metal layers to FEOL processes. CEA-Leti’s proposed bevel contamination wrap (BCW) scheme first cleans the wafer edge, then encapsulates it and the sidewall in a protective oxide layer.

"Fig.

Fig. 2: CEA-Leti’s sequential 3D integration stacked silicon CMOS on an industrial 28nm FDSOI wafer. [4]

Controlling heat dissipation
Heat dissipation is a major challenge for both monolithic and sequential 3D devices. Generalizations are difficult because thermal characteristics depend on the specific integration scheme and even the circuit design. Wei-Yen Woon, senior manager at TSMC, and his colleagues evaluated AlN and diamond as possible thermal dissipation layers. While both have been used in power devices, they are new to CMOS process flows. They achieved good-quality columnar AlN films with a low-temperature sputtering process, though the columnar structure did impede in-plane heat transport. While diamond offers extremely high thermal conductivity, it also can require extremely high process temperatures. The TSMC group deposited thin films with acceptable quality at BEOL-compatible temperatures by using pre-deposited diamond nuclei, but they have not yet attempted to integrate these films with working devices. [6]

What’s next?
In the short term, monolithic 3D integration offers a relatively straightforward path to CFET fabrication, building on existing nanosheet transistor process flows. Even proponents of sequential 3D integration expect the monolithic approach to reach production first. For the longer term, though, the ability to use a completely different material for the second device layer gives device designers many more process optimization knobs.

However it is achieved, the idea that active devices no longer need to confine themselves to a single planar layer has implications far beyond logic transistors. From compute-in-memory modules to image sensors, 3D integration is an important tool for “More than Moore” devices.

References

[1] S. Liao et al., “Complementary Field-Effect Transistor (CFET) Demonstration at 48nm Gate Pitch for Future Logic Technology Scaling,” 2023 International Electron Devices Meeting (IEDM), San Francisco, CA, USA, 2023, pp. 1-4, doi: 10.1109/IEDM45741.2023.10413672.

[2] N. Horiguchi et al., “3D Stacked Devices and MOL Innovations for Post-Nanosheet CMOS Scaling,” 2023 International Electron Devices Meeting (IEDM), San Francisco, CA, USA, 2023, pp. 1-4, doi: 10.1109/IEDM45741.2023.10413701.

[3] I. Radu et al., “Ultimate Layer Stacking Technology for High Density Sequential 3D Integration,” 2023 International Electron Devices Meeting (IEDM), San Francisco, CA, USA, 2023, pp. 1-4, doi: 10.1109/IEDM45741.2023.10413807.

[4] T. Mota-Frutuoso et al., “3D sequential integration with Si CMOS stacked on 28nm industrial FDSOI with Cu-ULK iBEOL featuring RO and HDR pixel,” 2023 International Electron Devices Meeting (IEDM), San Francisco, CA, USA, 2023, pp. 1-4, doi: 10.1109/IEDM45741.2023.10413864.

[5] S. K. Kim et al., “Role of Inter-Layer Dielectric on the Electrical and Heat Dissipation Characteristics in the Heterogeneous 3D Sequential CFETs with Ge p-FETs on Si n-FETs,” 2023 International Electron Devices Meeting (IEDM), San Francisco, CA, USA, 2023, pp. 1-4, doi: 10.1109/IEDM45741.2023.10413845.

[6] W. Y. Woon et al., “Thermal dissipation in stacked devices,” 2023 International Electron Devices Meeting (IEDM), San Francisco, CA, USA, 2023, pp. 1-4, doi: 10.1109/IEDM45741.2023.10413721.

 

The post Building CFETs With Monolithic And Sequential 3D appeared first on Semiconductor Engineering.

Techniques To Identify And Correct Asymmetric Wafer Map Defects Caused By Design And Process Errors

By: James Kim
22 February 2024, 09:03

Asymmetries in wafer map defects are usually treated as random production hardware defects. For example, asymmetric wafer defects can be caused by particles inadvertently deposited on a wafer during any number of process steps. In this article, I want to share a different mechanism that can cause wafer defects: structural defects created by a biased deposition or etch process.

It can be difficult for a process engineer to determine the cause of downstream structural defects located at a specific wafer radius, particularly if these defects appear in varying directions or at different locations on the wafer. As a wafer structure is formed, process behavior can vary with the radial direction and the specific location on the wafer. Slight process differences at one location can be exaggerated as subsequent process steps accumulate at that location. In addition, process performance differences (such as variation in equipment performance) can also cause on-wafer structural variability.

In this study, structural defects will be virtually introduced on a wafer to provide an example of how structural defects can be created by differences in wafer location. We will then use our virtual process model to identify an example of a mechanism that can cause these types of asymmetric wafer map defects.

Methods

A 3D process model of a specific metal stack (Cu/TaN/Ta) on a warped wafer was created using SEMulator3D virtual fabrication (figure 1). After the 3D model was generated, electrical analysis of 49 sites on the wafer was completed.

In our model, an anisotropic barrier/liner (TaN/Ta) deposition process was used. Due to wafer tilting, there were TaN/Ta deposition differences seen across the simulated high aspect ratio metal stack. To minimize the number of variables in the model, Cu deposition was assumed to fill in an ideal manner (without voids). Forty-nine (49) corresponding 3D models were created at different locations on the wafer, to reflect differences in tilting due to wafer warping. Next, electrical simulation was completed on these 3D models to monitor metal line resistance at each location. Serpentine metal line patterns were built into the model, to help simulate the projected electrical performance on the warped wafer at different points on the same radius, and across different directions on the wafer (figure 2).
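The per-site workflow can be sketched generically: sample locations across the wafer, derive a local tilt from an assumed warp profile, and feed that tilt into the per-site deposition and resistance models. The short Python sketch below illustrates only the first two steps with a simple bowl-shaped warp; the warp magnitude and sampling grid are assumptions for illustration, not values from the SEMulator3D models used in this study.

    import math

    # Generic sketch of the per-site workflow: wafer warp -> local tilt per site.
    # The parabolic warp profile and its magnitude are assumptions, not the
    # calibrated process/electrical models used in the study.
    WAFER_RADIUS_MM = 150.0
    WARP_UM = 100.0          # assumed center-to-edge warp for a bowl-shaped wafer

    def local_tilt_deg(x_mm, y_mm):
        """Tilt of a parabolic (bowl-shaped) warp surface at (x, y), in degrees."""
        # z(r) = WARP_UM * (r / R)^2  ->  dz/dr = 2 * WARP_UM * r / R^2  (um per mm)
        r = math.hypot(x_mm, y_mm)
        slope_um_per_mm = 2.0 * WARP_UM * r / WAFER_RADIUS_MM**2
        return math.degrees(math.atan(slope_um_per_mm * 1e-3))  # um/mm -> mm/mm

    # 7 x 7 grid of sites (49 locations), mirroring the study's sampling plan
    sites = [(x, y) for x in range(-120, 121, 40) for y in range(-120, 121, 40)]
    for x, y in sites[:3]:
        print(f"site ({x:4d},{y:4d}) mm  tilt = {local_tilt_deg(x, y):.4f} deg")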


Fig. 1: Anisotropic liner/barrier metal deposition on a tilted structure caused by wafer warping.


Fig. 2: Resistance extraction simulation and cross section analysis.

Using only incoming structure and process behavior, we can develop a behavioral process model and extend our device performance predictions and behavioral trend analysis outside of our proposed process window range. In the case of complicated processes with more than one mechanism or behavior, we can split processes into several steps and develop models for each individual process step. There will be phenomena or behavior in manufacturing which can’t be fully captured by this type of process modeling, but these models provide useful insight during process window development.

Results

Of the forty-nine 3D models, the models on the far edge of the wafer were heavily tilted by wafer warpage. Interestingly, not all of the models at the same wafer radius exhibited the same behavior. This was due to the metal pattern design. With anisotropic deposition into high aspect ratio trenches, deposition in specific directions was blocked at certain locations in the trenches (depending upon trench depth and tilt angle). This affected both the device structure and electrical behavior at different locations on the wafer.

Since the metal lines extended along the x-axis, there were minimal differences when the wafer was tilted in the x-axis in our model. X-axis tilting created only a small difference in the thickness of the Ta/TaN relative to the Cu. However, when the wafer was tilted in the y-axis in our model, the high aspect ratio wall blocked Ta/TaN deposition due to the deposition angle. This lowered the volume of Ta/TaN deposited relative to Cu, which decreased the metal resistance and placed the resistance outside of our design specification.
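The direction of that resistance shift follows from simple parallel conduction: the Cu fill and the Ta/TaN liner share the line cross-section, so a thinner liner leaves more area for low-resistivity Cu. The sketch below illustrates the trend with assumed dimensions and bulk resistivities, not the values extracted in this study.

    # Parallel-conduction sketch: line resistance vs. barrier (Ta/TaN) thickness.
    # Dimensions and resistivities below are illustrative assumptions, not study values.
    RHO_CU = 1.7e-8       # ohm*m (bulk Cu; thin-film values are higher)
    RHO_BARRIER = 2.0e-7  # ohm*m (rough effective Ta/TaN value)

    def line_resistance(width_nm, height_nm, length_um, barrier_nm):
        """Treat the line as a Cu core and barrier shell conducting in parallel."""
        w, h, t = width_nm * 1e-9, height_nm * 1e-9, barrier_nm * 1e-9
        length = length_um * 1e-6
        cu_area = max(w - 2 * t, 0.0) * max(h - t, 0.0)   # barrier on sidewalls + bottom
        barrier_area = w * h - cu_area
        conductance = cu_area / (RHO_CU * length) + barrier_area / (RHO_BARRIER * length)
        return 1.0 / conductance

    for t in (4.0, 3.0, 2.0):   # thinner barrier (shadowed deposition) -> lower resistance
        print(f"barrier {t:.1f} nm -> R = {line_resistance(40, 80, 100, t):.1f} ohm")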

X-axis wafer tilting had little influence on the device structure. The resistance on the far edge of the x-axis did not significantly change and remained in-spec. Y-axis wafer tilting had a more significant influence on the device structure. The resistance on the far edge of the y-axis was outside of our electrical specification (figure 3).

Fig. 3: Electrical simulation results shown on a wafer map. Locations on the far edge of the y-axis exhibit out-of-spec resistance; simulated resistance varied between 40,430 and 40,438 ohm/sq across the wafer.

Conclusion

Even though wafer warpage occurs in a circular manner due to accumulated stress, unexpected structural failures can occur in different radial directions on the wafer due to variations in pattern design and process behavior across the wafer. From this study, we demonstrated that asymmetric structures caused by wafer warping can create top-bottom or left-right wafer performance differences, even though processes have been uniformly applied in a circular distribution across the wafer. Process simulation can be used to better understand structural failures that can cause performance variability at different wafer locations. A better understanding of these structural failure mechanisms can help engineers improve overall wafer yield, by taking corrective action (such as performing line scanning at specific wafer locations) or by adjusting specific process windows to minimize asymmetric wafer defects.

The post Techniques To Identify And Correct Asymmetric Wafer Map Defects Caused By Design And Process Errors appeared first on Semiconductor Engineering.

Utilizing Artificial Intelligence For Efficient Semiconductor Manufacturing

By: Vivek Jain

22 February 2024, 09:02

The challenges before semiconductor fabs are expansive and evolving. As the size of chips shrinks from nanometers to eventually angstroms, the complexity of the manufacturing process increases in response. It can take hundreds of process steps and more than a month to process a single wafer. It can subsequently take more than another month to go through the assembly, testing, and packaging steps necessary to get to the final product.

Artificial Intelligence (AI) can be deployed within a fab to address the complexity and intricacy of semiconductor manufacturing. A fab generates petabytes of data as wafers go through the multitude of process and test operations. This wealth of data also presents a challenge in that it needs to be analyzed and acted on quickly to ensure tight process control, high yield, and avoid process excursions. Beyond navigating the complexity of the manufacturing process, new solutions are necessary to help make the process as efficient as possible and the yield as high as possible to produce the most business value for fabs.

The benefits of AI-enabled analysis tools for IC manufacturers

Traditional techniques to detect issues in the manufacturing process have run out of steam, especially at advanced technology nodes. For example, an engineer must do their own yield analysis to seek out potential problems. Once they identify an issue, they communicate with the defect and process teams to determine the root cause and then troubleshoot it. The defect team begins work to find a correlation behind the issue, and the process team troubleshoots it and links it to the root cause.

All these steps take up significant time that could be focused on achieving the highest yield of chips possible, driving costs down and reducing time to market. One of the biggest benefits of enabling AI in analysis tools is that an engineer can quickly recognize and pinpoint an issue in a specific chip to see which process step and/or equipment has caused the issue.

Beyond the fast and accurate process control that AI allows for, there are numerous other benefits that result from the saved time and money, including:

  • Predictive applications: Enables fabs to take leap from reactive to predictive process control
  • Scalability: Analyzes petabytes of data, connects multiple fabs, and comes cloud-ready
  • Efficiency: Allows fab to make better decisions and reduce false alarms

To enable the next generation of manufacturing, Synopsys is enabling AI and Machine Learning (ML) for a comprehensive process control solution.

Actionable insights with AI and ML

Wafer, equipment, design, mask, test, and yield are silos within a fab that can benefit from a comprehensive AI/ML enabled solution. Such a solution can specifically help engineers generate actionable insights into the following:

  • Fault detection and classification (FDC)
  • Statistical process control (SPC)
  • Dynamic fault detection (DFD)
  • Defect classification and image analytics
  • Decision support system (DSS)

Fast analysis of petabytes of data, from equipment sensors or process parameters, allows manufacturers to quickly identify the root cause of process excursions and take action to maintain yield.

AI and ML in the fab

Synopsys is a provider of software solutions for silicon manufacturing and silicon lifecycle management, including TCAD, mask solutions, and manufacturing analytics. Its existing solutions are connected to thousands of pieces of equipment over multiple fabs with millions of sensors, analyzing hundreds of petabytes of data. By providing real-time visibility into the manufacturing process, Synopsys enables predictive analytics and optimizes product quality and yield to help give semiconductor fabs a leg up in this competitive landscape.

Synopsys has introduced an AI/ML-enabled software offering, Fab.da, to make semiconductor manufacturing more efficient. Fab.da is a part of the Synopsys EDA Data Analytics solution, which brings together data analytics and insights from the entire chip lifecycle.

It offers a complete data continuum by bringing together these different data types from many different sources into one platform for both advanced and mature node chips. This data continuum allows for high user productivity, maximum data scalability, and increased speed and accuracy in root cause analysis for issues.

Delivering process control solutions to manage complexity at leading-edge fabs, Fab.da can help chip designers and manufacturers drive operational excellence and productivity, providing a competitive edge in today’s manufacturing landscape.

The post Utilizing Artificial Intelligence For Efficient Semiconductor Manufacturing appeared first on Semiconductor Engineering.


Integrating Digital Twins In Semiconductor Operations

22 February 2024, 09:01

By Mark da Silva, Nishita Rao and Karim Somani

Chipmakers must adopt transformative technologies including Digital Twins (DT) to keep pace with unprecedented global semiconductor industry growth that is expected to drive its total market value to $1 trillion[1] as soon as 2030. Leveraging predictive modeling and other efficiency-enhancing innovations, DTs promise to optimize semiconductor design, manufacturing processes and equipment maintenance while improving overall operational efficiency.

With DTs rising in prominence as a critical enabler of industry growth, key players from across the semiconductor ecosystem – including OEMs, platforms and end users – gathered at the Semiconductor Digital Twin Workshop last December at SEMI headquarters in Milpitas, Calif. to discuss the latest DT developments and explore the path to advancing the technology.

Following are highlights from the sold-out event hosted by the SEMI Smart Manufacturing Initiative.

Key takeaways

  • Industry Alignment on DT Definition and Taxonomy
    • The semiconductor industry needs to align on the definition and taxonomy of DTs in semiconductor operations.
    • With collaboration crucial to advances in DTs, the industry must come together to develop a common understanding of the technology.
  • Data Sharing for Sustainability Improvements
    • Sharing data among various chip ecosystem players will be vital to driving sustainability improvements.
    • Focusing on equipment and operational DTs with sustainability in mind will help foster collaboration among industry stakeholders.
  • Advocacy for Standardized DT Architecture and Framework
    • A standardized DT framework architecture must be established to enhance interoperability, reliability, synchronization, and security.
    • The adoption of digital twin technical standards is in its early stages but increasing in importance as DT technology evolves.
    • Collaboration will be essential to accelerate the availability and adoption of several digital twin technical standards under development by SEMI and other Standards Development Organizations (SDOs).

Key challenges

  • Robust DT Framework and Overcoming Development Silos
    • Establishing a robust DT framework and overcoming isolated development silos in microelectronics are challenges the industry must overcome.
  • Managing Unclean Factory Data
    • Challenges include managing unclean factory data, varying data granularity, and addressing the lifecycle of data models.
  • Sharing Data Between Tools and Process Steps
    • Data sharing between various semiconductor tools and process steps must be seamless. Data provenance is critical for DT accuracy and validation.
  • Legacy Factories & Small/Medium Firms
    • Factories with older generation tools and processes have a unique challenge in developing process level DTs for existing products.

Workshop sessions

The workshop consisted of four sessions focused on DT efforts by equipment makers, solution providers, device makers, and factory integration providers.

Equipment-level digital twins session

The session focused on OEM efforts to develop tool-level DTs and highlighted the potential to improve efficiency, performance, and sustainability. The session also featured discussions on equipment-level data sharing, standards, and interoperability challenges that need to be addressed. Speakers included IRDS Co-Chair Supika Mashiro of TEL, Ala Moradian of Applied Materials, Joseph Ervin of LAM Research, Sean Glazier of Onto Innovation, Basil Milton and Chan-Pin Chong of Kulicke & Soffa, and Mark Huntington of McKinsey & Company.

Session speakers: (L) Supika Mashiro, TEL, and (R) Ala Moradian, AMAT.

Speakers discussed existing DTs deployed in manufacturing such as Run-to-Run (R2R) control, virtual metrology, and predictive maintenance (PdM) and the need for standardized DTs that can communicate with each other. Tool-level DT solutions such as Applied Materials EcoTwin within the AppliedTwin platform provide a virtualized replica of chipmaking equipment for development and improvement of chip-level processes. The platform has also demonstrated extensibility to sustainability analysis, a significant development.

Other focus areas were the connectivity of DTs across different levels (tools to factories) and the use of AI to make them self-adjusting for manufacturing processes. The importance of DT infrastructure and associated challenges such as ensuring clean and accessible data, data flow, and communication to keeping DTs synchronized were raised as significant challenges. In the back-end, OEMs are making steady progress to virtualize various tools such as wire bonding. The session also highlighted DTs as a major investment across industries, with huge potential in chipmaking. Building a strong data sharing foundation is key to success.

Chamber process, operations and planning level digital twin session

The session was led by solution providers from across the semiconductor ecosystem that develop tools to facilitate DTs at various hierarchical levels. The providers offer a variety of products and services across areas such as process physics-based models, chamber processes, operations, as well as planning modelling approaches to help companies implement and manage DTs. The session included technical details of DT models and their potential impact on the entire manufacturing process.

While the session made clear that DTs promise to revolutionize the semiconductor industry, it must overcome significant technical development challenges of integrating DTs into day-to-day operations. Speakers included Sarbajit Ghosal of SC Solutions, Norman Chang of Ansys, Holland Smith of INFICON, Chandra Reddy of IBM Research, Jon Herlocker of TIGNIS, Ken Smerz of ZELUS and John Behnke of INFICON.

Speakers emphasized the need for fast, accurate, multi-physics-based (and data-assisted) DTs for real-time control and monitoring, twins that react instantly to changes, just like the physical equipment. Think of it as having a virtual process line that can predict how different processes will interact. Sitting on top of the DTs are AI-powered (physics- and/or data-driven) models that can then be harnessed to optimize manufacturing processes and predict yield.

Speakers also discussed operational-level DTs and the need for a central hub for all factory operational data to boost efficiency, maximize productivity, and reduce waste – all critical as the number of fabs grows in the years ahead. Construction DTs for pre-construction planning in the building of new chip fabs or the expansion of brown-field sites provide a virtual blueprint that can help identify potential problems early on and minimize time to wafer starts. Lastly, how these various levels of DTs are integrated vertically within a factory plays a key role in making decisions about autonomous fabs.

Digital twin adoption and implementation session

The session was led by device makers and owners of fabs, where DTs are critical for improving productivity by predicting yield, quality, and efficiency. A process-level DT enables a virtual representation of a product’s process flow in the fab, and it can be used to speed integration efforts (MRL 5-7), simulate specific outcomes, and optimize operations. Imagine a future where chip fabs are run by AI agents, with virtual models predicting problems before they happen and optimizing processes on the fly. That’s the vision shared by the session’s expert speakers. Their insights painted a fascinating picture of what’s next for the semiconductor industry. Speakers included Professor H.-S. Philip Wong of Stanford University, Steven J Meyer of Intel, Jae Yong Park of Samsung, Rosa Javadi of JABIL, Professor Amit Lal and Peter Doerschuk of Cornell University, Ben Davaji of Northeastern University, Pushkar Apte of SEMI and Bobby Mitra of Deloitte.

Session speakers: (L) Steven J Meyer, Intel, and (R) Jae Yong Park, Samsung.

The key development target is advanced AI-assisted manufacturing in which three layers of virtual models – processes, tools, and the entire fab itself – all work together seamlessly. This ambitious vision aligns with the National Semiconductor Technology Center (NSTC) DT Grand Challenge, which focuses on generating, sharing, and using data effectively. Intel’s AFS Software Suite, which includes high-speed simulators and graphical models to enable better planning and decision-making across multiple sites, is a real-world example of DTs used in today’s fabs.

Use cases of deploying AI to improve Automated Material Handling Systems (AMHS) asset utilization by 30% have also been demonstrated in real-world fab environments. The session highlighted the importance of scheduling with AI-powered DTs and standardizing data availability across the industry. Other impressive product development use case studies shared included a rapid COVID-19 tester system development and a global supply chain DT.

Speakers described how challenges such as infrastructure readiness, talent gaps, and data privacy concerns are slowing industrywide adoption. They also discussed efforts to develop an open-access academic cleanroom dedicated to developing and testing DT models for lithography and etching processes, with investigation of federated learning to address data privacy & sharing concerns. The experts characterized the hierarchy of DT types as a framework based on the ISA-95 standard to ensure seamless communication and collaboration between DTs across various levels, from process development to production. This interconnected approach could revolutionize chipmaking across the entire enterprise, as demonstrated by an example showing DTs spanning the enterprise.

Digital twin connectivity and platform integration session

The session focused on a variety of product and service offerings by cloud, facilities, and supply chain solution providers that help companies implement and manage DTs of various levels. These solutions include integration, connectivity, security and horizontal integration across the supply chain. Almost all speakers pointed to the importance of standardization efforts as crucial for future development. Speakers included Rad Desiraju of Microsoft, Gautham Unni of AWS, David Gross and Srividya Jayaram of Siemens, Slava Libman of FTD Solutions, Becky Kelderman of Rockwell Automation, Ram Walvekar of HCL Technologies, and Paul Trio of SEMI International Standards.

Session speakers touched on definitions and categorization of DTs, including types and uses, as well as building dedicated infrastructure to support their development. The experts highlighted a few DT development challenges in areas such as data sources and provenance, as well as visualization and shared their solution offerings for creating, connecting, and maintaining these digital twins both vertically and horizontally within an enterprise.

The presenters also shared use cases on how DTs bridge design and manufacturing, enabling simulations and faster production, and how connecting DTs for various assets, processes, and products creates a holistic view. Session speakers also discussed a DT maturity scorecard that enables players from across the supply chain to track their progress and identify areas for improvement. Use cases of facility-level DTs for water management in fabs to promote sustainability were also a topic of discussion.

The semiconductor industry’s commitment to digital twins

The Semiconductor Digital Twin Workshop showcased the industry’s commitment to adopting and advancing the technology. Continued collaboration and adherence to standards and sustainable practices will play a crucial role in unlocking the full potential of DT technology in semiconductor manufacturing.

SEMI thanks the speakers who provided access to their material presented at the workshop. Visit Semiconductor Digital Twin Workshop OnDemand | SEMI for the workshop materials.

Reference

  1. Ondrej Burkacky, Julia Dragon, and Nikolaus Lehmann, The semiconductor decade: A trillion-dollar industry, McKinsey & Company (blog), April 1, 2021

Nishita Rao is senior product marketing manager at SEMI.

Karim Somani is program manager at SEMI.  

The post Integrating Digital Twins In Semiconductor Operations appeared first on Semiconductor Engineering.

Make The Impossible Possible: Use Variable-Shaped Beam Mask Writers And Curvilinear Full-Chip Inverse Lithography Technology For 193i Contacts/Vias With Mask-Wafer Co-Optimization

22 February 2024, 09:01

Abstract:

“Full-chip curvilinear inverse lithography technology (ILT) requires mask writers to write full reticle curvilinear mask patterns in a reasonable write time. We jointly study and present the benefits of a full-chip, curvilinear, stitchless ILT with mask-wafer co-optimization (MWCO) for variable-shaped beam (VSB) mask writers and validate its benefits on mask and wafer at Micron Technology. The full-chip ILT technology employed, first demonstrated in a paper presented at the 2019 SPIE Photomask Technology Conference, produces curvilinear ILT mask patterns without stitching errors, and with process windows enlarged by over 100% compared to the OPC process of record, while the mask was written by multibeam mask writer. At the 2020 SPIE Advanced Lithography Conference, a method was introduced in which MWCO is performed during ILT optimization. This approach enables curvilinear ILT for 193i masks to be written on VSB mask writers within a practical, 12-h time frame, while also producing the largest process windows. We first review MWCO technology, then curvilinear ILT mask patterns written by VSB mask writer, and then show the corresponding 193i process wafer prints. Evaluations of mask write times and mask quality in terms of critical dimension uniformity and process windows are also presented.”

Find the technical paper here. Published February 2024.

Pang, Linyong, Sha Lu, Ezequiel Vidal Russell, Yang Lu, Michael Lee, Jennefir Digaum, Ming-Chuan Yang et al. “Make the impossible possible: use variable-shaped beam mask writers and curvilinear full-chip inverse lithography technology for 193i contacts/vias with mask-wafer co-optimization.” Journal of Micro/Nanopatterning, Materials, and Metrology 23, no. 1 (2024): 011207-011207.

Author Affiliations: D2S Inc. and Micron Technology Inc.

The post Make The Impossible Possible: Use Variable-Shaped Beam Mask Writers And Curvilinear Full-Chip Inverse Lithography Technology For 193i Contacts/Vias With Mask-Wafer Co-Optimization appeared first on Semiconductor Engineering.

Tackling Variability With AI-based Process Control

By: Anne Meixner

22 February 2024, 09:01

Jon Herlocker, co-founder and CEO of Tignis, sat down with Semiconductor Engineering to talk about how AI in advanced process control reduces equipment variability and corrects for process drift. What follows are excerpts of that conversation.

SE: How is AI being used in semiconductor manufacturing and what will the impact be?

Herlocker: AI is going to create a completely different factory. The real change is going to happen when AI gets integrated, from the design side all the way through the manufacturing side. We are just starting to see the beginnings of this integration right now. One of the biggest challenges in the semiconductor industry is it can take years from the time an engineer designs a new device to that device reaching high-volume production. Machine learning is going to cut that to half, or even a quarter. The AI technology that Tignis offers today accelerates that very last step — high-volume manufacturing. Our customers want to know how to tune their tools so that every time they process a wafer the process is in control. Traditionally, device makers get the hardware that meets their specifications from the equipment manufacturer, and then the fab team gets their process recipes working. Depending on the size of the fab, they try to physically replicate that process in a ‘copy exact’ manner, which can take a lot of time and effort. But now device makers can use machine learning (ML) models to autonomously compensate for the differences in equipment variation to produce the exact same outcome, but with significantly less effort by process engineers and equipment technicians.

SE: How is this typically done?

Herlocker: A classic APC system on the floor today might model three input parameters using linear models. But if you need to model 20 or 30 parameters, these linear models don’t work very well. With AI controllers and non-linear models, customers can ingest all of their rich sensor data that shows what is happening in the chamber, and optimally modulate the recipe settings to ensure that the outcome is on-target. AI tools such as our PAICe Maker solution can control any complex process with a greater degree of precision.

SE: So, the adjustments AI process control software makes is to tweak inputs to provide consistent outputs?

Herlocker: Yes, I preach this all the time. By letting AI automate the tasks that were traditionally very manual and time-consuming, engineers and technicians in the fab can remove a lot of the manual precision tasks they needed to do to control their equipment, significantly reducing module operating costs. AI algorithms also can help identify integration issues — interacting effects between tools that are causing variability. We look at process control from two angles. Software can autonomously control the tool by modulating the recipe parameters in response to sensor readings and metrology. But your autonomous control cannot control the process if your equipment is not doing what it is supposed to do, so we developed a separate AI learning platform that ensures equipment is performing to specification. It brings together all the different data silos across the fab – the FDC trace data, metrology data, test data, equipment data, and maintenance data. The aggregation of all that data is critical to understanding the causes of a variation in equipment. This is where ML algorithms can automatically sift through massive amount of data to help process engineers and data scientists determine what parameters are most influencing their process outcomes.

SE: Which process tools benefit the most from AI modeling of advanced process control?

Herlocker: We see the most interest in thin film deposition tools. The physics involved in plasma etching and plasma-enhanced CVD are non-linear processes. That is why you can get much better control with ML modeling. You also can model how the process and equipment evolves over time. For example, every time you run a batch through the PECVD chamber you get some amount of material accumulation on the chamber walls, and that changes the physics and chemistry of the process. AI can build a predictive model of that chamber. In addition to reacting to what it sees in the chamber, it also can predict what the chamber is going to look like for the next run, and now the ML model can tweak the input parameters before you even see the feedback.
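As a heavily simplified stand-in for that feed-forward idea, the sketch below shows a run-to-run controller that keeps a running estimate of the chamber’s deposition rate, assumes a fixed per-run drift from wall accumulation, and pre-compensates the next run’s deposition time before any feedback arrives. The model form and all numbers are illustrative assumptions, not the nonlinear ML models described here.

    # Minimal run-to-run sketch: track an estimated deposition-rate drift per run and
    # pre-compensate the next run's deposition time. All numbers are illustrative;
    # a real controller would use nonlinear ML models over many sensor inputs.
    TARGET_NM = 50.0
    BASE_RATE_NM_S = 1.00        # nominal deposition rate for a clean chamber
    DRIFT_PER_RUN = -0.004       # assumed rate loss per run from wall accumulation

    est_rate = BASE_RATE_NM_S
    for run in range(1, 6):
        # Predict the next-run rate and set the deposition time before any feedback
        predicted_rate = est_rate + DRIFT_PER_RUN
        dep_time_s = TARGET_NM / predicted_rate

        # "Measure" the run (here: a simulated true rate with the same drift)
        true_rate = BASE_RATE_NM_S + DRIFT_PER_RUN * run
        thickness = true_rate * dep_time_s

        # Update the rate estimate from the measurement (simple EWMA blend)
        est_rate = 0.7 * est_rate + 0.3 * (thickness / dep_time_s)
        print(f"run {run}: time {dep_time_s:5.1f} s  thickness {thickness:5.2f} nm")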

SE: How do engineers react to the idea that the AI will be shifting the tool recipe?

Herlocker: That is a good question. Depending on the customer, they have different levels of comfort about how frequently things should change, and how much human oversight there needs to be for that change. We have seen everything from, ‘Just make a recommendation and one of our engineers will decide whether or not to accept that recommendation,’ to adjusting the recipe once a day, to autonomously adjusting for every run. The whole idea behind these adjustments is for variability reduction and drift management, and customers weigh the targeted results versus the perceived risk of taking a novel approach.

SE: Does this involve building confidence in AI-based approaches?

Herlocker: Absolutely, and our systems have a large number of fail-safes, and some limits are hard-coded. We have people with PhDs in chemical engineering and material science who have operated these tools for years. These experts understand the physics of what is happening in these tools, and they have the practical experience to know what level of change can be expected or not.

SE: How much of your modeling is physics-based?

Herlocker: In the beginning, all of our modeling was physics-based, because we were working with equipment makers on their next-generation tools. But now we are also bringing our technology to device makers, where we can also deliver a lot of value by squeezing the most juice out of a data-driven approach. The main challenge with physics models is they are usually IP-protected. When we work with equipment makers, they typically pay us to build those physics-based models so they cannot be shared with other customers.

SE: So are your target customers the toolmakers or the fabs?

Herlocker: They are both our target customers. Most of our sales and marketing efforts are focused on device makers with legacy fabs. In most cases, the fab manager has us engage with their team members to do an assessment. Frequently, that team includes a cross section of automation, process, and equipment teams. The automation team is most interested in reducing the time to detect some sort of deviation that is going to cause yield loss, scrap, or tool downtime. The process and equipment engineers are interested in reducing variability or controlling drift, which also increases chamber life.

For example, let’s consider a PECVD tool. As I mentioned, every time you run the process, byproducts such as polymer materials build up on the chamber walls. You want a thickness of x in your deposition, but you are getting a slightly different wafer thickness uniformity due to drift of that chamber because of plasma confinement changes. Eventually, you must shut down the tool, wet clean the chamber, replace the preventive maintenance kit parts, and send them through the cleaning loop (i.e., to the cleaning vendor shop). Then you need to season the chamber and bring it back online. By controlling the process better, the PECVD team does not have to vent the chamber as often to clean parts. Just a 5% increase in chamber life can be quite significant from a maintenance cost reduction perspective (e.g., parts spend, refurb spend, cleaning spend, etc.). Reducing variability has a similarly large impact, particularly if it is a bottleneck tool, because then that reduction directly contributes to higher or more stable yields via more ‘sweet spot’ processing time, and sometimes better wafer throughput due to the longer chamber lifetime. The ROI story is more nuanced on non-bottleneck tools because they don’t modulate fab revenue, but the ROI there is still there. It is just more about chamber life stability.

SE: Where does this go next?

Herlocker: We also are working with OEMs on next-generation toolsets. Using AI/ML as the core of process control enables equipment makers to control processes that are impossible to implement with existing control strategies and software. For example, imagine on each process step there are a million different parameters that you can control. Further imagine that changing any one parameter has a global effect on all the other parameters, and only by co-varying all the million parameters in just the right way will you get the ideal outcome. And to further complicate things, toss in run-to-run variance, so that the right solution continues to change over time. And then there is the need to do this more than 200 times per hour to support high-volume manufacturing. AI/ML enables this kind of process control, which in turn will enable a step function increase in the ability to produce more complex devices more reliably.

SE: What additional changes do you see from AI-based algorithms?

Herlocker: Machine learning will dramatically improve the agility and productivity of the facility broadly. For example, process engineers will spend less time chasing issues and have more time to implement continuous improvement. Maintenance engineers will have time to do more preventive maintenance. Agility and resiliency — the ability to rapidly adjust to or maintain operations, despite disturbances in the factory or market — will increase. If you look at ML combined with upcoming generative AI capabilities, within a year or two we are going to have agents that effectively will understand many aspects of how equipment or a process works. These agents will make good engineers great, and enable better capture, aggregation, and transfer of manufacturing knowledge. In fact, we have some early examples of this running in our labs. These ML agents capture and ingest knowledge very quickly. So when it comes to implementing the vision of smart factories, machine learning automation will have a massive impact on manufacturing in the future.

The post Tackling Variability With AI-based Process Control appeared first on Semiconductor Engineering.


Computational Lithography Solutions To Enable High NA EUV

By: Synopsys
21 February 2024, 16:42

This white paper identifies and discusses the computational needs required to support the development, optimization, and implementation of high NA extreme ultraviolet (EUV) lithography. It explores the challenges associated with the increased complexity of high NA systems, proposes potential solutions, and highlights the importance of computational lithography in driving the success of advanced EUV lithography technologies.

Click here to read more.

The post Computational Lithography Solutions To Enable High NA EUV appeared first on Semiconductor Engineering.
