
Making Adaptive Test Work Better

By Ed Sperling | June 10, 2024, 09:15

One of the big challenges for IC test is making sense of mountains of data, a direct result of more features being packed onto a single die, or multiple chiplets being assembled into an advanced package. Collecting all that data through various agents and building models on the tester no longer makes sense for a couple of reasons — there is too much data, and there are multiple customers using the same equipment. Steve Zamek, director of product management at PDF Solutions, and Eli Roth, product manager at Teradyne, explain how to optimize testing around different data sources, how to partition that data between the edge and the cloud, and how to ensure it remains secure.

The post Making Adaptive Test Work Better appeared first on Semiconductor Engineering.


Efficient Electronics

By Andy Heinig | May 16, 2024, 09:07

Attention nowadays has turned to the energy consumption of systems that run on electricity. At the moment, the discussion is focused on electricity consumption in data centers: if this continues to rise at its current rate, it will account for a significant proportion of global electricity consumption in the future. Yet there are other, less visible electricity consumers whose power needs are also constantly growing. One example is mobile communications, where ongoing expansion – especially with the current 5G standard and the future 6G standard – is pushing up the number of base stations required. This, too, will drive up electricity demand, which rises linearly with the number of stations, at least if the demand per base station is not reduced. Another example is electronics for the management of household appliances and in the industrial sector: more and more such systems are being installed, and their electronics are becoming significantly more powerful. They are not currently optimized for power consumption, but rather for performance.

This state of affairs cannot continue for two reasons: first, the price of electricity will continue to rise worldwide; and second, many companies are committed to becoming carbon neutral. That commitment, in turn, makes electricity still more expensive and restricts the quantity available. As a result, there will be significant demand for efficient electronics in the coming years, particularly as regards electricity consumption.

This development is already evident today, especially in power electronics, where the use of new semiconductor materials such as GaN or SiC has made it possible to reduce power consumption. A key driver for the development and introduction of these new materials was the electric car market, as reduced losses in the electronics lead directly to increased vehicle range. In the future, these materials will also find their way into other areas; for instance, they are already beginning to establish themselves in voltage converters in various industries. However, this shift requires more factories and more suppliers for production, and further work is needed to develop appropriate circuit concepts for these technologies.

In addition to the use of new materials, other concepts to reduce energy consumption are needed. The data center sector will require increasingly better-adapted circuits – ones developed for a specific task, which can therefore perform that task much more efficiently than universal processors. This involves striking the optimum balance between universal architectures, such as microprocessors and graphics cards, and highly specialized architectures that suit only one use case. Some products will also fall between these two extremes. The increased energy efficiency is then “purchased” through the effort and expense of developing these highly specialized architectures. It is important to note that the more specialized an architecture is, the smaller its market. That means such architectures will be economically viable only if they can be developed efficiently. This calls for new approaches that derive these architectures directly from high-level hardware/software optimization, without the additional implementation steps that are still necessary today. In sum, the only way to make this approach possible is with novel concepts and tools that generate circuits directly from a high-level description.

The post Efficient Electronics appeared first on Semiconductor Engineering.


Electromigration Concerns Grow In Advanced Packages

By Laura Peters | April 18, 2024, 09:09

The incessant demand for more speed in chips requires forcing more energy through ever-smaller devices, increasing current density and threatening long-term chip reliability. While this problem is well understood, it’s becoming more difficult to contain in leading-edge designs.

Of particular concern is electromigration, which is becoming more troublesome in advanced packages with multiple chiplets, where various bonding and interconnect schemes create abrupt changes in materials and geometries. For example, electrons may travel from a copper trace to a solder bump of SAC (tin-silver-copper), then to an underbump metal based on nickel, and finally to an interposer copper pad. That, in turn, can cause atoms to shift, resulting in failures in solder joints or in copper redistribution layers in high-density fan-out packages.

“From an electromigration perspective, advanced packaging causes increased packaging density, reduced packaging size, and the dimensions of interconnects to shrink, so the current density is now in close proximity to the maximum current density limit per EM design rules,” said Dermott Lynch, director of technical product management in Synopsys’ EDA Group.

Any additional stresses the package may be subjected to during assembly and use, whether mechanical or thermal, also can help induce or accelerate electromigration. “Electromigration, in general, gets worse due to temperature and stress, both of which advanced packaging increases,” said Lynch. “Electromigration is also cumulative, so essentially it integrates all the temperature highs and stress over the lifetime until an interconnect breaks down or shorts. Larger processing temperature and operation temperature will make it worse, but it also depends on time under that temperature.”

In fact, managing thermal pathways is perhaps the greatest challenge associated with the movement toward the ultimate package, a 3D-IC. “Electromigration is very temperature-sensitive,” said Marc Swinnen, director of product marketing in Ansys’ Semiconductor Division. “Depending on your thermal map, your power integrity will have to adapt to the local temperature profile that you have. So when you look at a chip, you can calculate how much power the chip is putting out, but you cannot tell how hot the chip will get because ‘it depends.’ Is it sitting on a cold plate or sitting in the sun in the Sahara? System concerns come in, and multi-physics modeling is important to understanding these co-dependent effects.”

Thermal engineering also means moving heat away from the most vulnerable points of failure, such as solder bumps. “Effective thermal management is essential for bump reliability,” said Curtis Zwenger, vice president of engineering and technical marketing at Amkor. “Engineers are incorporating thermal enhancement techniques, such as the use of thermal interface materials and advanced heat dissipation solutions, to ensure that bumps are not subjected to excessive temperature-related stresses.”

Zwenger noted that engineers are looking into new materials, while optimizing the use of existing materials to minimize the possibility of electromigration. “Semiconductor packaging engineers are implementing a range of measures to enhance bump reliability and maximize bump yield. These strategies include new materials for solder bumps and underbump metallization, optimizing bump size, pitch and shape for reliability, advanced process control methods to control variability and maximize yield, and simulating and modeling reliability.”

What is electromigration?
Electromigration is the mass transport of metal atoms caused by the electron wind from current flowing through a conductor, typically copper. When current density is high enough, metal will diffuse in the direction of current flow, creating tiny hillocks downstream and leaving behind vacancies or voids. With enough electromigration, failures occur due to severe line thinning, causing opens, or due to hillocks that bridge adjacent lines, causing short circuits.

Electromigration is a diffusion-controlled mechanism that can take three forms — bulk, grain boundary, or surface diffusion, depending on the metal. Aluminum migrates by grain boundary diffusion whereas copper migrates on the surface or at its grain boundaries.

For most of the semiconductor industry’s history, electromigration was primarily an on-chip concern, and reliability engineers now have on-chip EM largely under control. But with the scaling and rapid developments in advanced packaging — TSVs, fan-out packaging with redistribution layers, and copper pillar bumps — electromigration has emerged as a major threat at the package level. Current flowing through a solder bump causes Joule heating, and heat from other parts of the package may also dissipate through the solder bumps. EM can become an issue for solder joint connections between chip and interposer, or chip and PCB, as well as in RDLs. Solder joint failures typically manifest as voids or cracks.

Fig. 1: Electromigration can create short circuits between two interconnects through the development of hillocks, or an open circuit through the creation of voids in interconnect. Source: Ansys

Electromigration progresses more quickly at higher temperatures, at higher currents, under greater mechanical stress and in the presence of defects or impurities in the metal. Black’s equation describes an interconnect’s mean time-to-failure with respect to its temperature, current density and the activation energy needed to dislodge a metal atom as:

$$\mathrm{MTTF} = N \, J^{-n} \, e^{E_a / kT}$$

Here J is the current density, n is the current density exponent, k is Boltzmann’s constant, T is temperature, Ea is the activation energy, and N is a scaling factor that depends on the metal’s properties. Black’s equation is useful because it easily shows how shorter, wider interconnects will tend to have a longer MTTF. In addition, electromigration time-to-failure depends very strongly on the interconnect’s temperature. That temperature is primarily the result of the chip’s environmental temperature, self-heating of the conductor caused by current flow, heat from neighboring interconnects or transistors, and the thermal conductivity of the surrounding material.
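
To make the temperature and current-density sensitivity concrete, here is a minimal sketch in Python of how Black’s equation converts a change in operating conditions into an MTTF acceleration factor. The activation energy, current-density exponent, and operating points below are illustrative assumptions, not values from the article:

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def black_mttf(j, temp_k, ea_ev, n, scale=1.0):
    """Black's equation: MTTF = N * J^(-n) * exp(Ea / kT).

    j: current density (A/cm^2), temp_k: conductor temperature (K),
    ea_ev: activation energy (eV), n: current-density exponent,
    scale: the metal- and process-dependent prefactor N.
    """
    return scale * j ** (-n) * math.exp(ea_ev / (K_BOLTZMANN_EV * temp_k))

def acceleration_factor(j_use, t_use_k, j_stress, t_stress_k, ea_ev, n):
    """Ratio of use-condition MTTF to stress-condition MTTF.

    The unknown prefactor N cancels in the ratio, which is what lets
    accelerated test results be extrapolated to field conditions.
    """
    return (black_mttf(j_use, t_use_k, ea_ev, n)
            / black_mttf(j_stress, t_stress_k, ea_ev, n))

# Illustrative values only: Ea = 0.9 eV, n = 2, stress at 150 C and
# 10 kA/cm^2 versus use at 105 C and 2 kA/cm^2.
af = acceleration_factor(j_use=2.0e3, t_use_k=273.15 + 105,
                         j_stress=1.0e4, t_stress_k=273.15 + 150,
                         ea_ev=0.9, n=2.0)
print(f"MTTF at use conditions is ~{af:.0f}x the stressed MTTF")
```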

It is also important to note that electromigration is a runaway process. As voids grow, the conductor’s cross-section shrinks, which raises the local current density and temperature; that, in turn, accelerates electromigration in a destructive feedback loop.

EM failure modes and allowable current density
In the case of copper redistribution layers in polyimide, heat accumulates in the conductor due to Joule heating as current flows through the RDL, which can degrade performance. As the required current density and Joule heating temperature increase in fine-line Cu RDL structures (<5µm lines and spaces), self-heating is considered a key factor in the reliability of high-density fan-out packages.

JiHye Kwon, senior manager of R&D at Amkor, recently used EM testing and Black’s equation to determine the electromigration failure mechanisms for a given RDL stack and high-density fan-out package with 2µm or 10µm wide RDL layers, 1,000µm long. [1]

High-density fan-out is an emerging technology, featuring more aggressive scaling than wafer-level fan-out packages. Three layers of copper RDL (3µm thick, with a Ta/Cu seed) were fabricated, followed by polyimide fill, copper pillar deposition, die attach, and overmold. Kwon’s team tested both the 2µm and 10µm RDLs at different current densities and temperatures until resistance increased by 100% (defined as EM failure), while the maximum allowed current density was defined at a 20% resistance increase. The failures progressed in two stages: first void nucleation and growth, then copper reduction and oxidation. The study yielded Ea and current density exponent values that can be useful in future RDL designs.
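
For a sense of scale, here is a back-of-the-envelope current-density check for an RDL trace with the cross-section described above (2µm wide, 3µm thick). The 100mA drive current is an assumed value for illustration, not a figure from the study:

```python
# J = I / (w * t) for a rectangular RDL trace.
width_cm = 2e-4     # 2 um expressed in cm
thick_cm = 3e-4     # 3 um expressed in cm
current_a = 0.100   # 100 mA -- assumed for illustration

area_cm2 = width_cm * thick_cm        # 6e-8 cm^2 cross-section
j = current_a / area_cm2              # current density in A/cm^2
print(f"J = {j:.2e} A/cm^2")          # ~1.7e6 A/cm^2 (1.7 MA/cm^2)
```

Even a modest current through a fine-line cross-section yields current densities in the MA/cm² range, which is why self-heating becomes a key reliability factor at these geometries.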

Meanwhile, a team of researchers from ASE recently demonstrated how susceptibility to electromigration is determined for copper pillar interconnects in flip-chip quad flat no-lead (FCQFN) packages for high-power automotive applications. The multi-layered copper pillar bumps, with a Cu/Ni/Sn1.8Ag configuration, were bonded to a silver-plated copper leadframe and tested under extreme EM conditions (a current density of 10 kA/cm² and temperatures of 150°C, 160°C, and 180°C) while in-situ resistance measurements were taken. [2] The EM failures appeared as rapid rises in electrical resistance, coinciding with the formation of intermetallic compounds and voids at the Cu/solder interfaces. The team built an EM prediction model for the interconnects based on a Black-type EM equation, following the JEDEC standard with five test conditions.

From the statistical analysis of sample lifetimes, the ASE team determined the activation energy of the Cu pillar interconnects in the FCQFN package to be 1.12 ± 0.03 eV. The maximum allowable current for a 10-year lifetime at a 105°C operating temperature and a 0.1% failure rate was greater than 2A for the FCQFN Cu pillar structure. “The FCQFN package has great potential in terms of its excellent anti-EM performance for future high-power applications,” the article said.
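
The kind of extrapolation the ASE team describes can be sketched as follows: assuming a lognormal failure distribution and a Black-type model, solve for the largest current density whose 0.1% failure time still meets a 10-year target at 105°C. Only the 1.12 eV activation energy comes from the article; the anchor lifetime, current-density exponent, and lognormal sigma below are placeholders invented to make the sketch runnable:

```python
import math

K_EV = 8.617e-5        # Boltzmann constant, eV/K
EA_EV = 1.12           # activation energy reported in the article
N_EXP = 2.0            # current-density exponent (placeholder)
SIGMA = 0.3            # lognormal shape parameter (placeholder)
Z_01PCT = -3.09        # standard-normal quantile for 0.1% failures

# Placeholder anchor point: median life t50 at one stress condition.
T50_STRESS_H = 500.0   # hours (placeholder)
J_STRESS = 1.0e4       # A/cm^2, the 10 kA/cm^2 stress from the test
T_STRESS_K = 273.15 + 150

def t50(j, temp_k):
    """Median life scaled from the stress anchor via Black's equation."""
    return (T50_STRESS_H * (j / J_STRESS) ** (-N_EXP)
            * math.exp((EA_EV / K_EV) * (1.0 / temp_k - 1.0 / T_STRESS_K)))

def max_current_density(target_hours, temp_k):
    """Largest J whose 0.1% failure time still meets the target.

    For a lognormal distribution, t_0.1% = t50 * exp(SIGMA * Z_01PCT),
    so we solve target = t50(J) * exp(SIGMA * Z_01PCT) for J.
    """
    t50_needed = target_hours / math.exp(SIGMA * Z_01PCT)
    j_ratio = (t50_needed / t50(J_STRESS, temp_k)) ** (-1.0 / N_EXP)
    return J_STRESS * j_ratio

ten_years_h = 10 * 365 * 24
j_max = max_current_density(ten_years_h, temp_k=273.15 + 105)
print(f"Allowable J at 105C, 10-yr life, 0.1% fail: {j_max:.2e} A/cm^2")
```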

Designing/manufacturing for EM resiliency
Building electromigration resilience into advanced devices begins with using only EM-compliant linewidths in circuit designs based on the current density and heat profile that the interconnects will experience during operation over the lifetime of the device. Electromigration mitigation also requires process and materials engineering to ensure durability, for instance, of copper pillar bumps under BGA packages. It also calls for an optimized assembly process window and tight process control to prevent tiny violations of design rules that can later precipitate as EM failures.

As the industry makes its way toward true 3D packages, and eventually 3D-ICs, it seems clear that modeling and simulation will play an increasing role in setting many of the guard rails for manufacturing and assembly before production even begins. “Reliability modeling and simulation tools are being used to better understand the reliability of bump structures. This proactive approach helps in identifying potential issues before they arise, enabling engineers to implement preventive measures,” said Zwenger.

Modeling and simulation at the system level also will be essential to understanding the complex interplay between reliability mechanisms with thermal and mechanical stress in multi-chiplet systems during operation.

“Electromigration for stacked die is challenging,” said Synopsys’ Lynch. “Localized, die-to-die workloads cause repetitive current flow in specific areas. This generates local heat, increasing EM resulting in wire degradation, while producing even more heat. Reducing the thermal issue becomes critical to ensuring EM reliability.”

As stated previously, solder bumps can become a site for EM reliability failure. “Engineers fine-tune bump design in terms of bump size, pitch, and shape to ensure uniformity and reliability across the entire package. This includes the adoption of innovative Cu bump structures for improved mechanical and electrical properties,” said Amkor’s Zwenger.

In flip-chip BGA and other flip-chip applications, underfill materials — typically thermoset epoxies — are used to reduce the thermal stresses on solder bumps. “Underfill materials play a critical role in providing mechanical support and thermal stability to the bumps,” Zwenger said. “Engineers are investing in the development of advanced underfill formulations with enhanced properties, such as improved adhesion, thermal conductivity, and stress relief.”

Conclusion
Because of its dependence on temperature, electromigration is a failure mechanism to watch and plan for as devices continue to scale and systems integrators continue to cram more and more chiplets of various functions into advanced packages.

“In advanced technologies, the current density is now in close proximity to the maximum density,” said Synopsys’ Lynch. “Anything that causes an increase in temperature poses a threat. Designers of multi-die systems need to understand the impact of temperature and design systems to remove the heat.”

References

  1. JiHye Kwon, “Electromigration Performance Of Fine-Line Cu Redistribution Layer (RDL) For HDFO Packaging,” Semiconductor Engineering, Jan. 18, 2024, https://semiengineering.com/electromigration-performance-of-fine-line-cu-redistribution-layer-rdl-for-hdfo-packaging/
  2. -Y. Tsai, et al., “An Electromigration Study of Cu Pillar Interconnects in Flip-chip QFN Packaging under Extreme Conditions for High-power Applications,” 2023 IEEE 25th Electronics Packaging Technology Conference (EPTC), Singapore, 2023, pp. 326-332, doi: 10.1109/EPTC59621.2023.10457564.

Related Reading
What Can Go Wrong In Heterogeneous Integration
Workflows and tools are disconnected, mechanical stress is ill-defined, and complete co-planarity is nearly impossible. But there are solutions on the horizon.
Thermal Integrity Challenges Grow In 2.5D
Work is underway to map heat flows in interposer-based designs, but there’s much more to be done.
Chiplets: 2023 (EBook)
What chiplets are, what they are being used for today, and what they will be used for in the future.

The post Electromigration Concerns Grow In Advanced Packages appeared first on Semiconductor Engineering.


V2X Path To Deployment Still Murky

By Ann Mutschler | March 7, 2024, 09:05

Experts at the Table: Semiconductor Engineering sat down to discuss Vehicle-To-Everything (V2X) technology and the path to deployment, with Shawn Carpenter, program director for 5G and space at Ansys; Lang Lin, principal product manager at Ansys; Daniel Dalpiaz, senior manager product marketing, Americas, green industrial power division at Infineon; David Fritz, vice president of virtual and hybrid systems at Siemens EDA; and Ron DiGiuseppe, senior marketing manager, automotive IP segment at Synopsys. What follows are excerpts from that conversation.

L-R: Ansys’ Carpenter; Ansys’ Lin; Infineon’s Dalpiaz; Siemens EDA’s Fritz; Synopsys’ DiGiuseppe.

SE: What is the potential of vehicle-to-everything technology, and what role will the semiconductor ecosystem play in making this a reality?

DiGiuseppe: V2X is a technology that’s not just years, but decades, in the making. It started as a dedicated short-range communications (DSRC) technology, and has globally transitioned to a cellular technology, although many V2X applications are not just cellular. There are other spectrum allocations V2X can run on, including WiFi or other general-use technology. So it’s not limited to cellular. Also, it’s not just a technology. It’s an application, an outcome, and there are a lot of valuable uses. Many are safety-related, but there are others, such as traffic-management and efficiency notifications. V2X has a wide range of uses. Deployment will be done in stages, and there’s a lot of activity, even though it’s taken a long time.

Lin: When I see the keyword V2X, it reminds me of everything about how the car can communicate with anything in the world. It’s a very exciting moment that we’re here today, able to build the kind of technology that enables communication between vehicles and people, across network infrastructure, and from car to car. Today, some of this is already implemented. For instance, in-car network systems can already connect our phones to the car, but we’re still in the first mile. We’ve started on the journey, but we have a long way to go as far as how to connect car-to-car, how to connect the car to the entire infrastructure of networks, and to the internet. There are a lot of unknowns on the road as we start driving on this journey, and safety and security are definitely the biggest concerns. What if my network is jeopardized?

Dalpiaz: V2X is part of a much bigger smart grid ecosystem. This will certainly play a very important role, especially as the grid becomes smart and decentralized. This is what will enable the future energy ecosystem, having renewable energies and energy storage systems all connected. And as we see more EVs being used as mobile battery storage, this is something that will certainly enable, and is part of, the smart grid ecosystem that everybody’s talking about.

Fritz: The days of independent semiconductor and software development are over. It is the need for OEMs to control their own destiny, driven by growing consumer and competitive demand, that has all but eliminated the ability to sell a one-size-fits-all product. We’ve known for a very long time that software needs to drive semiconductors, and semiconductors need to drive software. This symbiotic relationship, and the tools and methodologies needed to support this paradigm shift, are essential to producing a highly successful, complex, and competitive solution that meets consumer demands.

SE: What are the discrete pieces of V2X that need to be connected?

Dalpiaz: From the semiconductor point of view, especially with the usage of wide bandgap materials, a few companies are seeing that it’s possible to increase efficiency and power density. Being able to not only provide such solutions, but have everything connected in one box, is part of the smart ecosystem. Then, having the electric vehicles, energy storage, solar — everything combined into one box. Twenty years ago, before the iPhone, we used to have a fax machine, a camera for photographs, a computer. The future of this ecosystem is going to have one box sitting in your home, and have all this stuff connected together. So from the semiconductor point of view, especially with silicon carbide, it is something that is possible today, and it can achieve a very high level of efficiency — about 99%, very close to 100%. And of course, we need to make the system smaller to fit in a vehicle.

DiGiuseppe: One of the key stakeholders is the cellular companies. When we look at cellular V2X, one of the main challenges is interoperability. You have different devices in different model-year cars, so for vehicle-to-vehicle communications, those devices need to be interoperable. Then the car will be talking to the infrastructure, so the roadside units need to be interoperable with the cars and the devices in the cars. Then, of course, you have vehicle-to-pedestrian and vehicle-to-e-mobility (vehicle-to-bicycle, vehicle-to-motorcycle), which requires interoperability among all the devices over the medium. Whether it’s cellular or Wi-Fi or other technologies, it all needs to be interoperable. That will allow deployments in one locality to work in another, because even if devices are interoperable in one deployment in one region, we’ve got to make sure they’re also interoperable in other regions. So it’s a large-scale interoperability goal.

Lin: Ron, you’re talking about interoperability, and Daniel talked about the ecosystem. From my side, I would also mention that some standards are necessary. For EDA to help build such an ecosystem and chips, we need some rules to give to engineers as to what should be followed. There are two important standards in my mind. One is the vehicle safety standard, ISO 26262, which regulates a couple of safety requirements for on-road vehicle chip design. Another is the cybersecurity standard, ISO 21434. If I make a tool, I will probably follow those standards, and then think about how the tool could help users decide pass/fail criteria for their design, making sure to meet the security and safety targets from the standards.

DiGiuseppe: In addition to standards, last October the U.S. Department of Transportation released its national V2X deployment plan. That plan, which is still in the draft feedback stage, lays out — at least in the U.S. — the whole timeline for deployments. That kind of oversight plan overlays onto the standards that Lang was just talking about. The deployment plan outlines the different contributions from all the stakeholders, from the automakers/OEMs to the developers of the application software. So overlaid on top of the standards is a government deployment plan. Plus, there are a lot of government stakeholders, like the FCC allocating spectrum and the Department of Transportation overseeing all these deployments, in addition to the technology providers.

Fritz: It would take days to adequately answer those questions, but at the core, the root design components are connectivity, power, performance, and acceleration. Connectivity with the proper protocols allows computational tasks to be distributed. This is particularly important in automotive, where the physical distance between sensing, actuating, and computing nodes is critical for predictable performance. In the case of V2X, connectivity enables the normalization of external data, whether it involves smart city infrastructure or another vehicle. It’s important to note that the form of the shared data grows exponentially with the capacity to describe the environment, and with it the compute requirements to process and understand that data. For example, a data form that can describe signage in the U.S. is relatively small, but a universal form that can recognize regional variations is much larger and more ambiguous. This drives design parameters that directly impact manufacturing, development, and service cost functions. Further, the normalization of the data has an impact on the overall design and design component interactions. In the case of power, it goes without saying that high compute requirements, and the associated necessary cooling, can have a significant impact on EV range and manufacturing costs. Performance can take many forms, but as software loads increase with hypervisors, specialized operating systems, and protocol stacks, not to mention very complex application software, all must meet stringent mission-critical requirements. Finally, acceleration is of growing importance because it allows workloads to be handed off to specialized hardware that is better equipped to handle that load. For example, running AI inferencing on a CPU is typically far slower and more power-hungry than on an NPU, though a GPU could be idle and available to do the same task. On the other hand, a small CNN can be handled quite easily on a CPU with a few simple instructions. It is at the intersection of these major design components where an OEM will find its differentiation. Therefore, having a system capable of exploring this complex hardware and software space quickly, and with a small team, is critical for an OEM to demand of its suppliers what is required for the success of its platforms. Again, controlling your own destiny is essential to survival.

SE: With all of this interoperability, what happens when parts of the ‘everything’ — whether the car, the infrastructure, or pedestrians — are not updated with the latest technologies, or lack different aspects of what needs to be there for connectivity?

DiGiuseppe: In addition to that challenge, this includes backward compatibility for automotive. For someone buying a car in 2025, you would expect any V2X technology to work in 2040. But in the meantime, all those standards that we’re talking about are continuing to evolve, so they need to be backward compatible.

Carpenter: This highlights the need for a digital twin capability for modeling this infrastructure, to be able to understand that when we get two years down the road, some devices may not be reprogrammable. We may not be able to flash a particular device. We need to be able to look at that, and simulate it in advance to understand what will happen. What will this do? We’re seeing this show up a little bit, even giving a nod to what Ron was talking about earlier with interoperability. We have customers who want to validate real hardware they’re developing on the lab bench, but they want to do it with the fidelity of a real system operating on a car, in a virtual city, with the live interaction of the channel with a gNodeB 5G base station mounted up on a building someplace, and they want to know how it will work in the context of the situation it’s supposed to serve. And if something goes wrong in that scene, can we introduce something into this device and run our real silicon development platform against it to understand what happens? If we go into a deep shadow, a deep fade area, and I’m not getting updates, yet I’m hurtling down the road at a certain speed, how long can I do this before I receive corrective information? What if someone’s software stack out there doesn’t get reprogrammed, or doesn’t get the latest version of the standard safety protocols? We’re going to need the ability to carry models of equipment that was built two or three years ago in today’s infrastructure, model that, and understand in advance what’s going to happen with it, so that we have an approach for this. This is what the Department of Defense is doing today with its digital thread enablement: having a way to capture legacy models of what was built years ago, apply them in modern missions, and understand, ‘Does it work? Does it fit? Does it not fit? What do we need to do to the existing system to make sure we’re safe here?’ That is an approach we clearly see the automakers beginning to look at as a way to future-proof some of these systems and make sure they’ve got a way to test them going forward.

Fritz: It’s become very clear from several popularized incidents that simply stopping and waiting for tech support to find you and get you going again is not going to be a successful strategy. In the end, the vehicle must make decisions at least as thoughtful as an average human would make. This is entirely possible, but not if too much emphasis is placed in the design phase on the dependencies between communicating (or non-communicating) actors. For this reason, we will always require sophisticated decision-making in-vehicle to be widely accepted.

SE: How does the design team stay up to date with everything?

DiGiuseppe: On the vehicle side, they’re going to be relying on over-the-air (OTA) software updates, which are still relatively new in the automotive industry. But clearly, once we identify a software update, we’re going to need to roll it out, and OTA is going to be used hand-in-hand with updates to V2X as the technology moves forward.

SE: From a developer standpoint, they have to design to all these regulations. What are the issues here?

Lin: As a software developer, if you think about a vehicle 10 years ago, you mainly just replaced hardware. You replaced your brakes, you replaced your engine, you added some fluid. That was the old style. Now, with a V2X network, you’d probably expect daily updates, because software evolves daily and your whole communication infrastructure is part of the internet’s evolution, so you have to keep pace with it. That’s a lot of work for developers.

Carpenter: There could be implications for edge processing. The telecommunications providers are going to need to put a lot more compute closer to the radio head, and clearly they’re already exploring the possibilities of deploying not just central processing cores, but GPU cores, tensor processing units, and who knows what else for AI, all of which will be part of this safety infrastructure and information/infotainment delivery. There’s a lot more compute that’s going to have to happen with much shorter latency. Augmented reality with heads-up displays — imagine the possibilities coming in safety systems with heads-up displays in cars. Then imagine the amount of processing that’s going to take. So the telecom providers will need to be a major part of that, together with most of the local government regulatory groups that are going to foster that safety system. Each municipality probably has to decide what to adopt, what level of the standard to use, and how to deliver it. Who invests in that? The future is really exciting, but there are a few things yet to be sorted out in terms of the investment needed to really deliver on that promise.

Dalpiaz: I’m more in the infrastructure side, and one of the questions we always have is, ‘With all this focus on renewables and decentralization of the grid, can the grid handle such expectations or such projects?’ Having more people connecting and feeding energy back into the grid, and managing all of this, that’s always the question that you have to go through and consider.

Fritz: The fact is that keeping up to date is not practical. However, that doesn’t mean that a methodology cannot be employed to accept changes into the development system, and therefore be folded into the development process. CI/CD systems with digital twin golden models already are being developed, with nightly regressions run against complex (and possibly changing) requirements. In this way, requirement changes are automatically addressed as they occur, and solutions can be rolled into an Agile methodology through nightly regressions. This is an important benefit of a modern development methodology that has been used in other industries for years, but it’s just now finding purchase in progressive automotive companies.

Related Reading
Growing Challenges For Increasingly Connected Vehicles
OEMs have high expectations for connected vehicles and global growth opportunities, but it’s not that simple.
Software-Defined Vehicles Ready To Roll
New approach could have big effects on cost, safety, security, and time to market.

The post V2X Path To Deployment Still Murky appeared first on Semiconductor Engineering.
