AI/ML’s Role In Design And Test Expands

By Laura Peters, August 5, 2024 at 09:03

The role of AI and ML in test keeps growing, providing significant time and money savings that often exceed initial expectations. But it doesn’t work in all cases, sometimes even disrupting well-tested process flows with questionable return on investment.

One of the big attractions of AI is its ability to apply analytics to large data sets that are otherwise limited by human capabilities. In the critical design-to-test realm, AI can address problems such as tool incompatibilities between the design set-up, simulation, and ATE test program, which typically slows debugging and development efforts. Some of the most time-consuming and costly aspects of design-to-test arise from incompatibilities between tools.

“During device bring-up and debug, complex software/hardware interactions can expose the need for domain knowledge from multiple teams or stakeholders, who may not be familiar with each other’s tools,” said Richard Fanning, lead software engineer at Teradyne. “Any time spent doing conversions or debugging differences in these set-ups is time wasted. Our toolset targets this exact problem by allowing all set-ups to use the same set of source files so everyone can be sure they are running the same thing.”

ML/AI can help keep design teams on track, as well. “As we drive down this technology curve, the analytics and the compute infrastructure that we have to bring to bear becomes increasingly more complex and you want to be able to make the right decision with a minimal amount of overkill,” said Ken Butler, senior director of business development in the ACS data analytics platform group at Advantest. “In some cases, we are customizing the test solution on a die-by-die type of basis.”

But despite the hype, not all tools work well in every circumstance. “AI has some great capabilities, but it’s really just a tool,” said Ron Press, senior director of technology enablement at Siemens Digital Industries Software, in a recent presentation at a MEPTEC event. “We still need engineering innovation. So sometimes people write about how AI is going to take away everybody’s job. I don’t see that at all. We have more complex designs and scaling in our designs. We need to get the same work done even faster by using AI as a tool to get us there.”

Speeding design to characterization to first silicon
In the face of ever-shrinking process windows and the lowest allowable defectivity rates, chipmakers continually are improving the design-to-test processes to ensure maximum efficiency during device bring-up and into high volume manufacturing. “Analytics in test operations is not a new thing. This industry has a history of analyzing test data and making product decisions for more than 30 years,” said Advantest’s Butler. “What is different now is that we’re moving to increasingly smaller geometries, advanced packaging technologies and chiplet-based designs. And that’s driving us to change the nature of the type of analytics that we do, both in terms of the software and the hardware infrastructure. But from a production test viewpoint, we’re still kind of in the early days of our journey with AI and test.”

Nonetheless, early adopters are building out the infrastructure needed for in-line compute and AI/ML modeling to support real-time inferencing in test cells. And because no one company has all the expertise needed in-house, partnerships and libraries of applications are being developed with tool-to-tool compatibility in mind.

“Protocol libraries provide out-of-the-box solutions for communicating common protocols. This reduces the development and debug effort for device communication,” said Teradyne’s Fanning. “We have seen situations where a test engineer has been tasked with talking to a new protocol interface, and saved significant time using this feature.”

In fact, data compatibility is a consistent theme, from design all the way through to the latest developments in ATE hardware and software. “Using the same test sequences between characterization and production has become key as the device complexity has increased exponentially,” explained Teradyne’s Fanning. “Partnerships with EDA tool and IP vendors is also key. We have worked extensively with industry leaders to ensure that the libraries and test files they output are formats our system can utilize directly. These tools also have device knowledge that our toolset does not. This is why the remote connect feature is key, because our partners can provide context-specific tools that are powerful during production debug. Being able to use these tools real-time without having to reproduce a setup or use case in a different environment has been a game changer.”

Serial scan test
But if it seems as if all the configuration changes are happening on the test side, it’s important to take stock of substantial changes in the approach to multi-core design for test.

Tradeoffs during the iterative process of design for test (DFT) have become so substantial in the case of multi-core products that a new approach has become necessary.

“If we look at the way a design is typically put together today, you have multiple cores that are going to be produced at different times,” said Siemens’ Press. “You need to have an idea of how many I/O pins you need to get your scan channels, the deep serial memory from the tester that’s going to be feeding through your I/O pins to this core. So I have a bunch of variables I need to trade off. I have the number of pins going to the core, the pattern size, and the complexity of the core. Then I’ll try to figure out what’s the best combination of cores to test together in what is called hierarchical DFT. But as these designs get more complex, with upwards of 2,500 cores, that’s a lot of tradeoffs to figure out.”

Press noted that applying AI within the existing architecture can provide 20% to 30% higher efficiency, but an improved methodology based on packetized scan test (see figure 1) actually makes more sense.


Fig. 1: Advantages to the serial scan network (SSN) approach. Source: Siemens

“Instead of having tester channels feeding into the scan channels that go to each core, you have a packetized bus and packets of data that feed through all the cores. Then you instruct the cores when their packet information is going to be available. By doing this, you don’t have as many variables you need to trade off,” he said. At the core level, each core can be optimized for any number of scan channels and patterns, and the I/O pin count is no longer a variable in the calculation. “Then, when you put it into the final chip, it delivers from the packets the amount of data you need for that core, and that can work with any size serial bus, in what is called a serial scan network (SSN).”
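
To make the packetized idea more concrete, here is a minimal Python sketch of how scan data on a single fixed-width bus can be sliced up per core. The bus width, core names, and slice assignments are invented for illustration; this is only a conceptual toy, not Siemens’ SSN implementation.

```python
# Toy model of packetized scan delivery: scan data streams over one
# fixed-width bus as packets, and each core extracts its assigned slice,
# independent of the chip's I/O pin count.

BUS_WIDTH_BITS = 32

# Per-core slice assignment within each packet: (start_bit, n_bits).
# A core's internal scan-channel count no longer has to match tester channels.
SLICE_MAP = {
    "cpu_core": (0, 16),
    "gpu_core": (16, 12),
    "dsp_core": (28, 4),
}

def distribute_packets(packets, slice_map):
    """packets: list of ints, each holding BUS_WIDTH_BITS of scan data.
    Returns {core: list of per-packet payload values} taken from its slice."""
    delivered = {core: [] for core in slice_map}
    for packet in packets:
        for core, (start, n_bits) in slice_map.items():
            payload = (packet >> start) & ((1 << n_bits) - 1)
            delivered[core].append(payload)
    return delivered

packets = [0xDEADBEEF, 0x0BADF00D, 0x12345678]
for core, data in distribute_packets(packets, SLICE_MAP).items():
    print(core, [hex(d) for d in data])
```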

Some of the results reported by Siemens EDA customers (see figure 2) highlight both supervised and unsupervised machine learning implementation for improvements in diagnosis resolution and failure analysis. DFT productivity was boosted by 5 to 10X using the serial scan network methodology.


Fig. 2: Realized benefits using machine learning and the serial scan network approach. Source: Siemens

What slows down AI implementation in HVM?
In the transition from design to testing of a device, the application of machine learning algorithms can enable a number of advantages, from better pairing of chiplet performance for use in an advanced package to test time reduction. For example, only a subset of high-performing devices may require burn-in.

“You can identify scratches on wafers, and then bin out the dies surrounding those scratches automatically within wafer sort,” said Michael Schuldenfrei, fellow at NI/Emerson Test & Measurement. “So AI and ML all sounds like a really great idea, and there are many applications where it makes sense to use AI. The big question is, why isn’t it really happening frequently and at-scale? The answer to that goes into the complexity of building and deploying these solutions.”
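
As a concrete illustration of that wafer-sort use case, the short Python sketch below inks out the good die adjacent to die already flagged as sitting on a scratch. The wafer-map representation and bin codes are assumptions made for this example; production wafer-map formats and binning rules differ.

```python
# Hypothetical wafer-sort step: downgrade good die neighboring scratch-flagged die.
GOOD, SCRATCH, NEIGHBOR_INK = 1, 7, 8

def ink_scratch_neighbors(wafer_map):
    """wafer_map: dict {(x, y): bin_code}. Returns an updated copy."""
    out = dict(wafer_map)
    for (x, y), code in wafer_map.items():
        if code != SCRATCH:
            continue
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                die = (x + dx, y + dy)
                if out.get(die) == GOOD:
                    out[die] = NEIGHBOR_INK   # ink out the adjacent good die
    return out

wmap = {(0, 0): GOOD, (0, 1): SCRATCH, (0, 2): GOOD, (1, 1): GOOD}
print(ink_scratch_neighbors(wmap))
```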

Schuldenfrei summarized four key steps in ML’s lifecycle, each with its own challenges. In the first phase, the training, engineering teams use data to understand a particular issue and then build a model that can be used to predict an outcome associated with that issue. Once the model is validated and the team wants to deploy it in the production environment, it needs to be integrated with the existing equipment, such as a tester or manufacturing execution system (MES). Models also mature and evolve over time, requiring frequent validation of the data going into the model and checking to see that the model is functioning as expected. Models also must adapt, requiring redeployment, learning, acting, validating and adapting, in a continuous circle.
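
The “validate and adapt” portion of that lifecycle can be as simple in principle as monitoring whether production data still looks like the training data. The sketch below illustrates one hypothetical drift check on a single feature; the threshold, feature values, and retraining trigger are invented for illustration.

```python
# Minimal drift check: compare production mean against the training
# distribution and flag the model for revalidation/retraining if it drifts.
import statistics

def drift_check(training_values, production_values, z_threshold=3.0):
    """Return True if the production mean sits more than z_threshold
    standard errors away from the training mean."""
    mu = statistics.mean(training_values)
    sigma = statistics.stdev(training_values)
    prod_mu = statistics.mean(production_values)
    n = len(production_values)
    z = abs(prod_mu - mu) / (sigma / n ** 0.5)
    return z > z_threshold

train = [1.02, 0.98, 1.01, 0.99, 1.00, 1.03, 0.97]
prod  = [1.10, 1.12, 1.09, 1.11, 1.13]
if drift_check(train, prod):
    print("feature drift detected -> schedule model revalidation/retraining")
```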

“That eats up a lot of time for the data scientists who are charged with deploying all these new AI-based solutions in their organizations. Time is also wasted in the beginning when they are trying to access the right data, organizing it, connecting it all together, making sense of it, and extracting features from it that actually make sense,” said Schuldenfrei.

Further difficulties are introduced in a distributed semiconductor manufacturing environment in which many different test houses are situated in various locations around the globe. “By the time you finish implementing the ML solution, your model is stale and your product is probably no longer bleeding edge so it has lost its actionability, when the model needs to make a decision that actually impacts either the binning or the processing of that particular device,” said Schuldenfrei. “So actually deploying ML-based solutions in a production environment with high-volume semiconductor test is very far from trivial.”

He cited a 2015 paper from Google researchers showing that the ML code development part of the process is both the smallest and easiest part of the whole exercise, [1] whereas the various aspects of building infrastructure, data collection, feature extraction, data verification, and managing model deployments are the most challenging parts.

Changes from design through test ripple through the ecosystem. “People who work in EDA put lots of effort into design rule checking (DRC), meaning we’re checking that the work we’ve done and the design structure are safe to move forward because we didn’t mess anything up in the process,” said Siemens’ Press. “That’s really important with AI — what we call verifiability. If we have some type of AI running and giving us a result, we have to make sure that result is safe. This really affects the people doing the design, the DFT group and the people in test engineering that have to take these patterns and apply them.”

There are a multitude of ML-based applications for improving test operations. Advantest’s Butler highlighted some of the apps customers are pursuing most often, including search time reduction, shift left testing, test time reduction, and chiplet pairing (see figure 3).

“For minimum voltage, maximum frequency, or trim tests, you tend to set a lower limit and an upper limit for your search, and then you’re going to search across there in order to be able to find your minimum voltage for this particular device,” he said. “Those limits are set based on process split, and they may be fairly wide. But if you have analytics that you can bring to bear, then the AI- or ML-type techniques can basically tell you where this die lies on the process spectrum. Perhaps it was fed forward from an earlier insertion, and perhaps you combine it with what you’re doing at the current insertion. That kind of inference can help you narrow the search limits and speed up that test. A lot of people are very interested in this application, and some folks are doing it in production to reduce search time for test time-intensive tests.”
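
The search-time savings come from shrinking the voltage window the tester has to sweep. A simplified Python sketch of a binary Vmin search shows the effect; the pass/fail model, limits, and per-die prediction are assumptions for illustration only.

```python
# Sketch of search-time reduction: binary search for minimum passing voltage,
# with limits optionally narrowed by a prediction fed forward from an earlier
# insertion. In production a guard-band retest would protect against a bad prediction.

def device_passes(vdd, true_vmin=0.68):
    """Stand-in for the actual ATE pass/fail measurement at voltage vdd."""
    return vdd >= true_vmin

def search_vmin(lo, hi, resolution=0.005):
    steps = 0
    while hi - lo > resolution:
        mid = (lo + hi) / 2.0
        steps += 1
        if device_passes(mid):
            hi = mid          # passed -> Vmin is at or below mid
        else:
            lo = mid          # failed -> Vmin is above mid
    return hi, steps

# Wide limits set from process split vs. limits narrowed by a per-die prediction.
wide = search_vmin(0.50, 0.90)
narrow = search_vmin(0.65, 0.72)     # e.g., prediction says this die is near-nominal
print("wide limits:", wide, "narrowed limits:", narrow)
```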


Fig. 3: Opportunities for real-time and/or post-test improvements to pair or bin devices, improve yield, throughput, reliability or cost using the ACS platform. Source: Advantest

“The idea behind shift left is perhaps I have a very expensive test insertion downstream or a high package cost,” Butler said. “If my yield is not where I want it to be, then I can use analytics at earlier insertions to be able to try to predict which devices are likely to fail at the later insertion by doing analysis at an earlier insertion, and then downgrade or scrap those die in order to optimize downstream test insertions, raising the yield and lowering overall cost. Test time reduction is very simply the addition or removal of test content, skipping tests to reduce cost. Or you might want to add test content for yield improvement,” said Butler.

“If I have a multi-tiered device, and it’s not going to pass bin 1 criteria – but maybe it’s bin 2 if I add some additional content — then people may be looking at analytics to try to make those decisions. Finally, two things go together in my mind, this idea of chiplet designs and smart pairing. So the classic example is a processor die with a stack of high bandwidth memory on top of it. Perhaps I’m interested in high performance in some applications and low power in others. I want to be able to match the content and classify die as they’re coming through the test operation, and then downstream do pick-and-place and put them together in such a way that I maximize the yield for multiple streams of data. Similar kinds of things apply for achieving a low power footprint and carbon footprint.”
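
A minimal sketch of the smart-pairing idea might look like the following, where logic die and HBM stacks are each ranked by measured performance and matched greedily. The data, units, and the sort-and-zip matching are illustrative assumptions, not a production pick-and-place algorithm.

```python
# Toy chiplet pairing: match fastest logic die with fastest memory stacks so
# the high-performance stream gets well-matched parts.

def pair_chiplets(logic_dies, hbm_stacks):
    """Each input is a list of (die_id, performance_metric). Pair fastest with fastest."""
    logic_sorted = sorted(logic_dies, key=lambda d: d[1], reverse=True)
    hbm_sorted = sorted(hbm_stacks, key=lambda d: d[1], reverse=True)
    return list(zip(logic_sorted, hbm_sorted))

logic = [("L1", 3.2), ("L2", 2.8), ("L3", 3.5)]          # Fmax in GHz
hbm   = [("H1", 6.4), ("H2", 7.2), ("H3", 5.9)]          # data rate in Gbps/pin
for (ldie, lf), (hdie, hf) in pair_chiplets(logic, hbm):
    print(f"{ldie} ({lf} GHz) + {hdie} ({hf} Gbps/pin)")
```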

Generative AI
The question that inevitably comes up when discussing the role of AI in semiconductors is whether or not large language models like ChatGPT can prove useful to engineers working in fabs. Early work shows some promise.

“For example, you can ask the system to build an outlier detection model for you that looks for parts that are five sigma away from the center line, saying ‘Please create the script for me,’ and the system will create the script. These are the kinds of automated, generative AI-based solutions that we’re already playing with,” says Schuldenfrei. “But from everything I’ve seen so far, there is still quite a lot of work to be done to get these systems to provide outputs with high enough quality. At the moment, the amount of human interaction that is needed afterward to fix problems with the algorithms or models that generative AI is producing is still quite significant.”
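
For reference, the kind of script being described, flagging parts more than five sigma from the center line, can be quite short. The version below is a hypothetical example of what such a generated script might look like; the column name and synthetic data are assumptions.

```python
# Flag parts more than five sigma from the center line of a parametric test.
import random
import statistics

def five_sigma_outliers(rows, value_key="idd_ma"):
    values = [float(r[value_key]) for r in rows]
    center = statistics.mean(values)
    sigma = statistics.stdev(values)
    lo, hi = center - 5 * sigma, center + 5 * sigma
    return [r for r in rows if not (lo <= float(r[value_key]) <= hi)]

random.seed(0)
rows = [{"part_id": f"D{i}", "idd_ma": random.gauss(12.0, 0.1)} for i in range(200)]
rows.append({"part_id": "D200", "idd_ma": 25.0})   # a gross outlier
print([r["part_id"] for r in five_sigma_outliers(rows)])
```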

A lingering question is how to access the test programs needed to train models that generate new test programs when everyone is protecting important test IP. “Most people value their test IP and don’t necessarily want to set up guardrails around the training and utilization processes,” Butler said. “So finding a way to accelerate the overall process of developing test programs while protecting IP is the challenge. It’s clear this kind of technology is going to be brought to bear, just like we already see in the software development process.”

Failure analysis
Failure analysis is typically a costly and time-consuming endeavor for fabs because it requires a trip back in time to gather wafer processing, assembly, and packaging data specific to a particular failed device, known as a returned material authorization (RMA). Physical failure analysis is performed in an FA lab, using a variety of tools to trace the root cause of the failure.

While scan diagnostic data has been used for decades, a newer approach involves pairing a digital twin with scan diagnostics data to find the root cause of failures.

“Within test, we have a digital twin that does root cause deconvolution based on scan failure diagnosis. So instead of having to look at the physical device and spend time trying to figure out the root cause, since we have scan, we have millions and millions of virtual sample points,” said Siemens’ Press. “We can reverse-engineer what we did to create the patterns and figure out where the mis-compare happened within the scan cells deep within the design. Using YieldInsight and unsupervised machine learning with training on a bunch of data, we can very quickly pinpoint the fail locations. This allows us to run thousands, or tens of thousands, of fail diagnoses in a short period of time, giving us the opportunity to identify the systematic yield limiters.”
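
Conceptually, mining volume scan-diagnosis results for systematic yield limiters amounts to finding suspects that repeat across many failing die far more often than random defects would. The sketch below shows that counting step in schematic form; the data format and threshold are invented, and this is not a description of YieldInsight itself.

```python
# Count how often each diagnosis suspect (net, layer) is called out across
# many failing die, and surface the repeaters as systematic candidates.
from collections import Counter

def systematic_suspects(diagnoses, min_count=5):
    """diagnoses: list of per-die diagnosis reports, each a list of
    (suspect_net, layer) callouts. Returns suspects seen on >= min_count die."""
    counts = Counter()
    for report in diagnoses:
        for suspect in set(report):          # count each suspect once per die
            counts[suspect] += 1
    return [(s, n) for s, n in counts.most_common() if n >= min_count]

diagnoses = [[("net_481/M2", "M2")] for _ in range(6)]          # same suspect on 6 die
diagnoses += [[("net_%03d/M3" % i, "M3")] for i in range(20)]   # one-off random defects
print(systematic_suspects(diagnoses))
```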

Yet another approach that is gaining steam is using on-die monitors to access specific performance information in lieu of physical FA. “What is needed is deep data from inside the package to monitor performance and reliability continuously, which is what we provide,” said Alex Burlak, vice president of test and analytics at proteanTecs. “For example, if the suspected failure is from the chiplet interconnect, we can help the analysis using deep data coming from on-chip agents instead of taking the device out of context and into the lab (where you may or may not be able to reproduce the problem). Even more, the ability to send back data and not the device can in many cases pinpoint the problem, saving the expensive RMA and failure analysis procedure.”

Conclusion
The enthusiasm around AI and machine learning is being met by robust infrastructure changes in the ATE community to accommodate the need for real-time inferencing of test data and test optimization for higher yield, higher throughput, and chiplet classifications for multi-chiplet packages. For multi-core designs, packetized test, commercialized as an SSN methodology, provides a more flexible approach to optimizing the number of scan chains, patterns, and bus width for each core in a device.

The number of testing applications that can benefit from AI continues to rise, including test time reduction, Vmin/Fmax search reduction, shift left, smart pairing of chiplets, and overall power reduction. New developments like identical source files for all setups across design, characterization, and test help speed the critical debug and development stage for new products.

Reference

  1. D. Sculley, et al., “Hidden Technical Debt in Machine Learning Systems,” NeurIPS 2015. https://proceedings.neurips.cc/paper_files/paper/2015/file/86df7dcfd896fcaf2674f757a2463eba-Paper.pdf


What Works Best For Chiplets

By Anne Meixner, April 18, 2024 at 09:08

The semiconductor industry is preparing for the migration from proprietary chiplet-based systems to a more open chiplet ecosystem, in which chiplets fabricated by different companies of various technologies and device nodes can be integrated in a single package with acceptable yield.

To make this work as expected, the chip industry will have to solve a variety of well-documented technical and business issues, and it will have to rein in some of the grander visions of what’s possible — at least initially. The basic challenge is aligning domain-specific performance demands of end systems, which contain a growing number of chiplets, with the assembly and packaging capabilities and methodologies of IDMs, foundries, and OSATs. This includes the creation of assembly design kits (ADKs) that are roughly the equivalent of process development kits (PDKs), which today are codified with manufacturing specifications.

A PDK provides the appropriate level of detail needed to develop planar chips, marrying design tools with fab processes to achieve a predictable outcome. But making this work for an ADK with heterogeneous chiplets is many times more complex. Design and assembly teams need to manage thermal, mechanical, and electrical co-dependencies that cause electrical and mechanical stress, resulting in warpage, reduced yield, and reliability issues under real-world workloads. Layered on top of this are the business and legal issues related to packaging different devices from different manufacturers.

“Chiplets are a growing trend, especially in the HPC and networking segments, with potential to scale to other applications,” said Gabriela Pereira, technology and market analyst for semiconductor packaging at Yole Intelligence. “The industry has understood that high-end advanced packaging technologies are needed to connect them — but that’s much more complex than it seems. Connecting chiplets requires the design of high-bandwidth interconnections at the package level, which can take different forms — e.g., 2D, 2.5D or 3D — while ensuring that the thermal and power requirements are fulfilled.”

Commercial chiplet-based devices generally are domain-specific, and sometimes developed for a specific workload. So despite a big industry push to create a LEGO-like mix-and-match ecosystem for chiplets — which today includes multiple IP and EDA vendors, foundries, memory suppliers, OSATs, substrate suppliers, etc. — making this work as planned will require time and a massive amount of work.

Fig. 1: System assembly requires tighter coupling between chipmakers and OSATs. Source: ASE

In creating heterogeneous integrated designs, it’s essential to have much tighter collaboration between foundries, IDMs, OSATs, and PCB manufacturers. And because each chiplet-based system will be customized, the number of assembly processes will grow substantially. For example, one OSAT noted that among its ~5,000 customers, there are ~1,000 different assembly processes.

That diversity in products and processes makes it difficult to achieve predictable results by choosing chiplets from a large menu of options.

“We’ve already encountered a lot of limitations including not only the silicon, but also integration and the ecosystem,” said Lihong Cao, senior director at ASE Group, at MEPTEC’s Road to Chiplets forum. She stressed that customers continue to push for a low-cost chiplet assembly process, which is creating constructive tension between developing a sophisticated assembly process and the economic realities of different industry sectors. Computing devices for automotive have a higher cost sensitivity than for data centers, for example, but their chips operate in a harsher environment over a longer lifetime.

What’s needed is a defined set of assembly process recipes — basically, a highly limited menu of choices — that are specific to the end application (HPC, automotive, RF telecommunications) in order to lower the cost of chiplet-based systems. OSATs and foundries already are moving in that direction for high-performance computing. For example, at its 2024 Direct Connect event, Intel shared its six different package processes for chiplets. TSMC and Samsung also offer defined sets of chiplet processes. But the success of these assembly processes requires engineering teams to co-optimize the flows, processes, and materials to best match the system requirements.

Fig. 2: Integrated platform development requires tightly coupled architectural analysis that co-optimizes the system design to architecture to assembly process and packaging material selections. Source: Applied Materials

“Previously, when we designed a system we only had to be worried about the system requirements. Once we start segregating into dies and reassembling them, we have to start looking at other things. We have to worry about putting them together while considering signal integrity between dies, reliability, thermals, etc.,” said Itai Leshniak, director of AI systems solutions at Applied Materials, at the MEPTEC forum. “If we take the case of AI-based computer vision, we can break it down layer by layer — on the hardware side, determining which computer vision processors, sensors, filters are needed to break it down into the architecture at layer. Then we begin to go through how to package all these chiplets, and then which materials to use and how to take advantage of those materials.”

Materials and assembly processes
Conceptually, design engineers will use chiplets to design a system. However, the co-design and integration is far more complicated than assembling a set of LEGO blocks, because the chiplets, interposers, and package substrates come from different design houses and manufacturing facilities. The advanced packaging technologies used to connect chiplets vary with an alphabet soup of names — FOWLP, FOPLP, CoWoS, etc., each of which poses additional design and material choices along with certain process limitations.

Fig. 3: There are a multitude of choices in multi-die packaging from the high-level layout to substrates, materials, bonding methods, and cooling materials. Source: Synopsys

Currently engineering teams determine the tradeoffs among the different packaging options to select materials, derive a process recipe, and determine design rules.

Materials are a good starting point. “Materials are very important because they enable new products and packaging technologies,” said Tanja Braun, deputy group manager at the Fraunhofer Institute for Reliability and Microintegration IZM. “As you move into more advanced packaging, process is getting much more complex because you are putting more things together. In the end, it’s a combination of equipment, materials, and process development.”

There are three thermal parameters that are critical in package assembly processes — coefficients of thermal expansion (CTE), glass transition temperature (Tg), and thermal conductivity. These factors affect how a material behaves during manufacturing and packaging processes, as well as how it behaves in the field.

“Challenges for our materials include temperature limitations of different die,” said Rama Puligadda, CTO at Brewer Science. “We have to ensure that the temperatures used for bonding materials don’t exceed the thermal limitations of any of the chips that are being integrated into the package. Additionally, there may be some subsequent processes like redistribution layer (RDL) formation or molding. Our materials have to survive those processes. They have to survive the chemicals they come in contact with throughout the packaging process scheme. Mechanical stresses in the package add additional challenges for bonding materials.”

Within a stack of chiplets-on-substrate, with an optional interposer, the material attributes affect the thermal-mechanical stresses between neighboring materials, as well. This directly impacts interconnect dimensional control over a large substrate area.

“If you go work the numbers, you will find that the level of tolerance and control required is frightening,” said Dick Otte, CEO of Promex Industries. “You’re talking about controlling dimensions equivalent to the width of a grass blade over the length of a football field, so that’s roughly 1 in 100,000.”

The goal is uniform heating of the structure in reflow in order to attain the best process results and to avoid cracking. “When you’re taking it through a 250 degrees centigrade temperature change, then you need to heat up slowly so that the top doesn’t get hot before the bottom does,” said Otte.
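
Working the numbers with rough textbook values shows why. The first-order sketch below estimates the strain and relative displacement from a CTE mismatch across a reflow temperature swing; the material constants, temperature range, and package span are illustrative assumptions, not data from any specific product.

```python
# Back-of-the-envelope CTE-mismatch estimate for two joined materials.

def cte_mismatch(alpha_a_ppm, alpha_b_ppm, delta_t_c, span_mm):
    """Return (strain, displacement_um) from the CTE difference over delta_t_c."""
    strain = abs(alpha_a_ppm - alpha_b_ppm) * 1e-6 * delta_t_c
    displacement_um = strain * span_mm * 1e3
    return strain, displacement_um

# Silicon (~2.6 ppm/C) on an organic substrate (~15 ppm/C), a 25C -> 250C reflow
# swing, across a 50 mm package span.
strain, disp = cte_mismatch(2.6, 15.0, 225.0, 50.0)
print(f"mismatch strain ~{strain:.2e}, relative displacement ~{disp:.0f} um")
```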

Multi-physics to comprehend co-optimization
Multi-physics modeling has become the go-to method for co-optimizing packaging design and assembly process development. That affects both permanent and temporary materials, as well the placement of processors, memories, and other components.

“You’re always looking at what the customer needs electrically, because that’s going to help define the material set. The material set is broadly applicable to a bunch of speed ranges. As long as you don’t step outside of those electrical specifications, theoretically you should be okay,” said Mike Kelly, vice president of advanced package and technology integration at Amkor Technology.

To save many iterations of empirically based development, engineers can use physics-based simulations to understand the impact of a material set’s properties on the assembly process, power/thermals, and mechanical vibrations.

Consider that HPC chiplet products can consume ~1,000 watts at peak performance, so the power and thermal interactions need to be fully understood.
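
A quick steady-state calculation makes the point. Using the simple model T_junction = T_ambient + P x theta_JA, the sketch below shows the junction-to-ambient thermal resistance a cooling solution would need at various power levels; the temperature limits are illustrative assumptions.

```python
# Steady-state thermal sanity check: required junction-to-ambient resistance
# to hold a junction temperature limit at a given package power.

def required_theta_ja(power_w, t_ambient_c=35.0, t_junction_max_c=105.0):
    return (t_junction_max_c - t_ambient_c) / power_w

for p in (250, 500, 1000):
    print(f"{p:>5} W -> theta_JA must be <= {required_theta_ja(p):.3f} C/W")
```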

“We’ve struggled, as everybody has, with this blizzard of complexity in the different techniques. Not only do they vary across different vendors, but they’re also varying over time,” said Marc Swinnen, director of product marketing at Ansys. “Our approach has been to identify the essentials that need to be worked on. We work jointly with customers to develop a simulation flow that actually achieves what is needed now.”

Materials are just one piece of the puzzle. “Then there’s the assembly stresses that need to be modeled to know whether you can correctly assemble this device. The third one is mechanical vibration,” Swinnen said. “Can your device withstand those regular vibrations? Modeling these attributes ties directly into our mechanical analysis tools — acoustic, thermal, vibration, etc. In the end, you’re going to have to do physics simulation. We’re trying to make it accessible to people in many different forms. But the bedrock of our tool offerings is that we have the meshing simulation and analysis. It’s a question of getting the data in the right format in a way that’s practical and usable.”

Evolving assembly design kits
For conventional packages, OSATs provide design rules for each packaging technology. These need to consider electrical, mechanical, and thermal design requirements, as well as manufacturing process limitations. In effect, this is a multi-dimensional bounding box. Suppliers iterate with the customer to create a product-specific process recipe.

Rules cover the macro-level attributes. “At a minimum, what you see from design rules is maximum package size, maximum silicon size, and whether silicon can be [mounted] on both sides of the substrate, such that when you follow these constructions the final product will have a lifetime of 1,000 thermal cycles, for example,” said Fraunhofer’s Braun.

In addition, design rules need to describe routing constraints for the interposer and/or redistribution layer, such as RDL line widths and spaces, ball-grid/pillar/pad size and pitches, and the maximum number of interconnections.

Breaking up a monolithic HPC device into multiple dies shifts some of the semiconductor design/process complexity into the packaging space. That makes things much more complicated. Consider that connecting 10 dies requires on the order of 100,000 traces within the interposer’s or substrate’s redistribution layer.

To cope with the complexity at the chip level, the IC industry has long relied upon process design kits (PDKs) to capture design rules in an electronic file that can be imported into EDA tools. Their counterparts, assembly design kits (ADKs), are relatively immature.

“We call it Smart Package,” said Amkor’s Kelly. “It’s an ADK that we give to every customer who’s doing their own design. It is a set of macros, and a customization of a database tailored to a customer’s particular design. For chiplets, it is a high-density fan-out package technology. And it’s cognizant of the limitations for metal density and metal spacing, etc. This makes it easier for us to do design rule checks (DRCs).”
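
As ADKs mature, one can imagine such limits being expressed as machine-readable rules plus a checker, in the spirit of an IC PDK. The sketch below is purely illustrative; the rule names and values are invented and do not correspond to any OSAT’s actual ADK or DRC deck.

```python
# Hypothetical machine-readable assembly rules plus a trivial checker.

ADK_RULES = {
    "rdl_min_line_um":   2.0,
    "rdl_min_space_um":  2.0,
    "min_bump_pitch_um": 40.0,
    "max_package_mm":    55.0,
}

def check_layout(layout, rules=ADK_RULES):
    """layout: dict with the same keys describing the proposed design.
    Returns a list of human-readable violations (empty means clean)."""
    violations = []
    for key in ("rdl_min_line_um", "rdl_min_space_um", "min_bump_pitch_um"):
        if layout[key] < rules[key]:
            violations.append(f"{key}: {layout[key]} < rule {rules[key]}")
    if layout["max_package_mm"] > rules["max_package_mm"]:
        violations.append(f"max_package_mm: {layout['max_package_mm']} > rule {rules['max_package_mm']}")
    return violations

proposed = {"rdl_min_line_um": 1.5, "rdl_min_space_um": 2.0,
            "min_bump_pitch_um": 36.0, "max_package_mm": 50.0}
print(check_layout(proposed))
```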

But right now, with the level of customization still required, how an ADK is derived and what it entails is in flux. Partnerships between EDA tool vendors, OSATs, and semiconductor device providers are required.

“We come from the IC world where everything is very rigid,” said Kenneth Larsen, director of 3D-IC product management in Synopsys’ EDA Group. “On the OSAT side, and maybe this is because it’s so custom, design rules seem like a data sheet. Then you build and optimize the products over time or in collaboration with the OSAT. It’s not an electronic exchange. In the IC world, this would be totally unheard of. While it is possible to tweak a few things, you have a qualification process. And it seems like that’s not there yet for packaging.”

Materials and associated assembly recipes ultimately drive what’s possible for a chiplet-substrate stack in terms of pillar pitch, RDL line widths and spaces, bonding processes, and chiplet placement tolerances. But within a handful of ADKs, there are many possible interactions to consider.

The current focus is on co-optimizing the system design with the chiplet assembly process, leading to an assembly process development flow (see figure 4). This flow considers the needs of customization of an assembly process, and it creates the necessary design rules to be used by package designers.

Fig. 4: Chip-package hybrid flow. Source: ASE

“First you need to define your structure using chiplets. Are you using substrate RDL, 2.5D RDL, or a bridge? After that you need to consider your structure’s materials. What kind of material do you choose to fulfill your electrical performance and mechanical stress requirements?” said Cao. “After that, you do pre-analysis to ensure all the structures and materials you use are workable in terms of electrical, warpage, and mechanical stress.”

The design planning flow also includes evaluation of the die-to-die interconnects, which is documented for co-design sign-off.

Conclusion
Before chiplet-based designs can be enabled outside the IDM model, the industry needs to complete the ecosystem that bridges the manufacturing and design complexity. That’s because co-optimizing the system architecture around materials, process, and integration capabilities is essential. This would be easier with a set of well-defined products for the chiplet ecosystem to drive toward, but that has not happened yet.

Engineering teams across the design and manufacturing stack will need to collaborate to choose the appropriate materials, architectures, processes, etc., to develop a final chiplet-based product that is designable. As ASE Group’s Cao noted, “An integrated design and manufacturing ecosystem is important. It is very critical to have collaboration among IDM, vendors, materials suppliers. Everyone needs to work together to really enable integration for the real applications.”

Related stories
Fan-Out Packaging Gets Competitive
Manufacturability reaches sufficient level to compete with flip-chip BGA and 2.5D.

Inside Panel-Level Fan-Out Technology
Fraunhofer’s panel experts dig into why this approach is needed and where the challenges are to making it work.

Next Steps For Panel-Level Packaging
Where it’s working, and what challenges remain for even broader adoption.

Mini-Consortia Forming Around Chiplets
Commercial chiplet marketplaces are still on the distant horizon, but companies are getting an early start with more limited partnerships.

What Can Go Wrong In Heterogeneous Integration
Workflows and tools are disconnected, mechanical stress is ill-defined, and complete co-planarity is nearly impossible. But there are solutions on the horizon.

Mechanical Challenges Rise With Heterogeneous Integration
But gaps in tools make it difficult to address warpage, structural issues, and new materials in multi-die/multi-chiplet designs.

Chiplets: 2023 (EBook)
What chiplets are, what they are being used for today, and what they will be used for in the future.

