3.5D: The Great Compromise

By Ed Sperling, 21 August 2024, 09:01

The semiconductor industry is converging on 3.5D as the next best option in advanced packaging, a hybrid approach that includes stacking logic chiplets and bonding them separately to a substrate shared by other components.

This assembly model satisfies the need for big increases in performance while sidestepping some of the thorniest issues in heterogeneous integration. It establishes a middle ground between 2.5D, which already is in widespread use inside of data centers, and full 3D-ICs, which the chip industry has been struggling to commercialize for the better part of a decade.

A 3.5D architecture offers several key advantages:

  • It creates enough physical separation to effectively address thermal dissipation and noise.
  • It provides a way to add more SRAM into high-speed designs. SRAM has been the go-to choice for processor cache since the mid-1960s, and remains an essential element for faster processing. But SRAM no longer scales at the same rate as digital transistors, so it is consuming more real estate (in percentage terms) at each new node. And because the size of a reticle is fixed, the best available option is to add area by stacking chiplets vertically.
  • By thinning the interface between processing elements and memory, a 3.5D approach also can shorten the distances that signals need to travel and greatly improve processing speeds well beyond a planar implementation. This is essential with large language models and AI/ML, where the amount of data that needs to be processed quickly is exploding.

Chipmakers still point to fully integrated 3D-ICs as the best performing alternative to a planar SoC, but packing everything into a 3D configuration makes it harder to deal with physical effects. Thermal dissipation is probably the most difficult to contend with. Workloads can vary significantly, creating dynamic thermal gradients and trapping heat in unexpected places, which in turn reduce the lifespan and reliability of chips. On top of that, power and substrate noise become more problematic at each new node, as do concerns about electromagnetic interference.

“What the market has adopted first is high-performance chips, and those produce a lot of heat,” said Marc Swinnen, director of product marketing at Ansys. “They have gone for expensive cooling systems with a huge number of fans and heat sinks, and they have opted for silicon interposers, which arguably are some of the most expensive technologies for connecting chips together. But it also gives the highest performance and is very good for thermal because it matches the coefficient of thermal expansion. Thermal is one of the big reasons that’s been successful. In addition to that, you may want bigger systems with more stuff that you can’t fit on one chip. That’s just a reticle-size limitation. Another is heterogeneous integration, where you want multiple different processes, like an RF process or the I/O, which don’t need to be in 5nm.”

A 3.5D assembly also provides more flexibility to add processor cores, as well as higher yield, because known good die can be manufactured and tested separately, a concept pioneered by Xilinx in 2011 at 28nm.

3.5D is a loose amalgamation of all these approaches. It can include two to three chiplets stacked on top of each other, or even multiple stacks laid out horizontally.

“It’s limited vertical, and not just for thermal reasons,” said Bill Chen, fellow and senior technical advisor at ASE Group. “It’s also for performance reasons. But thermal is the limiting factor, and we’ve talked about many different materials to help with that — diamond and graphene — but that limitation is still there.”

This is why the most likely combination, at least initially, will be processors stacked on SRAM, which simplifies the cooling. The heat generated by high utilization of different processing elements can be removed with heat sinks or liquid cooling. And with one or more thinned out substrates, signals will travel shorter distances, which in turn uses less power to move data back and forth between processors and memory.

“Most likely, this is going to be logic over memory on a logic process,” said Javier DeLaCruz, fellow and senior director of Silicon Ops Engineering at Arm. “These are all contained within an SoC normally, but a portion of that is going to be SRAM, which does not scale very well from node to node. So having logic over memory and a logic process is really the winning solution, and that’s one of the better use cases for 3D because that’s what really shortens your connectivity. A processor generally doesn’t talk to another processor. They talk to each other through memory, so having the memory on a different floor with no latency between them is pretty attractive.”

The SRAM doesn’t necessarily have to be at the same advanced node as the processors, which also helps with yield and reliability. At a recent Samsung Foundry event, Taejoong Song, the company’s vice president of foundry business development, showed a roadmap of a 3.5D configuration using a 2nm chiplet stacked on a 4nm chiplet next year, and a 1.4nm chiplet on top of a 2nm chiplet in 2027.


Fig. 1: Samsung’s heterogeneous integration roadmap showing stacked DRAM (HBM), chiplets and co-packaged optics. Source: Samsung Foundry

Intel Foundry’s approach is similar in many ways. “Our 3.5D technology is implemented on a substrate with silicon bridges,” said Kevin O’Buckley, senior vice president and general manager of Foundry Services at Intel. “This is not an incredibly costly, low-yielding, multi-reticle form-factor silicon, or even RDL. We’re using thin silicon slices in a much more cost-efficient fashion to enable that die-to-die connectivity — even stacked die-to-die connectivity — through a silicon bridge. So you get the same advantages of silicon density, the same SI (signal integrity) performance of that bridge without having to put a giant monolithic interposer underneath the whole thing, which is both cost- and capacity-prohibitive. It’s working. It’s in the lab and it’s running.”


Fig. 2: Intel’s 3.5D model. Source: Intel

The strategy here is partly evolutionary — 3.5D has been in R&D for at least several years — and partly revolutionary, because thinning out the interconnect layers, handling those thinner layers, and bonding them reliably are still works in progress. There is a potential for warping, cracking, or other latent defects, and dynamically configuring data paths to maximize throughput is an ongoing challenge. But there have been significant advances in thermal management on two- and three-chiplet stacks.

“There will be multiple solutions,” said C.P. Hung, vice president of corporate R&D at ASE. “For example, besides the device itself and an external heat sink, a lot of people will be adding immersion cooling or local liquid cooling. So for the packaging, you can probably also expect to see the implementation of a vapor chamber, which will add a good interface from the device itself to an external heat sink. With all these challenges, we also need to target a different pitch. For example, nowadays you see mass production with a 45 to 40 micron pitch. That is a typical bumping solution. We expect the industry to move to a 25 to 20 micron bump pitch. Then, to go further, we need hybrid bonding, which is a less than 10 micron pitch.”


Fig. 3: Today’s interposers support more than 100,000 I/Os at a 45µm pitch. Source: ASE

Hybrid bonding solves another thorny problem, which is co-planarity across thousands of micro-bumps. “People are starting to realize that the densities we’re interconnecting require a level of flatness, which the guys who make traditional things to be bonded are having a hard time meeting with reasonable yield,” said David Fromm, COO at Promex Industries. “That makes it hard to build them, and the thinking is, ‘So maybe we’ve got to do something else.’ You’re starting to see some of that.”

Taming the Hydra
Managing heat remains a challenge, even with all the latest advances and a 3.5D assembly, but the ability to isolate the thermal effects from other components is the best option available today, and possibly well into the future. Still, there are other issues to contend with. Even 2.5D isn’t easy, and a large percentage of the 2.5D implementations have been bespoke designs by large systems companies with very deep pockets.

One of the big remaining challenges is closing timing so that signals arrive at the right place at exactly the right time. This becomes harder as more elements are added into chips, and in a 3.5D or 3D-IC, this can be incredibly complex.

“Timing ultimately is the key,” said Sutirtha Kabir, R&D director at Synopsys. “It’s not guaranteed that at whatever your temperature is, you can use the same library for timing. So the question is how much thermal- and IR-aware timing do you have to do? These are big systems. You have to make sure your sign-off is converging. There are two things coming out. There are a bunch of multi-physics effects that are all clumped together. And yes, you could traditionally do one at a time as sign-off, but that isn’t going to work very well. You need to figure out how to solve these problems simultaneously. Ultimately, you’re doing one design. It’s not one for thermal, one for IR, one for timing. The second thing is the data is exploding. How do you efficiently handle the data, because you cannot wait for days and days of runtime and simulation and analysis?”

Physically assembling these devices isn’t easy, either. “The challenge here is really in the thermal, electrical, and mechanical connection of all these various die with different thicknesses and different coefficients of thermal expansion,” said Intel’s O’Buckley. “So with three die, you’ve got the die and an active base, and those are substantially thinned to enable them to come together. And then EMIB is in the substrate. There’s always intense thermal-mechanical qualification work done to manage not just the assembly, but to ensure in the final assembly — the second-level assembly when this is going through system-level card attach — that this thing stays together.”

And depending upon demands for speed, the interconnects and interconnect materials can change. “Hybrid bonding gives you, by far, the best signal and power density,” said Arm’s DeLaCruz. “And it gives you the best thermal conductivity, because you don’t have that underfill that you would otherwise have to put in between the die, which is a pretty significant barrier. This is likely where the industry will go. It’s just a matter of having the production base.”

Hybrid bonding has been used for years for image sensors using wafer-on-wafer connections. “The tricky part is going into the logic space, where you’re moving from wafer-on-wafer to a die-on-wafer process, which is more complex,” DeLaCruz said. “While it currently would cost more, that’s a temporary problem because there’s not much of an installed base to support it and drive down the cost. There’s really no expensive material or equipment costs.”

Toward mass customization
All of this is leading toward the goal of choosing chiplets from a menu and then rapidly connecting them into some sort of architecture that is proven to work. That may not materialize for years. But commercial chiplets will show up in advanced designs over the next couple years, most likely in high-bandwidth memory with a customized processor in the stack, with more following that path in the future.

At least part of this will depend on how standardized the processes for designing, manufacturing, and testing become. “We’re seeing a lot of 2.5D from customers able to secure silicon interposers,” said Ruben Fuentes, vice president for the Design Center at Amkor Technology. “These customers want to place their chiplets on an interposer, then the full module is placed on a flip-chip substrate package. We also have customers who say they either don’t want to use a silicon interposer or cannot secure them. They prefer an RDL interconnect with S-SWIFT or with S-Connect, which serves as an interposer in very dense areas.”

But with at least a third of these leading designs only for internal use, and the remainder confined to large processor vendors, the rest of the market hasn’t caught up yet. Once it does, that will drive economies of scale and open the door to more complete assembly design kits, commercial chiplets, and more options for customization.

“Everybody is generally going in the same direction,” said Fuentes. “But not everything is the same height. HBMs are pre-packaged and are taller than ICs. HBMs could have 12 or 16 ICs stacked inside. It makes a difference from a co-planarity and thermal standpoint, and metal balancing on different layers. So now vendors are having a hard time processing all this data because suddenly you have these huge databases that are a lot bigger than the standard packaging databases. We’re seeing bridges, S-Connect, SWIFT, and then S-SWIFT. This is new territory, and we’re seeing a performance gap in the packaging tools. There’s work that needs to be done here, but software vendors have been very proactive in finding solutions. Additionally, these packages need to be routed. There is limited automated routing, so a good amount of interactive routing is still required, so it takes a lot of time.”


Fig. 4: Packaging roadmap showing bridge and hybrid bonding connections for modules and chiplets, respectively. Source: Amkor Technology

What’s missing
The key challenges ahead for 3.5D are proven reliability and customizability — requirements that are seemingly contradictory, and which are beyond the control of any single company. There are four major pieces to making all of this work.

EDA is the first important piece of the puzzle, and the challenge extends beyond just a single chip. “The IC designers have to think about a lot of things concurrently, like thermal, signal integrity, and power integrity,” said Keith Lanier, technical product management director at Synopsys. “But even beyond that, there’s a new paradigm in terms of how people need to work. Traditional packaging folks and IC designers need to work closely together to make these 3.5D designs successful.”

It’s not just about doing more with the same or fewer people. It’s doing more with different people, too. “It’s understanding the architecture definition, the functional requirements, constraints, and having those well-defined,” Lanier said. “But then it’s also feasibility, which includes partitioning and technology selection, and then prototyping and floor-planning. This is lots and lots of data that is required to be generated, and you need analysis-driven exploration, design, and implementation. And AI will be required to help designers and system design teams manage the sheer complexity of these 3.5D designs.”

Process/assembly design kits are a second critical piece, and this is likely to be split between the foundries and the OSATs. “If the customer wants a silicon interposer for a 2.5D package, it would be up to the foundry that’s going to manufacture the interposer to provide the PDK. We would provide the PDK for all of our products, such as S-SWIFT and S-Connect,” said Amkor’s Fuentes.

Setting realistic parameters is the third piece of the puzzle. While the type of processing elements and some of the analog functions may change — particularly those involving power and communication — most of the components will remain the same. That determines what can be pre-built and pre-tested, and the speed and ease of assembly.

“A lot of the standards that are being deployed, like UCIe interfaces and HBM interfaces are heading to where 20% is customization and 80% is on the shelf,” said Intel’s O’Buckley. “But we aren’t there today. At the scale that our customers are deploying these products, the economics of spending that extra time to optimize an implementation is a decimal point. It’s not leveraging 80/20 standards. We’ll get there. But most of these designs you can count on your fingers and toes because of the cost and scale required to do them. And until the infrastructure for standards-based chiplets gets mature, the barrier of entry for companies that want to do this without that scale is just too high. Still, it is going to happen.”

Ensuring processes are consistent is the fourth piece of the puzzle. The tools and the individual processes don’t need to change. “The customer has a ‘target’ for the outcome they want for a particular tool, which typically is a critical dimension measured by a metrology tool,” said David Park, vice president of marketing at Tignis. “As long as there is some ‘measurement’ that determines the goodness of some outcome, which typically is the result of a process step, we can either predict the bad outcome — and engineers have to take some corrective or preventive action — or we can optimize the recipe of that tool in real time to keep the result in the range they want.”

Park noted there is a recipe that controls the inputs. “The tool does whatever it is supposed to do,” he said. “Then you measure the output to see how far you deviated from the acceptable output.”
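What Park describes is a classic closed run-to-run control loop: a recipe sets the tool's inputs, metrology measures how far the output deviated, and the recipe is nudged for the next run. As a concrete illustration of the "optimize the recipe in real time" option, here is a minimal sketch of an exponentially weighted moving average (EWMA) run-to-run controller, a standard technique in fab process control. The linear process model, constants, and names are assumptions for the sketch, not Tignis' algorithm or any real tool interface.

```c
/* Minimal sketch of an EWMA run-to-run controller. A linear process model
 * y = gain * u + offset stands in for the tool; the controller smooths the
 * observed offset after each metrology reading and re-centers the next
 * run's recipe input on target. All names and constants are illustrative. */
#include <stdio.h>

typedef struct {
    double target;  /* desired outcome, e.g. a critical dimension in nm */
    double gain;    /* assumed process sensitivity d(output)/d(input)   */
    double lambda;  /* EWMA weight, 0 < lambda <= 1                     */
    double offset;  /* running estimate of the process offset/drift     */
} R2RController;

/* Fold the latest measurement into the offset estimate, then return the
 * recipe input that should put the next run back on target. */
static double r2r_next_input(R2RController *c, double input_used, double measured)
{
    double observed = measured - c->gain * input_used;   /* this run's apparent offset */
    c->offset = c->lambda * observed + (1.0 - c->lambda) * c->offset;
    return (c->target - c->offset) / c->gain;
}

int main(void)
{
    R2RController ctl = { .target = 45.0, .gain = 1.0, .lambda = 0.3, .offset = 0.0 };
    double u = 45.0;        /* first run uses the nominal recipe input   */
    double drift = 2.0;     /* hidden process drift the loop must cancel */

    for (int run = 1; run <= 5; run++) {
        double y = ctl.gain * u + drift;   /* simulated metrology reading */
        printf("run %d: input=%6.2f  measured=%6.2f\n", run, u, y);
        u = r2r_next_input(&ctl, u, y);    /* adjust recipe for next run  */
    }
    return 0;
}
```

The weight lambda sets the responsiveness/noise tradeoff: small values filter metrology noise, while large values chase drift faster. Park's other option, predicting a bad outcome before it happens, would swap this update for a model that flags when the projected output is about to leave its acceptable range.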

The challenge is that inside of a 3.5D system, what is considered acceptable output is still being defined. There are many processes with different tolerances. Defining what is consistent enough will require a broad understanding of how all the pieces work together under specific workloads, and where the potential weaknesses are that need to be adjusted.

“One of the problems here is as these densities get higher and the copper pillars get smaller, the amount of space you need between the pillar and the substrate has to be highly controlled,” said Dick Otte, president and CEO of Promex. “There’s a conflict — not so much with how you fabricate the chip, because it usually has the copper pillars on it — but with the substrate. A lot of the substrate technologies are not inherently flat. It’s the same issue with glass. You’ve got a really nice flat piece of glass. The first thing you’re going to do is put down a layer of metal and you’re going to pattern it. And then you put down a layer of dielectric, and suddenly you’ve got a lump where the conductor goes. And now, where do you put the contact points? So you always have the one plane, which is going to be the contact point where all the pillars come in. But what if I only need one layer and I don’t need three?”

Conclusion
For the past decade, the chip industry has been trying to figure out a way to balance faster processing, domain-specific designs, limited reticle size, and the enormous cost of scaling an SoC. After investigating nearly every possible packaging approach, interconnect, power delivery method, substrate and dielectric material, 3.5D has emerged as the front runner — at least for now.

This approach provides the chip industry with a common thread on which to begin developing assembly design kits, commercial chiplets, and to fill in the missing tools and services throughout the supply chain. Whether this ultimately becomes a springboard for full 3D-ICs, or a platform on which to use 3D stacking more effectively, remains to be seen. But for the foreseeable future, large chipmakers have converged on a path forward to provide orders of magnitude performance improvements and a way to contain costs. The rest of the industry will be working to smooth out that path for years to come.

Related Reading
Intel Vs. Samsung Vs. TSMC
Foundry competition heats up in three dimensions and with novel technologies as planar scaling benefits diminish.
3D Metrology Meets Its Match In 3D Chips And Packages
Next-generation tools take on precision challenges in three dimensions.
Design Flow Challenged By 3D-IC Process, Thermal Variation
Rethinking traditional workflows by shifting left can help solve persistent problems caused by process and thermal variations.
Floor-Planning Evolves Into The Chiplet Era
Automatically mitigating thermal issues becomes a top priority in heterogeneous designs.

The post 3.5D: The Great Compromise appeared first on Semiconductor Engineering.


Tomba! Special Edition Review

By Justin Grandfield, 14 August 2024, 13:00

The Baconator Returns

HIGH It’s endlessly charming. The new soundtrack is fantastic.

LOW The extra features are not explained. The museum lacks polish.

WTF Tomba keeps items in his stomach, like Snake did with cigarettes in Metal Gear Solid…


Tomba! Special Edition is a reminder of the bygone days of the PS1, when developers took experimental approaches to the then-new Sony console. Tomba!, a 2.5D game from 1997, defied the logic that 3D was where every developer should be heading. What resulted was an experience that still looks beautiful, has a ton of charm and a cult following, and remains enjoyable more than 20 years after its debut.

Tomba! Special Edition is an action-platformer with light RPG elements. The titular character must defeat the evil Koma Pigs to recover his stolen bracelet. Along the way, Tomba will encounter and befriend many creatures and people who need help, while also carrying out objectives to advance the story, finding ways to the evil pigs’ lairs, or opening new pathways to needed items in this fairly non-linear adventure.

The characters are all endearing and charming in their own way. From standard fantasy dwarves to wilder fare like mouse cowboys, each area is packed with unique and colorful characters. Each map is bright and picturesque, and the music has been wonderfully enhanced with a remastered soundtrack that pops.

On the gameplay side, platforming is the star of the show, as Tomba is given various methods to traverse the world, such as a parasol for slowing his fall or a grapple line to grab and swing from various objects. The 2.5D aspect also allows Tomba to go into the background and play in a different part of some levels. This was a pretty clever way to add… depth… to platformers, which often didn’t use background layers like this. In this aspect, Tomba! excels.

To dispatch enemies, Tomba must jump and grab onto them, so that he can then fling them. Sometimes stunning them is necessary first, and combat never became dull since different enemies required unique strategies. The boss pigs in particular were a highlight, as each has their own arena where the objective is to throw them into an Evil Pig Bag. (Yes, it’s called that.)

As Tomba! Special Edition is an updated release, there are some great quality-of-life features that I found incredibly helpful during my time playing. For starters, there’s a helpful rewind feature that allowed me to move the game back anywhere from a few seconds to a few minutes. This allowed me to retry difficult platforming sections without losing progress or health. There’s also a way to save at any time, which made the challenge even more friendly to new players of the series, like myself.

In addition, a museum feature allowed me to view art and documents, such as advertisements and manuals. There were also videos with Tokuro Fujiwara (director and creator of the series) about the development process of the original Tomba! and a music player where any of the tracks can be listened to.

While this new version of an old classic seems great as I’ve described it so far, there are a few issues with both the game and the supplementary material.

For example, with the historical videos, there’s no way to rewind or even pause the playback. This seems like a pretty standard feature in 2024, and the omission of any controls here is pretty annoying.

As for the game itself, it suffers from long load times between areas. In many cases this isn’t really noticeable, but when moving between several short screens in succession, it became an annoyance. There’s also noticeable frame juddering, particularly when weather effects are present.

Also annoying is that the rewind and save features are not explained to the player beforehand. I figured them out by pressing random buttons, which is hardly optimal. New features like these need to be explained, so people will know exactly how to take full advantage of them. (The music also cuts out for a few seconds when using rewind, taking me out of the moment.)

Mechanically, Tomba! Special Edition suffers from some wonky physics, although these issues were present in the original. When swinging between platforms, it’s common to miss the next one due to how easy it is to overshoot an object and how little time there is to correct. Jumping also feels imprecise, and often too floaty.

Finally, mission design is often a bit too obtuse for its own good. For example, sometimes it’s necessary to talk to unassuming NPCs several times despite not having any reason to do so. There’s also a good deal of backtracking to be done, and sometimes I felt frustrated wasting time looking for answers, only to find that something else needed to be done first. Clearly, we’ve learned a lot about signposting and quest structure since the game was originally designed.

Tomba! Special Edition is a charming reminder of the experimental days of the PS1 era, and the cute characters and wonderful soundtrack still appeal. However, the flaws in this port and some of the game’s original issues might make it a bit tough for newcomers to fully embrace this beloved cult classic.

Rating: 7 out of 10


Disclosures: This version of Tomba! is developed and published by Limited Run Games. It is currently available on PS4/5, XBO/X/S, Switch and PC. This copy of the game was obtained via publisher and reviewed on PS5. Approximately 8 hours was devoted to the game, and it was not completed. There is no multiplayer mode.

Parents: This game has an ESRB rating of E10+ for Alcohol Reference, Crude Humor, and Mild Fantasy Violence. The ESRB rating states: “This is an adventure platformer in which players follow a hero (Tomba) attempting to retrieve a stolen keepsake from evil pigs. From a side-scrolling perspective, players traverse whimsical environments while collecting fruit, performing quests, and defeating animal/monster enemies. Players use a spiked ball to knock out enemies; player can also grab and toss pigs into other characters. One mission involves fixing a pump to provide wine for a village. One level depicts pixelated cherub characters urinating on the ground; the cartoony cherubs’ pelvic regions and buttocks are briefly depicted.”

Colorblind Modes: There are no colorblind options.

Deaf & Hard of Hearing Gamers: The game offers subtitles, but only during gameplay, and they cannot be resized. This game is not accessible due to the lack of subtitles during voiced cutscenes.

Remappable Controls: No, this game’s controls are not remappable. A screen will appear before the game is started that explains the controls. On PS5, circle attacks with the equipped weapon, X is for jumping and can be used to scroll through text, square is for interacting with objects and NPCs, triangle opens up the items menu, the touchpad opens the entire menu, L2 opens the rewind menu, and R2 opens the emulation menu. The first areas of the game will also explain them. However, there is no way to reference most of these controls without either restarting the game (since backing out to the main menu is not possible) or going back to the tutorial areas. The rewind and save anywhere features are not explained at all.

Could John Carmack have implemented voxels in the Doom engine?

There are examples of voxel engines from that era, ones that draw columns of pixels according to height and color data.
Could a similar software-rendered voxel approach be implemented in the original Doom or Doom 2 engine? I mean rendering sets of columns instead of sprites. If not, why not?
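As a concrete illustration of the technique the question describes, here is a minimal sketch of a mid-'90s-style height-and-color terrain renderer (in the spirit of NovaLogic's "voxel space" engine from Comanche). This is not Doom source code; the fixed forward-facing camera, the power-of-two map wrap, and all names are assumptions for the sketch. For each screen column, a ray marches front to back across a heightmap, each sampled terrain height is perspective-projected to a screen row, and only the newly exposed pixels are filled:

```c
/* Sketch of a heightmap "voxel" terrain renderer: for every screen column,
 * march a ray away from the camera, project each map cell's height to a
 * screen row, and draw the visible part of that column of voxels. */
#include <stdint.h>

#define MAP_W    1024
#define MAP_H    1024
#define SCREEN_W 320
#define SCREEN_H 200

static uint8_t heightmap[MAP_H][MAP_W];        /* terrain height per cell, 0..255 */
static uint8_t colormap[MAP_H][MAP_W];         /* palette index per cell          */
static uint8_t framebuffer[SCREEN_H][SCREEN_W];

static void render_terrain(float cam_x, float cam_y, float cam_h,
                           float horizon, float height_scale, float max_dist)
{
    for (int col = 0; col < SCREEN_W; col++) {
        /* spread rays across the view; camera looks along +y, no rotation */
        float dx = (col - SCREEN_W / 2) / (float)(SCREEN_W / 2);
        int y_buffer = SCREEN_H;               /* lowest row not yet drawn */

        for (float z = 1.0f; z < max_dist; z += 1.0f) {
            int mx = (int)(cam_x + dx * z) & (MAP_W - 1);  /* wrap map */
            int my = (int)(cam_y + z) & (MAP_H - 1);

            /* perspective projection of this cell's height to a screen row */
            int top = (int)((cam_h - heightmap[my][mx]) / z * height_scale + horizon);
            if (top < 0) top = 0;

            /* fill only the newly exposed pixels; nearer terrain occludes */
            for (int y = top; y < y_buffer; y++)
                framebuffer[y][col] = colormap[my][mx];
            if (top < y_buffer) y_buffer = top;
        }
    }
}
```

Doom's own software renderer also draws walls and sprites as vertical columns of pixels, which is what makes the idea plausible on its face; whether per-column heightmap sampling like this could be grafted into Doom's BSP-driven column pipeline is exactly the open question the poster is asking.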


Epic Games Freebie: Grab DNF Duel for Free Until August 15, 2024!

By Cyberez
Article Reading Time: 4 min
Game: DNF Duel

I’ve got some killer news that’s gonna make your day – Epic Games Store is hooking us up with DNF Duel for free! Yeah, you read that right, absolutely FREE! I’m stoked to share this amazing deal with you, so listen up!

From now until August 15, 2024, we can all grab DNF Duel without spending a single cent. It’s part of Epic’s awesome giveaway program, which blesses us with top-tier games for nothing. How awesome is that?

Experience DNF Duel: A Fighting Game Masterpiece by Arc System Works

Let me break down DNF Duel for you. It’s this insanely intense fighting game that’s got the whole community buzzing. The masterminds at Arc System Works (the fighting game wizards) teamed up with Nexon to deliver this gem. They’ve taken the Dungeon Fighter Online universe we all know and love and transformed it into a knock-out fighting experience that’ll blow your mind.

Imagine yourself glued to your screen, fingers dancing across the controller, as you throw down with a roster of unique characters. The combat system is so deep you could drown in it, the visuals are straight-up gorgeous, and the mechanics are so engaging you’ll forget to eat (don’t do that, though; stay healthy!).

How to Claim Your Free Copy and Save $49.99

  1. Head over to the Epic Games Store and log in (or make an account if you haven’t already).
  2. Search for DNF Duel in the store, or follow this link.
  3. Click on the freebie offer and follow the prompts to add it to your account.

When you snag DNF Duel from the Epic Games Store, you’re not getting some half-baked demo. No way! You’re getting the full package, baby! We’re talking the entire character lineup – each fighter bringing their own sick moves and style to the table. You’ll get to dive into all kinds of game modes, from epic story-driven quests to white-knuckle multiplayer battles where you can flex on your friends.

And let’s talk about the eye candy and ear treats for a second. The graphics are so crisp they’ll make your eyes water, and the soundtrack is so fire that it’ll have you bobbing your head even before the first punch is thrown.

Here’s the kicker – DNF Duel usually costs a whopping $49.99. But right now? It’s all yours for the low price of zero dollars and zero cents! That’s fifty buckaroos staying in your wallet, my friends. Talk about a deal of the century!

Don’t Miss Out: Secure Your Free DNF Duel Before the Deadline

Want to jump on this sweet, sweet offer? It’s easier than pulling off a button-mash combo. Just cruise over to the Epic Games Store, hunt down DNF Duel, and smash that ‘claim’ button before August 15, 2024. Once it’s chillin’ in your library, it’s yours to keep forever. No strings attached, no fine print, just pure gaming goodness.

I’m telling you, sleeping on this offer would be like forfeiting a match before it even starts. Whether you’re a seasoned pro who can pull off frame-perfect combos in your sleep or you’re just looking to dip your toes into the fighting game pool, DNF Duel’s got something for everyone.

So what are you waiting for? The clock’s ticking! Race over to the Epic Games Store and secure your free copy now! Trust me, your future self will thank you when you’re knee-deep in epic battles and mind-blowing combos.

Game on! May your reflexes be quick, your strategies unbeatable, and your victories sweet. See you in the arena!

The post Epic Games Freebie: Grab DNF Duel for Free Until August 15, 2024! appeared first on WePlayGames.net: Home for Top Gamers.


What Works Best For Chiplets

By Anne Meixner, 18 April 2024, 09:08

The semiconductor industry is preparing for the migration from proprietary chiplet-based systems to a more open chiplet ecosystem, in which chiplets fabricated by different companies of various technologies and device nodes can be integrated in a single package with acceptable yield.

To make this work as expected, the chip industry will have to solve a variety of well-documented technical and business issues, and it will have to rein in some of the grander visions of what’s possible — at least initially. The basic challenge is aligning domain-specific performance demands of end systems, which contain a growing number of chiplets, with the assembly and packaging capabilities and methodologies of IDMs, foundries, and OSATs. This includes the creation of assembly design kits (ADKs) that are roughly the equivalent of process design kits (PDKs), which today are codified with manufacturing specifications.

A PDK provides the appropriate level of detail needed to develop planar chips, marrying design tools with fab processes to achieve a predictable outcome. But making this work for an ADK with heterogeneous chiplets is many times more complex. Design and assembly teams need to manage thermal, mechanical, and electrical co-dependencies that cause electrical and mechanical stress, resulting in warpage, reduced yield, and reliability issues under real-world workloads. Layered on top of this are the business and legal issues related to packaging different devices from different manufacturers.

“Chiplets are a growing trend, especially in the HPC and networking segments, with potential to scale to other applications,” said Gabriela Pereira, technology and market analyst for semiconductor packaging at Yole Intelligence. “The industry has understood that high-end advanced packaging technologies are needed to connect them — but that’s much more complex than it seems. Connecting chiplets requires the design of high-bandwidth interconnections at the package level, which can take different forms — e.g., 2D, 2.5D or 3D — while ensuring that the thermal and power requirements are fulfilled.”

Commercial chiplet-based devices generally are domain-specific, and sometimes developed for a specific workload. So despite a big industry push to create a LEGO-like mix-and-match ecosystem for chiplets — which today includes multiple IP and EDA vendors, foundries, memory suppliers, OSATs, substrate suppliers, etc. — making this work as planned will require time and a massive amount of work.

Fig. 1: System assembly requires tighter coupling between chipmakers and OSATs. Source: ASE

In creating heterogeneous integrated designs, it’s essential to have much tighter collaboration between foundries, IDMs, OSATs, and PCB manufacturers. And because each chiplet-based system will be customized, the number of assembly processes will grow substantially. For example, one OSAT noted that among its ~5,000 customers, there are ~1,000 different assembly processes.

That diversity in products and processes makes it difficult to achieve predictable results by choosing chiplets from a large menu of options.

“We’ve already encountered a lot of limitations including not only the silicon, but also integration and the ecosystem,” said Lihong Cao, senior director at ASE Group, at MEPTEC’s Road to Chiplets forum. She stressed that customers continue to push for a low-cost chiplet assembly process, which is creating constructive tension between developing a sophisticated assembly process and the economic realities of different industry sectors. Computing devices for automotive have a higher cost sensitivity than for data centers, for example, but their chips operate in a harsher environment over a longer lifetime.

What’s needed is a defined set of assembly process recipes — basically, a highly limited menu of choices — that are specific to the end application (HPC, automotive, RF telecommunications) in order to lower the cost of chiplet-based systems. OSATs and foundries already are moving in that direction for high-performance computing. For example, at its 2024 Direct Connect event, Intel shared its six different package processes for chiplets. TSMC and Samsung also offer defined sets of chiplet processes. But the success of these assembly processes requires engineering teams to co-optimize the flows, processes, and materials to best match the system requirements.

Fig. 2: Integrated platform development requires tightly coupled architectural analysis that co-optimizes the system design to architecture to assembly process and packaging material selections. Source: Applied Materials

“Previously, when we designed a system we only had to be worried about the system requirements. Once we start segregating into dies and reassembling them, we have to start looking at other things. We have to worry about putting them together while considering signal integrity between dies, reliability, thermals, etc.,” said Itai Leshniak, director of AI systems solutions at Applied Materials, at the MEPTEC forum. “If we take the case of AI-based computer vision, we can break it down layer by layer — on the hardware side, determining which computer vision processors, sensors, and filters are needed to break it down into the architecture at each layer. Then we begin to go through how to package all these chiplets, and then which materials to use and how to take advantage of those materials.”

Materials and assembly processes
Conceptually, design engineers will use chiplets to design a system. However, the co-design and integration is far more complicated than assembling a set of LEGO blocks, because the chiplets, interposers, and package substrates come from different design houses and manufacturing facilities. The advanced packaging technologies used to connect chiplets vary with an alphabet soup of names — FOWLP, FOPLP, CoWoS, etc., each of which poses additional design and material choices along with certain process limitations.

Fig. 3: There are a multitude of choices in multi-die packaging from the high-level layout to substrates, materials, bonding methods, and cooling materials. Source: Synopsys

Currently engineering teams determine the tradeoffs among the different packaging options to select materials, derive a process recipe, and determine design rules.

Materials are a good starting point. “Materials are very important because they enable new products and packaging technologies,” said Tanja Braun, deputy group manager at the Fraunhofer Institute for Reliability and Microintegration IZM. “As you move into more advanced packaging, process is getting much more complex because you are putting more things together. In the end, it’s a combination of equipment, materials, and process development.”

There are three thermal parameters that are critical in package assembly processes — coefficients of thermal expansion (CTE), glass transition temperature (Tg), and thermal conductivity. These factors affect how a material behaves from manufacturing through packaging processes, as well as how it behaves in the field.

“Challenges for our materials include temperature limitations of different die,” said Rama Puligadda, CTO at Brewer Science. “We have to ensure that the temperatures used for bonding materials don’t exceed the thermal limitations of any of the chips that are being integrated into the package. Additionally, there may be some subsequent processes like redistribution layer (RDL) formation or molding. Our materials have to survive those processes. They have to survive the chemicals they come in contact with throughout the packaging process scheme. Mechanical stresses in the package add additional challenges for bonding materials.”

Within a stack of chiplets-on-substrate, with an optional interposer, the material attributes affect the thermal-mechanical stresses between neighboring materials as well. This directly impacts interconnect dimensional control over a large substrate area.

“If you work the numbers, you will find that the level of tolerance and control required is frightening,” said Dick Otte, CEO of Promex Industries. “You’re talking about controlling dimensions equivalent to the width of a grass blade over the length of a football field, so that’s roughly 1 in 100,000.”
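To make that figure concrete (assuming a grass blade roughly 1 mm wide and a football field roughly 100 m long): 1 mm / 100 m = 10⁻³ m / 10² m = 10⁻⁵, or one part in 100,000.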

The goal is uniform heating of the structure in reflow in order to attain the best process results and to avoid cracking. “When you’re taking it through a 250 degrees centigrade temperature change, then you need to heat up slowly so that the top doesn’t get hot before the bottom does,” said Otte.
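A quick way to see why CTE mismatch dominates that reflow concern is the textbook relation for thermal mismatch strain, strain = Δα × ΔT. The sketch below plugs in typical handbook values (silicon near 2.6 ppm/°C, an organic substrate near 15 ppm/°C; both are illustrative assumptions, not figures from the article) across the 250°C excursion Otte describes:

```c
/* Back-of-the-envelope CTE mismatch: strain = delta_alpha * delta_T.
 * Material values are typical handbook figures used for illustration. */
#include <stdio.h>

int main(void)
{
    double alpha_si      = 2.6e-6;   /* silicon CTE, 1/degC (typical)           */
    double alpha_organic = 15.0e-6;  /* organic substrate CTE, 1/degC (typical) */
    double delta_t       = 250.0;    /* reflow temperature excursion, degC      */
    double span_um       = 10000.0;  /* 10 mm of die edge, in microns           */

    double strain = (alpha_organic - alpha_si) * delta_t;
    printf("mismatch strain: %.4f (%.2f%%)\n", strain, strain * 100.0);
    printf("relative displacement over 10 mm: %.1f um\n", strain * span_um);
    return 0;
}
```

Roughly 31 µm of uncorrected relative movement across a 10 mm span is on the order of an entire micro-bump pitch, which is why slow, uniform heating ramps and CTE-matched material stacks are treated as hard requirements.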

Multi-physics to comprehend co-optimization
Multi-physics modeling has become the go-to method for co-optimizing packaging design and assembly process development. That affects both permanent and temporary materials, as well as the placement of processors, memories, and other components.

“You’re always looking at what the customer needs electrically, because that’s going to help define the material set. The material set is broadly applicable to a bunch of speed ranges. As long as you don’t step outside of those electrical specifications, theoretically you should be okay,” said Mike Kelly, vice president of advanced package and technology integration at Amkor Technology.

To save many iterations of empirically based development, engineers can use physics-based simulations to understand the impact of a material set’s properties on the assembly process, power/thermals, and mechanical vibrations.

Consider that HPC chiplet products can consume ~1,000 watts at peak performance, so the power and thermal interactions need to be fully understood.

“We’ve struggled, as everybody has, with this blizzard of complexity in the different techniques. Not only do they vary across different vendors, but they’re also varying over time,” said Marc Swinnen, director of product marketing at Ansys. “Our approach has been to identify the essentials that need to be worked on. We work jointly with customers to develop a simulation flow that actually achieves what is needed now.”

Materials are just one piece of the puzzle. “Then there’s the assembly stresses that need to be modeled to know whether you can correctly assemble this device. The third one is mechanical vibration,” Swinnen said. “Can your device withstand those regular vibrations? Modeling these attributes ties directly into our mechanical analysis tools — acoustic, thermal, vibration, etc. In the end, you’re going to have to do physics simulation. We’re trying to make it accessible to people in many different forms. But the bedrock of our tool offerings is that we have the meshing simulation and analysis. It’s a question of getting the data in the right format in a way that’s practical and usable.”

Evolving assembly design kits
For conventional packages, OSATs provide design rules for each packaging technology. These need to consider electrical, mechanical, and thermal design requirements, as well as manufacturing process limitations. In effect, this is a multi-dimensional bounding box. Suppliers perform iterations with the customer to create a product-specific process recipe.

Rules cover the macro-level attributes. “At a minimum, what you see from design rules is maximum package size, maximum silicon size, and whether silicon can be [mounted] on both sides of the substrate, such that when you follow these constructions the final product will have a lifetime of 1,000 thermal cycles, for example,” said Fraunhofer’s Braun.

In addition, design rules need to describe routing constraints for the interposer and/or redistribution layer, such as RDL line widths and spaces, ball-grid/pillar/pad size and pitches, and the maximum number of interconnections.

Breaking up a monolithic HPC device into multiple dies shifts some of the semiconductor design/process complexity into the packaging space. That makes things much more complicated. Consider that connecting 10 dies requires on the order of 100,000 traces within the interposer’s or substrate’s redistribution layer.

To cope with the complexity at the chip level, the IC industry has long relied upon process design kits (PDKs) to capture design rules in an electronic file that can be imported into EDA tools. Their counterparts, assembly design kits (ADKs), are relatively immature.

“We call it Smart Package,” said Amkor’s Kelly. “It’s an ADK that we give to every customer who’s doing their own design. It is a set of macros, and a customization of a database tailored to a customer’s particular design. For chiplets, it is a high-density fan-out package technology. And it’s cognizant of the limitations for metal density and metal spacing, etc. This makes it easier for us to do design rule checks (DRCs).”
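To make the PDK analogy concrete, here is a hypothetical sketch of what encoding package-level design rules in an electronic, machine-checkable file might look like. The fields, limits, and names are invented for illustration; they do not represent Amkor's Smart Package format or any real ADK.

```c
/* Hypothetical sketch of an ADK rule deck and a toy DRC pass over it,
 * by analogy with PDK rule decks. All fields and limits are illustrative. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    double max_package_mm;    /* maximum package edge length       */
    double min_rdl_width_um;  /* minimum RDL line width            */
    double min_rdl_space_um;  /* minimum RDL line spacing          */
    double min_bump_pitch_um; /* minimum ball/pillar/pad pitch     */
    long   max_interconnects; /* maximum die-to-die connections    */
} AdkRules;

typedef struct {
    double package_mm;
    double rdl_width_um;
    double rdl_space_um;
    double bump_pitch_um;
    long   interconnects;
} PackageDesign;

/* Report each violated rule; return overall pass/fail. */
static bool drc_check(const AdkRules *r, const PackageDesign *d)
{
    bool ok = true;
    if (d->package_mm > r->max_package_mm)       { puts("DRC: package too large");      ok = false; }
    if (d->rdl_width_um < r->min_rdl_width_um)   { puts("DRC: RDL line too narrow");    ok = false; }
    if (d->rdl_space_um < r->min_rdl_space_um)   { puts("DRC: RDL spacing too tight");  ok = false; }
    if (d->bump_pitch_um < r->min_bump_pitch_um) { puts("DRC: bump pitch too fine");    ok = false; }
    if (d->interconnects > r->max_interconnects) { puts("DRC: too many interconnects"); ok = false; }
    return ok;
}

int main(void)
{
    AdkRules rules = { 55.0, 2.0, 2.0, 40.0, 200000 };
    PackageDesign design = { 50.0, 2.0, 1.8, 45.0, 100000 };
    printf("design %s\n", drc_check(&rules, &design) ? "passes" : "fails");
    return 0;
}
```

The point is the one Larsen makes below: once rules live in a machine-checkable form rather than a data-sheet-style document, rule checking and qualification can be automated the way they already are in the IC world.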

But right now, with the level of customization still required, how an ADK is derived and what it entails is in flux. Partnerships between EDA tool vendors, OSATs, and semiconductor device providers are required.

“We come from the IC world where everything is very rigid,” said Kenneth Larsen, director of 3D-IC product management in Synopsys‘ EDA Group. “On the OSAT side, and maybe this is because it’s so custom, design rules seem like a data sheet. Then you build and optimize the products over time or in collaboration with the OSAT. It’s not an electronic exchange. In the IC world, this would be totally unheard of. While it is possible to tweak a few things, you have a qualification process. And it seems like that’s not there yet for packaging.”

Materials and associated assembly recipes ultimately drive what’s possible for a chiplet-substrate stack in terms of pillar pitch, RDL line widths and spaces, bonding processes, and chiplet placement tolerances. But within a handful of ADKs, there are many possible interactions to consider.

The current focus is on co-optimizing the system design with the chiplet assembly process, leading to an assembly process development flow (see figure 4). This flow considers the customization needs of an assembly process, and it creates the necessary design rules to be used by package designers.

Fig. 4: Chip-package hybrid flow. Source: ASE

“First you need to define your structure using chiplets. Are you using substrate RDL, 2.5D RDL, or a bridge? After that you need to consider your structure’s materials. What kind of material do you choose to fulfill your electrical performance and the mechanical stress requirements?” said Cao. “After that, you do pre-analysis to ensure all the structures and materials you use are workable in terms of electrical, warpage, and mechanical stress.”

The design planning flow also includes the evaluation of die-to-die interconnects through the documents for co-design sign-off.

Conclusion
Before chiplet-based designs can be enabled outside the IDM model, the industry needs to complete the ecosystem that bridges the manufacturing and design complexity, because the system architecture must be co-optimized against materials, process, and integration capabilities. While this would be easier with a set of well-defined products for the chiplet ecosystem to drive forward on, that has not happened yet.

Engineering teams across the design and manufacturing stack will need to collaborate to choose the appropriate materials, architectures, processes, etc., to develop a final chiplet-based product that is designable. As ASE group’s Cao noted, “An integrated design and manufacturing ecosystem is important. It is very critical to have collaboration among IDM, vendors, materials suppliers. Everyone needs to work together to really enable integration for the real applications.”

Related stories
Fan-Out Packaging Gets Competitive
Manufacturability reaches sufficient level to compete with flip-chip BGA and 2.5D.

Inside Panel-Level Fan-Out Technology
Fraunhofer’s panel experts dig into why this approach is needed and where the challenges are to making it work.

Next Steps For Panel-Level Packaging
Where it’s working, and what challenges remain for even broader adoption.

Mini-Consortia Forming Around Chiplets
Commercial chiplet marketplaces are still on the distant horizon, but companies are getting an early start with more limited partnerships.

What Can Go Wrong In Heterogeneous Integration
Workflows and tools are disconnected, mechanical stress is ill-defined, and complete co-planarity is nearly impossible. But there are solutions on the horizon.

Mechanical Challenges Rise With Heterogeneous Integration
But gaps in tools make it difficult to address warpage, structural issues, and new materials in multi-die/multi-chiplet designs.

Chiplets: 2023 (EBook)
What chiplets are, what they are being used for today, and what they will be used for in the future.

The post What Works Best For Chiplets appeared first on Semiconductor Engineering.

  • ✇Semiconductor Engineering
  • What Works Best For ChipletsAnne Meixner
    The semiconductor industry is preparing for the migration from proprietary chiplet-based systems to a more open chiplet ecosystem, in which chiplets fabricated by different companies of various technologies and device nodes can be integrated in a single package with acceptable yield. To make this work as expected, the chip industry will have to solve a variety of well-documented technical and business issues, and it will have to rein in some of the grander visions of what’s possible — at least i
     

What Works Best For Chiplets

18. Duben 2024 v 09:08

The semiconductor industry is preparing for the migration from proprietary chiplet-based systems to a more open chiplet ecosystem, in which chiplets fabricated by different companies of various technologies and device nodes can be integrated in a single package with acceptable yield.

To make this work as expected, the chip industry will have to solve a variety of well-documented technical and business issues, and it will have to rein in some of the grander visions of what’s possible — at least initially. The basic challenge is aligning domain-specific performance demands of end systems, which contain a growing number of chiplets, with the assembly and packaging capabilities and methodologies of IDMs, foundries, and OSATs. This includes the creation of assembly development kits (ADKs) that are roughly the equivalent of process development kits (PDKs), which today are codified with manufacturing specifications.

A PDK provides the appropriate level of detail needed to develop planar chips, marrying design tools with fab processes to achieve a predictable outcome. But making this work for an ADK with heterogeneous chiplets is many times more complex. Design and assembly teams need to manage thermal, mechanical, and electrical co-dependencies that cause electrical and mechanical stress, resulting in warpage, reduced yield, and reliability issues under real-world workloads. Layered on top of this the business and legal issues related to packaging of different devices from different manufacturers.

“Chiplets are a growing trend, especially in the HPC and networking segments, with potential to scale to other applications,” said Gabriela Pereira, technology and market analyst for semiconductor packaging at Yole Intelligence. “The industry has understood that high-end advanced packaging technologies are needed to connect them — but that’s much more complex than it seems. Connecting chiplets requires the design of high-bandwidth interconnections at the package level, which can take different forms — e.g., 2D, 2.5D or 3D — while ensuring that the thermal and power requirements are fulfilled.”

Commercial chiplet-based devices generally are domain-specific, and sometimes developed for a specific workload. So despite a big industry push to create a LEGO-like mix-and-match ecosystem for chiplets — which today includes multiple IP and EDA vendors, foundries, memory suppliers, OSATs, substrate suppliers, etc. — making this work as planned will require time and a massive amount of work.

Fig. 1: System assembly requires tighter coupling between chipmakers and OSATs. Source: ASE

Fig. 1: System assembly requires tighter coupling between chipmakers and OSATs. Source: ASE

In creating heterogeneous integrated designs, it’s essential to have much tighter collaboration between foundries, IDMs, OSATs, and PCB manufacturers. And because each chiplet-based system will be customized, the number of assembly processes will grow substantially. For example, one OSAT noted that among its ~5,000 customers, there are ~1,000 different assembly processes.

That diversity in products and processes makes it difficult to achieve predictable results by choosing chiplets from a large menu of options.

“We’ve already encountered a lot of limitations including not only the silicon, but also integration and the ecosystem,” said Lihong Cao, senior director at ASE Group, at MEPTEC’s Road to Chiplets forum. She stressed that customers continue to push for a low-cost chiplet assembly process, which is creating constructive tension between developing a sophisticated assembly process and the economic realities of different industry sectors. Computing devices for automotive have a higher cost sensitivity than for data centers, for example, but their chips operate in a harsher environment over a longer lifetime.

What’s needed is a defined set of assembly process recipes — basically, a highly limited menu of choices — that are specific to the end application (HPC, automotive, RF telecommunications) in order to lower the cost of chiplet-based systems. OSATs and foundries already are moving in that direction for high-performance computing. For example, at its 2024 Direct Connect event, Intel shared its six different package processes for chiplets. TSMC and Samsung also offer defined sets of chiplet processes. But the success of these assembly processes requires engineering teams to co-optimize the flows, processes, and materials to best match the system requirements.

Fig. 2: Integrated platform development requires tightly coupled architectural analysis that co-optimizes the system design to architecture to assembly process and packaging material selections. Source: Applied Materials

Fig. 2: Integrated platform development requires tightly coupled architectural analysis that co-optimizes the system design to architecture to assembly process and packaging material selections. Source: Applied Materials

“Previously, when we designed a system we only had to worry about the system requirements. Once we start segregating into dies and reassembling them, we have to start looking at other things. We have to worry about putting them together while considering signal integrity between dies, reliability, thermals, etc.,” said Itai Leshniak, director of AI systems solutions at Applied Materials, at the MEPTEC forum. “If we take the case of AI-based computer vision, we can break it down layer by layer — on the hardware side, determining which computer vision processors, sensors, and filters are needed to build the architecture at each layer. Then we begin to go through how to package all these chiplets, and then which materials to use and how to take advantage of those materials.”

Materials and assembly processes
Conceptually, design engineers will use chiplets to design a system. However, the co-design and integration is far more complicated than assembling a set of LEGO blocks, because the chiplets, interposers, and package substrates come from different design houses and manufacturing facilities. The advanced packaging technologies used to connect chiplets vary with an alphabet soup of names — FOWLP, FOPLP, CoWoS, etc., each of which poses additional design and material choices along with certain process limitations.

Fig. 3: There are a multitude of choices in multi-die packaging from the high-level layout to substrates, materials, bonding methods, and cooling materials. Source: Synopsys


Currently, engineering teams determine the tradeoffs among the different packaging options to select materials, derive a process recipe, and establish design rules.

Materials are a good starting point. “Materials are very important because they enable new products and packaging technologies,” said Tanja Braun, deputy group manager at the Fraunhofer Institute for Reliability and Microintegration IZM. “As you move into more advanced packaging, the process is getting much more complex because you are putting more things together. In the end, it’s a combination of equipment, materials, and process development.”

There are three thermal parameters that are critical in package assembly processes — coefficients of thermal expansion (CTE), glass transition temperature (Tg), and thermal conductivity. These factors affect how a material behaves throughout manufacturing and packaging processes, as well as how it behaves in the field.

“Challenges for our materials include temperature limitations of different die,” said Rama Puligadda, CTO at Brewer Science. “We have to ensure that the temperatures used for bonding materials don’t exceed the thermal limitations of any of the chips that are being integrated into the package. Additionally, there may be some subsequent processes like redistribution layer (RDL) formation or molding. Our materials have to survive those processes. They have to survive the chemicals they come in contact with throughout the packaging process scheme. Mechanical stresses in the package add additional challenges for bonding materials.”

Within a chiplets-on-substrate stack, with or without an interposer, material attributes also affect the thermal-mechanical stresses between neighboring materials. This directly impacts interconnect dimensional control over a large substrate area.
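To make the scale of those stresses concrete, consider a first-order estimate of the thermal strain at a silicon-to-laminate interface. This is a minimal sketch using generic textbook CTE values, not vendor data:

```python
# Illustrative CTE-mismatch strain between neighboring materials in a
# chiplet stack. Values are generic textbook figures, not vendor data.
CTE_PPM_PER_K = {
    "silicon": 2.6,             # die or silicon interposer
    "copper": 17.0,             # RDL traces and pillars
    "organic_substrate": 15.0,  # typical build-up laminate
}

def mismatch_strain(mat_a: str, mat_b: str, delta_t_k: float) -> float:
    """Interface thermal strain: eps = |alpha_a - alpha_b| * dT."""
    d_alpha = abs(CTE_PPM_PER_K[mat_a] - CTE_PPM_PER_K[mat_b]) * 1e-6
    return d_alpha * delta_t_k

# Cooling from a ~250C process peak back to 25C ambient:
eps = mismatch_strain("silicon", "organic_substrate", 225.0)
print(f"strain ~ {eps:.1e}")  # ~2.8e-3 -- enough to drive warpage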

“If you work the numbers, you will find that the level of tolerance and control required is frightening,” said Dick Otte, CEO of Promex Industries. “You’re talking about controlling dimensions equivalent to the width of a grass blade over the length of a football field, so that’s roughly 1 in 100,000.”
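The arithmetic behind the analogy is straightforward. A quick check, assuming a ~1mm grass blade and a ~100m field:

```python
# Back-of-envelope check of the analogy. Assumed dimensions: a grass blade
# ~1 mm wide, a football field ~100 m long.
blade_width_m = 1e-3
field_length_m = 100.0
tolerance = blade_width_m / field_length_m
print(f"1 in {1 / tolerance:,.0f}")   # 1 in 100,000

# The same ratio on a 100 mm substrate implies ~1 um dimensional control:
substrate_mm = 100.0
print(f"{substrate_mm * tolerance * 1000:.0f} um over {substrate_mm:.0f} mm")
```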

The goal is uniform heating of the structure during reflow in order to attain the best process results and avoid cracking. “When you’re taking it through a 250-degree centigrade temperature change, you need to heat up slowly so that the top doesn’t get hot before the bottom does,” said Otte.

Multi-physics modeling for co-optimization
Multi-physics modeling has become the go-to method for co-optimizing packaging design and assembly process development. That affects both permanent and temporary materials, as well as the placement of processors, memories, and other components.

“You’re always looking at what the customer needs electrically, because that’s going to help define the material set. The material set is broadly applicable to a bunch of speed ranges. As long as you don’t step outside of those electrical specifications, theoretically you should be okay,” said Mike Kelly, vice president of advanced package and technology integration at Amkor Technology.

To save many iterations of empirically based development, engineers can use physics-based simulations to understand the impact of a material set’s properties on the assembly process, power/thermals, and mechanical vibrations.

Consider that HPC chiplet products can consume ~1,000 watts at peak performance, so the power and thermal interactions need to be fully understood.
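A series thermal-resistance estimate shows why. This is a minimal sketch with assumed, illustrative theta values rather than data for any real package:

```python
# First-order junction-temperature estimate for a ~1,000 W chiplet package
# using a series thermal-resistance model. Theta values are illustrative
# assumptions, not data for any real product.
p_w = 1000.0        # package power at peak
t_ambient_c = 30.0  # coolant or air inlet temperature
theta_jc = 0.02     # junction-to-case, K/W (die stack + TIM)
theta_ca = 0.04     # case-to-ambient, K/W (cold plate or heat sink)

t_junction = t_ambient_c + p_w * (theta_jc + theta_ca)
print(f"Tj ~ {t_junction:.0f} C")  # ~90 C; every 0.01 K/W adds 10 C at 1 kW
```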

“We’ve struggled, as everybody has, with this blizzard of complexity in the different techniques. Not only do they vary across different vendors, but they’re also varying over time,” said Marc Swinnen, director of product marketing at Ansys. “Our approach has been to identify the essentials that need to be worked on. We work jointly with customers to develop a simulation flow that actually achieves what is needed now.”

Materials are just one piece of the puzzle. “Then there’s the assembly stresses that need to be modeled to know whether you can correctly assemble this device. The third one is mechanical vibration,” Swinnen said. “Can your device withstand those regular vibrations? Modeling these attributes ties directly into our mechanical analysis tools — acoustic, thermal, vibration, etc. In the end, you’re going to have to do physics simulation. We’re trying to make it accessible to people in many different forms. But the bedrock of our tool offerings is that we have the meshing simulation and analysis. It’s a question of getting the data in the right format in a way that’s practical and usable.”

Evolving assembly design kits
For conventional packages, OSATs provide design rules for each packaging technology. These need to consider electrical, mechanical, and thermal design requirements, along with manufacturing process limitations. In effect, this is a multi-dimensional bounding box. Suppliers perform iterations with the customer to create a product-specific process recipe.

Rules cover the macro-level attributes. “At a minimum, what you see from design rules is maximum package size, maximum silicon size, and whether silicon can be [mounted] on both sides of the substrate, such that when you follow these constructions the final product will have a lifetime of 1,000 thermal cycles, for example,” said Fraunhofer’s Braun.
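Qualification targets like that 1,000-cycle figure typically trace back to fatigue models of the Coffin-Manson type. A hedged sketch of the acceleration-factor arithmetic, with an assumed exponent and generic cycle temperatures:

```python
# Hedged sketch of a Coffin-Manson style acceleration factor, the kind of
# model behind "1,000 thermal cycles" qualification targets. The exponent
# and cycle temperatures are assumed, generic values -- not a qual spec.
n = 2.0               # fatigue exponent for solder joints (often ~1.9-2.5)
dt_test_k = 165.0     # -40C to +125C qualification cycle
dt_field_k = 60.0     # 25C to 85C field cycle

accel = (dt_test_k / dt_field_k) ** n
print(f"AF ~ {accel:.1f}: 1,000 test cycles ~ {1000 * accel:,.0f} field cycles")
```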

In addition, design rules need to describe routing constraints for the interposer and/or redistribution layer, such as RDL line widths and spaces, ball-grid/pillar/pad size and pitches, and the maximum number of interconnections.

Breaking up a monolithic HPC device into multiple dies shifts some of the semiconductor design/process complexity into the packaging space, which makes things much more complicated. Consider that connecting 10 dies can require on the order of 100,000 traces within the interposer’s or substrate’s redistribution layer.
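That estimate follows from simple combinatorics. The sketch below assumes a fully connected topology and a ~2,000-wire parallel interface per die-to-die link, both assumptions for illustration:

```python
# Rough trace-count estimate for a 10-die partition. Assumptions for
# illustration: a fully connected topology, and a ~2,000-wire parallel
# interface per die-to-die link.
from itertools import combinations

dies = 10
traces_per_link = 2000
links = len(list(combinations(range(dies), 2)))   # full mesh: 45 links
print(f"{links * traces_per_link:,} traces")      # 90,000 -- order 100,000
```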

To cope with the complexity at the chip level, the IC industry has long relied upon process design kits (PDKs) to capture design rules in an electronic file that can be imported into EDA tools. Their counterparts, assembly design kits (ADKs), are relatively immature.

“We call it Smart Package,” said Amkor’s Kelly. “It’s an ADK that we give to every customer who’s doing their own design. It is a set of macros, and a customization of a database tailored to a customer’s particular design. For chiplets, it is a high-density fan-out package technology. And it’s cognizant of the limitations for metal density and metal spacing, etc. This makes it easier for us to do design rule checks (DRCs).”
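Conceptually, an ADK reduces such limits to machine-checkable rules. The sketch below shows what a toy DRC against such a kit might look like; the rule names and values are hypothetical placeholders, not Amkor’s actual Smart Package format:

```python
# Minimal sketch of an ADK-style design rule check. Rule names and values
# are hypothetical placeholders, not Amkor's actual Smart Package format.
ADK_RULES = {
    "rdl_min_line_um": 2.0,
    "rdl_min_space_um": 2.0,
    "min_pillar_pitch_um": 40.0,
    "max_package_mm": 55.0,
}

def drc(design: dict) -> list[str]:
    """Return the list of violations against the ADK bounding box."""
    violations = []
    if design["rdl_line_um"] < ADK_RULES["rdl_min_line_um"]:
        violations.append("RDL line width below ADK minimum")
    if design["rdl_space_um"] < ADK_RULES["rdl_min_space_um"]:
        violations.append("RDL spacing below ADK minimum")
    if design["pillar_pitch_um"] < ADK_RULES["min_pillar_pitch_um"]:
        violations.append("pillar pitch below ADK minimum")
    if design["package_mm"] > ADK_RULES["max_package_mm"]:
        violations.append("package exceeds ADK maximum body size")
    return violations

print(drc({"rdl_line_um": 1.5, "rdl_space_um": 2.0,
           "pillar_pitch_um": 36.0, "package_mm": 50.0}))
```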

But right now, with the level of customization still required, how an ADK is derived and what it entails is in flux. Partnerships between EDA tool vendors, OSATs, and semiconductor device providers are required.

“We come from the IC world where everything is very rigid,” said Kenneth Larsen, director of 3D-IC product management in Synopsys’ EDA Group. “On the OSAT side, and maybe this is because it’s so custom, design rules seem like a data sheet. Then you build and optimize the products over time or in collaboration with the OSAT. It’s not an electronic exchange. In the IC world, this would be totally unheard of. While it is possible to tweak a few things, you have a qualification process. And it seems like that’s not there yet for packaging.”

Materials and associated assembly recipes ultimately drive what’s possible for a chiplet-substrate stack in terms of pillar pitch, RDL line widths and spaces, bonding processes, and chiplet placement tolerances. But within a handful of ADKs, there are many possible interactions to consider.

The current focus is on co-optimizing the system design with the chiplet assembly process, leading to an assembly process development flow (see figure 4). This flow accounts for the customization each assembly process requires, and it creates the design rules that package designers need.

Fig. 4: Chip-package hybrid flow. Source: ASE


“First you need to define your structure using chiplets. Are you using substrate RDL, 2.5D RDL, or a bridge? After that you need to consider your structure’s materials. What kind of material do you choose to fulfill your electrical performance and mechanical stress requirements?” said Cao. “After that, you do pre-analysis to ensure all the structures and materials you use are workable in terms of electrical behavior, warpage, and mechanical stress.”

The design planning flow also includes evaluating die-to-die interconnects as part of the documents for co-design sign-off.

Conclusion
Before chiplet-based designs can be enabled outside the IDM model, the industry needs to complete the ecosystem that bridges manufacturing and design complexity, because co-optimizing the system architecture around materials, process, and integration capabilities is essential. This would be easier with a set of well-defined products for the chiplet ecosystem to rally around, but that has not happened yet.

Engineering teams across the design and manufacturing stack will need to collaborate to choose the appropriate materials, architectures, processes, etc., to develop a final chiplet-based product that is designable. As ASE’s Cao noted, “An integrated design and manufacturing ecosystem is important. It is very critical to have collaboration among IDMs, vendors, and materials suppliers. Everyone needs to work together to really enable integration for the real applications.”

Related stories
Fan-Out Packaging Gets Competitive
Manufacturability reaches sufficient level to compete with flip-chip BGA and 2.5D.

Inside Panel-Level Fan-Out Technology
Fraunhofer’s panel experts dig into why this approach is needed and where the challenges are to making it work.

Next Steps For Panel-Level Packaging
Where it’s working, and what challenges remain for even broader adoption.

Mini-Consortia Forming Around Chiplets
Commercial chiplet marketplaces are still on the distant horizon, but companies are getting an early start with more limited partnerships.

What Can Go Wrong In Heterogeneous Integration
Workflows and tools are disconnected, mechanical stress is ill-defined, and complete co-planarity is nearly impossible. But there are solutions on the horizon.

Mechanical Challenges Rise With Heterogeneous Integration
But gaps in tools make it difficult to address warpage, structural issues, and new materials in multi-die/multi-chiplet designs.

The post What Works Best For Chiplets appeared first on Semiconductor Engineering.


2.5D Integration: Big Chip Or Small PCB?

29 February 2024, 09:08

Defining whether a 2.5D device is a printed circuit board shrunk down to fit into a package, or is a chip that extends beyond the limits of a single die, may seem like hair-splitting semantics, but it can have significant consequences for the overall success of a design.

Planar chips always have been limited by the size of the reticle, which is about 858mm². Beyond that, yield issues make the silicon uneconomical. For years, that has limited the number of features that could be crammed onto a planar substrate. Any additional features would need to be designed into additional chips and connected with a printed circuit board (PCB).

The advent of 2.5D packaging technology has opened up a whole new axis for expansion, allowing multiple chiplets to be interconnected inside an advanced package. But the starting point for this packaged design can have a big impact on how the various components are assembled, who is involved, and which tools are deployed and when.

There are several reasons why 2.5D is gaining ground today. One is cost. “If you can build smaller chips, or chiplets, and those chiplets have been designed and optimized to be integrated into a package, it can make the whole thing smaller,” says Tony Mastroianni, advanced packaging solutions director at Siemens Digital Industries Software. “And because the yield is much higher, that has a dramatic impact on cost. Rather than having 50% or below yield for die-sized chips, you can get that up into the 90% range.”

Interconnecting chips using a PCB also limits performance. “Historically, we had chips packaged separately, put on the PCB, and connected with some routing,” says Ramin Farjadrad, CEO and co-founder of Eliyan. “The problems people started to face were twofold. One was that the bandwidth between these chips was limited by going through the PCB, and then a limited number of balls on the package limited the connectivity between these chips.”

The key difference with 2.5D compared to a PCB is that 2.5D uses chip dimensions. There are much finer-grain wires, and various components can be packed much closer together on an interposer or in a package than on a board. For those reasons, wires can be shorter, there can be more of them, and bandwidth is increased.

That impacts performance at multiple levels. “Since they are so close, you don’t have the long transport RC or LC delays, so it’s much faster,” says Siemens’ Mastroianni. “You don’t need big drivers on a chip to drive long traces over the board, so you have lower power. You get orders of magnitude better performance — and lower power. A common metric is to talk about pico joules per bit. The amount of energy it takes to move bits makes 2.5D compelling.”

Still, the mindset affects the initial design concept, and that has repercussions throughout the flow. “If you talk to a die designer, they’re probably going to say that it is just a big chip,” says John Park, product management group director in the Custom IC & PCB Group at Cadence. “But if you talk to a package designer, or a board designer, they’re going to say it’s basically a tiny PCB.”

Who is right? “The internal organizational structure within the company often decides how this is approached,” says Marc Swinnen, director of product marketing at Ansys. “Longer term, you want to make sure that your company is structured to match the physics and not try to match the physics to your company.”

What is clear is that nothing is certain. “The digital world was very regular in that every two years we got a new node that was half size,” says Cadence’s Park. “There would be some new requirements, but it was very evolutionary. Packaging is the Wild West. We might get 8 new packaging technologies this year, 3 next year, 12 the next year. Many of these are coming from the foundries, whereas it used to be just from the outsourced semiconductor assembly and test companies (OSATs) and the substrate providers. While the foundries are a new entrant, the OSATs are offering some really interesting packaging technologies at a lower cost.”

Part of the reason for this is that different groups of people have different requirement sets. “The government and the military see the primary benefits as heterogeneous integration capabilities,” says Ansys’ Swinnen. “They are not pushing the edge of processing technology. Instead, they are designing things like monolithic microwave integrated circuits (MMICs), where they need waveguides for very high-speed signals. They approach it from a packaging assembly point of view. Conversely, the high-performance compute (HPC) companies approach it from a pile of 5nm and 3nm chips with high performance high-bandwidth memory (HBM). They see it as a silicon assembly problem. The benefit they see is the flexibility of the architecture, where they can throw in cores and interfaces and create products for specific markets without having to redesign each chiplet. They see flexibility as the benefit. Military sees heterogeneous integration as the benefit.”

Materials
There are several materials used as the substrate in 2.5D packaging technology, each of which has different tradeoffs in terms of cost, density, and bandwidth, along with its own set of physical issues that must be overcome. One of the primary points of differentiation is the bump pitch, as shown in figure 1.

Fig 1. Chiplet interconnection for various substrate configurations. Source: Eliyan


When talking about an interposer, it generally is considered to be silicon. “The interposer could be a large piece of silicon (Fig 1 top), or just silicon bridges between the chips (Fig 1 middle) to provide the connectivity,” says Eliyan’s Farjadrad. “Both of these solutions use micro-bumps, which have high density. Interposers and bridges provide a lot of high-density bumps and traces, and that gives you bandwidth. If you utilize 1,000 wires each running at 5Gbps, you get 5Tbps. If you have 10,000, you get 50Tbps. But those signals cannot go more than two or three millimeters. Alternatively, if you avoid the silicon interposer and you stay with an organic package (Fig 1 bottom), such as a flip-chip package, the density of the traces is 5X to 10X less. However, the thickness of the wires can be 5X to 10X more. That’s a significant advantage, because the cross-section of a wire goes up with the square of its thickness, so the resistance comes down significantly. If it’s 5X less density, that means you can run signals almost 25X further.”
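The arithmetic in that quote is worth working through. A quick sketch with the same numbers, treating the resistance scaling as an idealized square law:

```python
# Bandwidth and reach arithmetic from the quote above, idealized.
wires = 1000
gbps_per_wire = 5
print(f"{wires * gbps_per_wire / 1000:.0f} Tbps aggregate")   # 5 Tbps

# Organic-package wires ~5x thicker (in both dimensions) than interposer
# traces: cross-section grows ~thickness^2, so resistance per mm drops ~25x,
# and the reachable distance for a given loss budget grows by roughly the
# same factor.
thickness_ratio = 5
resistance_ratio = 1 / thickness_ratio**2
print(f"R per mm: {resistance_ratio:.2f}x  ->  reach ~{thickness_ratio**2}x")
```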

For some people, it is all about bandwidth per millimeter. “If you have a parallel bus, or a parallel interface that is high speed, and you want bandwidth per millimeter, then you would probably pick a silicon interposer,” says Kent Stahn, senior manager of hardware engineering in Synopsys’ Solutions Group. “An organic substrate is low-loss, low-cost, but it doesn’t have the density. In between, there are a bunch of solutions that deliver on some of that, but not for the same cost.”

There are other reasons to pick a substrate material, as well. “Silicon interposer comes from a foundry, so availability is a problem,” says Manuel Mota, senior staff product manager in Synopsys’ Solutions Group. “Some companies are facing challenges in sourcing advanced packages because capacity is taken. By going to other technologies that have a little less bandwidth density, but perhaps enough for your application, you can find them elsewhere. That’s becoming a critical aspect.”

All of these technologies are progressing rapidly, however. “The reticle limit is about 858mm²,” says Park. “People are talking about interposers that are perhaps four times that size, but we have laminates that go much bigger. Some of the laminate substrates coming from Japan are approaching the same level of interconnect density that we can get from silicon. I personally see more push toward organic substrates. Chip-on-Wafer-on-Substrate (CoWoS) from TSMC uses a silicon interposer and has been the technology of choice for about 12 years. More recently they introduced CoWoS-R, which uses a polyimide film, closer to an organic type of substrate. Now we hear a lot about glass substrates.”

Over time, the total real estate inside the package may grow. “It doesn’t make sense for foundries to continue to build things the size of a 30-inch printed circuit board,” adds Park. “There are materials that are capable of addressing the bigger designs. Where we really need density is die-to-die. We want those chiplets right next to each other, a couple of millimeters of interconnect length. We want things very short. But the rest of it is just fanning out the I/O so that it connects to the PCB.”

This is why bridges are popular. “We do see a progression to bridges for the high-speed part of the interface,” says Synopsys’ Stahn. “The back side of it would be fanout, like RDL fanout. We see RDL packages that are going to be more like traditional packages going forward.”

Interposers offer additional capabilities. “Today, 99% of the interposers are passive,” says Park. “There’s no front end of line, there are no device layers. It’s purely back end of line processing. You are adding three, four, five metal layers to that silicon. That’s what we call a passive interposer. It’s just creating that die-to-die interconnect. But there are people taking that die and making it an active interposer, basically adding logic to that.”

That can happen for different purposes. “You already see some companies doing active interposers, where they add power management or some of the control logic,” says Mota. “When you start putting active circuits on an interposer, is it still a 2.5D integration, or does it become a 3D integration? We don’t see a big trend toward active interposers today.”

There are some new issues, though. “You have to consider coefficients of thermal expansion (CTE) mismatches,” says Stahn. “This happens whenever two materials with different CTEs are bonded together. Let’s start with the silicon interposer. You can get higher wattage systems, where the SoCs can be talking to their peers, and that can consume a lot of power. A silicon interposer still has to go in a package. The CTE mismatches are between the silicon to the package material. And with the bridge, you’re using it where you need it, but it’s still silicon die-to-die. You have to do the thermal mechanical analysis to make sure that the power that you’re delivering, and the CTE mismatches that you have, result in a viable system.”

While signal lengths in theory can get longer, this poses some problems. “When you’re making those long connections inside a chip, you typically limit those routes to a couple of millimeters, and then you buffer it,” says Mastroianni. “The problem with a passive silicon interposer is there are no buffers. That can really become a serious issue. If you do need to make those connections, you need to plan those out very carefully. And you do need to make sure you’re running timing analysis. Typically, your package guys are not going to be doing that analysis. That’s more of a problem that’s been solved with static timing analysis by silicon engineers. We do need to introduce an STA flow and deal with all the extractions that include organic and silicon type traces, and it becomes a new problem. When you start getting into some of those very long traces, your simple RC timing delays, which are assumed in normal STA delay calculators, don’t account for some of the inductance and mutual inductance between those traces, so you can get serious accuracy issues for those long traces.”
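The quadratic growth of RC delay is what makes long unbuffered interposer routes so punishing. Below is a minimal sketch using the Elmore approximation for a distributed RC line, with assumed per-millimeter values for a thin RDL trace (and note it ignores the inductance effects Mastroianni warns about):

```python
# Why unbuffered interposer traces hurt: distributed RC delay grows with
# the square of length. The r and c per mm are assumed, illustrative values.
r_ohm_per_mm = 200.0   # assumed resistance of a thin interposer RDL trace
c_pf_per_mm = 0.2      # assumed capacitance per mm

def elmore_ps(length_mm: float) -> float:
    """Distributed RC line delay ~ 0.38 * R_total * C_total (Elmore)."""
    r_total = r_ohm_per_mm * length_mm
    c_total = c_pf_per_mm * length_mm * 1e-12
    return 0.38 * r_total * c_total * 1e12

for l_mm in (1, 2, 5, 10):
    print(f"{l_mm:>2} mm: {elmore_ps(l_mm):7.1f} ps")
# ~15 ps at 1 mm but ~1,520 ps at 10 mm: quadratic growth, with no way to
# insert buffers in a passive interposer to break the route into segments.
```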

Active interposers help. “With active interposers, you can overcome some of the long-distance problems by putting in buffers or signal repeaters,” says Swinnen. “Then it starts looking more like a chip again, and you can only do it on silicon. You have the EMIB technology from Intel, where they embedded a chiplet into the interposer, and that’s an active bridge. The chips talk to the EMIB chip, and they talk to each other through this little active bridge chip, which is not exactly an active interposer, but acts almost like an active interposer.”

But even passive components add value. “The first thing that’s being done is including trench capacitors in the interposer,” says Mastroianni. “That gives you the ability to do some good decoupling, where it counts, close to the die. If you put them out on the board, you lose a lot of the benefits for the high-speed interfaces. If you can get them in the interposer, sitting right under where you have the fast-switching speed signals, you can get some localized decoupling.”
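Sizing that localized decoupling follows from the basic charge relation C = I·Δt/ΔV. A rough sketch with hypothetical transient numbers:

```python
# Sizing localized decoupling from C >= I * dt / dV, with hypothetical
# transient numbers for a fast load step the package inductance can't serve.
di_a = 50.0    # current step, A
dt_s = 1e-9    # step duration, 1 ns
dv_v = 0.05    # allowed droop on a ~1 V rail (5%)

c_req_f = di_a * dt_s / dv_v
print(f"{c_req_f * 1e6:.1f} uF of capacitance, placed close to the die")
```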

In addition to different materials, there is the question of who designs the interposer. “The industry seems to think of it as a little PCB in the context of who’s doing the design,” says Matt Commens, senior manager for product management at Ansys. “The interposers are typically being designed by packaging engineers, even though they are silicon processes. This is especially true for the high-performance ones. It seems counterintuitive, but they have that signal integrity background, they’ve been designing transmission lines and minimizing mismatch at interconnects. A traditional IC designer works from a component point of view. So certainly, the industry is telling us that the people they’re assigning to do that design work are packaging type of personas.”

Power
There are some considerable differences in routing between PCBs and interposers. “Interposer routing is much easier, as the number of components is drastically reduced compared to the PCB,” says Andy Heinig, head of department for efficient electronics at Fraunhofer IIS/EAS. “On the other hand, the power grid on the interposer is much more complex due to the higher resistance of the metal layers and the fact that the power grid is cut out by signal wires. The routing for the die-to-die interface is more complex due to the routing density.”
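Heinig’s point about grid resistance shows up in even a first-order IR-drop estimate. All values below are illustrative assumptions; production grids are extracted and simulated:

```python
# First-order IR-drop estimate for an interposer power grid. All values are
# illustrative assumptions; production grids are extracted and simulated.
r_sheet_ohm_sq = 0.03     # per-layer sheet resistance of thin BEOL metal
straps = 200              # parallel power straps feeding a die region
squares_per_strap = 50    # length/width ratio of each strap
i_region_a = 20.0         # current drawn by that region

r_grid_ohm = r_sheet_ohm_sq * squares_per_strap / straps   # ~7.5 mOhm
v_drop_mv = i_region_a * r_grid_ohm * 1e3
print(f"~{v_drop_mv:.0f} mV drop")  # ~150 mV -- a big bite out of a sub-1V rail
```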

Power delivery looks very different. “If you look at a PCB, they put these big metal pour areas embedded in the layers, and they void out areas where things need to go through,” says Park. “You put down a bunch of copper and then you void out the others. We can’t build an interposer that way. We have to deposit the interconnect, so the power and ground structures on a silicon interposer will look more like a digital chip. But the signal will look more like a PCB or laminate package.”

Routing does look more like a PCB than a chip. “You’ll see things like teardrops or fillets where it makes a connection to a pad or via to create better yield,” adds Park. “The routing styles today are more aligned to PCBs than they are to a digital IC, where you just have 90° orthogonal corners and clean routing channels. For interposers, whether silicon or organic, the via is often bigger than the wire, which is a classic PCB problem. The routers, if we’re talking about digital, are again more like a small PCB’s than a die’s.”

TSVs can create problems, too. “If you’re going to treat them as square, you’re losing a lot of space at the corners,” says Swinnen. “You really want 45° around those objects. Silicon routers are traditionally Manhattan, although there has been a long tradition of RDL routing, which is the top layer where the bumps are connected. That has traditionally used octagonal bumps or round bumps, and then 45° routing. It’s not as flexible as the PCB routing, but they have redistribution layer routers, and also they have some routers that come from the full custom side which have full river routing.”

Related Reading
True 3D Is Much Tougher Than 2.5D
While terms often are used interchangeably, they are very different technologies with different challenges.
Thermal Integrity Challenges Grow In 2.5D
Work is underway to map heat flows in interposer-based designs, but there’s much more to be done.

The post 2.5D Integration: Big Chip Or Small PCB? appeared first on Semiconductor Engineering.
