Next-Gen High-Speed Communication In Data Centers

By Ed Sperling | 21 August 2024, 09:15

Data centers are being flooded with data. While more of it needs to be processed locally, much of it also needs to be moved around within a system and between systems. This has put a spotlight on a variety of new optical technologies and methodologies. Yang Zhang, senior product marketing manager at Cadence, talks about the rapid increase in different types of optics and optical scenarios being developed to improve performance, reduce latency, and reduce overall power.

The post Next-Gen High-Speed Communication In Data Centers appeared first on Semiconductor Engineering.

3.5D: The Great Compromise

By Ed Sperling | 21 August 2024, 09:01

The semiconductor industry is converging on 3.5D as the next best option in advanced packaging, a hybrid approach that includes stacking logic chiplets and bonding them separately to a substrate shared by other components.

This assembly model satisfies the need for big increases in performance while sidestepping some of the thorniest issues in heterogeneous integration. It establishes a middle ground between 2.5D, which already is in widespread use inside of data centers, and full 3D-ICs, which the chip industry has been struggling to commercialize for the better part of a decade.

A 3.5D architecture offers several key advantages:

  • It creates enough physical separation to effectively address thermal dissipation and noise.
  • It provides a way to add more SRAM into high-speed designs. SRAM has been the go-to choice for processor cache since the mid-1960s, and it remains an essential element for faster processing. But SRAM no longer scales at the same rate as digital logic, so it consumes a growing percentage of die area at each new node. And because the size of a reticle is fixed, the best available option is to add area by stacking chiplets vertically (see the back-of-the-envelope sketch after this list).
  • By thinning the interface between processing elements and memory, a 3.5D approach also can shorten the distances that signals need to travel and greatly improve processing speeds well beyond a planar implementation. This is essential with large language models and AI/ML, where the amount of data that needs to be processed quickly is exploding.
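
For a rough sense of why SRAM's share of the die keeps climbing, here is a minimal sketch. The scaling factors and starting areas are assumed for illustration only; the article does not give specific numbers.

    # Illustrative only: how the SRAM share of a die grows when SRAM area
    # shrinks more slowly than logic area at each new node.
    logic_scale = 0.60                   # assumed logic area shrink per node
    sram_scale = 0.85                    # assumed SRAM area shrink per node

    logic_mm2, sram_mm2 = 500.0, 300.0   # hypothetical starting die split (mm^2)

    for node in range(4):
        total = logic_mm2 + sram_mm2
        print(f"node {node}: die {total:6.1f} mm^2, "
              f"SRAM share {100 * sram_mm2 / total:4.1f}%")
        logic_mm2 *= logic_scale
        sram_mm2 *= sram_scale

    # The SRAM share climbs every generation, so keeping (or growing) cache
    # capacity eats into the fixed reticle budget, and stacking chiplets
    # becomes the remaining way to add area.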

Chipmakers still point to fully integrated 3D-ICs as the best performing alternative to a planar SoC, but packing everything into a 3D configuration makes it harder to deal with physical effects. Thermal dissipation is probably the most difficult to contend with. Workloads can vary significantly, creating dynamic thermal gradients and trapping heat in unexpected places, which in turn reduce the lifespan and reliability of chips. On top of that, power and substrate noise become more problematic at each new node, as do concerns about electromagnetic interference.

“What the market has adopted first is high-performance chips, and those produce a lot of heat,” said Marc Swinnen, director of product marketing at Ansys. “They have gone for expensive cooling systems with a huge number of fans and heat sinks, and they have opted for silicon interposers, which arguably are some of the most expensive technologies for connecting chips together. But it also gives the highest performance and is very good for thermal because it matches the coefficient of thermal expansion. Thermal is one of the big reasons that’s been successful. In addition to that, you may want bigger systems with more stuff that you can’t fit on one chip. That’s just a reticle-size limitation. Another is heterogeneous integration, where you want multiple different processes, like an RF process or the I/O, which don’t need to be in 5nm.”

A 3.5D assembly also provides more flexibility to add processor cores, and higher yield, because known good die can be manufactured and tested separately — a concept pioneered by Xilinx in 2011 at 28nm.

3.5D is a loose amalgamation of all these approaches. It can include two to three chiplets stacked on top of each other, or even multiple stacks laid out horizontally.

“It’s limited vertical, and not just for thermal reasons,” said Bill Chen, fellow and senior technical advisor at ASE Group. “It’s also for performance reasons. But thermal is the limiting factor, and we’ve talked about many different materials to help with that — diamond and graphene — but that limitation is still there.”

This is why the most likely combination, at least initially, will be processors stacked on SRAM, which simplifies the cooling. The heat generated by high utilization of different processing elements can be removed with heat sinks or liquid cooling. And with one or more thinned out substrates, signals will travel shorter distances, which in turn uses less power to move data back and forth between processors and memory.

“Most likely, this is going to be logic over memory on a logic process,” said Javier DeLaCruz, fellow and senior director of Silicon Ops Engineering at Arm. “These are all contained within an SoC normally, but a portion of that is going to be SRAM, which does not scale very well from node to node. So having logic over memory and a logic process is really the winning solution, and that’s one of the better use cases for 3D because that’s what really shortens your connectivity. A processor generally doesn’t talk to another processor. They talk to each other through memory, so having the memory on a different floor with no latency between them is pretty attractive.”
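
A back-of-the-envelope sketch of why that vertical connection matters for power: the energy to move a bit scales roughly with wire length. The energy-per-millimeter figure and the distances below are assumed, order-of-magnitude values, not vendor data.

    # Rough estimate: energy to move data over on-die wiring vs. a stacked,
    # thinned vertical connection. All numbers are illustrative assumptions.
    PJ_PER_BIT_MM = 0.15                   # assumed wire energy, pJ per bit per mm

    def transfer_energy_uj(bytes_moved, distance_mm):
        """Microjoules to move bytes_moved across distance_mm of wire."""
        return bytes_moved * 8 * PJ_PER_BIT_MM * distance_mm * 1e-6

    payload = 1 << 20                      # a 1 MB cache refill
    print("planar, ~5 mm to far SRAM :", transfer_energy_uj(payload, 5.0), "uJ")
    print("stacked, ~0.05 mm vertical:", transfer_energy_uj(payload, 0.05), "uJ")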

The SRAM doesn’t necessarily have to be manufactured at the same advanced node as the processors, which also helps with yield and reliability. At a recent Samsung Foundry event, Taejoong Song, the company’s vice president of foundry business development, showed a roadmap of a 3.5D configuration using a 2nm chiplet stacked on a 4nm chiplet next year, and a 1.4nm chiplet on top of a 2nm chiplet in 2027.


Fig. 1: Samsung’s heterogeneous integration roadmap showing stacked DRAM (HBM), chiplets and co-packaged optics. Source: Samsung Foundry

Intel Foundry’s approach is similar in many ways. “Our 3.5D technology is implemented on a substrate with silicon bridges,” said Kevin O’Buckley, senior vice president and general manager of Foundry Services at Intel. “This is not an incredibly costly, low-yielding, multi-reticle form-factor silicon, or even RDL. We’re using thin silicon slices in a much more cost-efficient fashion to enable that die-to-die connectivity — even stacked die-to-die connectivity — through a silicon bridge. So you get the same advantages of silicon density, the same SI (signal integrity) performance of that bridge without having to put a giant monolithic interposer underneath the whole thing, which is both cost- and capacity-prohibitive. It’s working. It’s in the lab and it’s running.”


Fig. 2: Intel’s 3.5D model. Source: Intel

The strategy here is partly evolutionary — 3.5D has been in R&D for at least several years — and partly revolutionary, because thinning the interconnect layers, handling them, and bonding them reliably are still works in progress. There is a potential for warping, cracking, or other latent defects, and dynamically configuring data paths to maximize throughput is an ongoing challenge. But there have been significant advances in thermal management on two- and three-chiplet stacks.

“There will be multiple solutions,” said C.P. Hung, vice president of corporate R&D at ASE. “For example, besides the device itself and an external heat sink, a lot of people will be adding immersion cooling or local liquid cooling. So for the packaging, you can probably also expect to see the implementation of a vapor chamber, which will add a good interface from the device itself to an external heat sink. With all these challenges, we also need to target a different pitch. For example, nowadays you see mass production with a 45 to 40 micron pitch. That is a typical bumping solution. We expect the industry to move to a 25 to 20 micron bump pitch. Then, to go further, we need hybrid bonding, which is a less than 10 micron pitch.”


Fig. 3: Today’s interposers support more than 100,000 I/Os at a 45µm pitch. Source: ASE
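
Those pitch numbers translate directly into connection density. A quick check under a simple assumption (square bump grid, no keep-out area):

    # Connection density scales with 1/pitch^2, which is why the move from
    # ~45 um bumps to sub-10 um hybrid bonds is such a large jump.
    def ios_per_mm2(pitch_um):
        pitch_mm = pitch_um / 1000.0
        return 1.0 / (pitch_mm ** 2)

    for pitch in (45, 25, 10):
        print(f"{pitch:>2} um pitch -> {ios_per_mm2(pitch):>8,.0f} connections/mm^2")

    # At a 45 um pitch, 100,000 I/Os occupy roughly 100,000 * 0.045**2, or about
    # 203 mm^2 of interposer area.
    print("area for 100,000 I/Os at 45 um:", round(100_000 * 0.045 ** 2, 1), "mm^2")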

Hybrid bonding solves another thorny problem, which is co-planarity across thousands of micro-bumps. “People are starting to realize that the densities we’re interconnecting require a level of flatness, which the guys who make traditional things to be bonded are having a hard time meeting with reasonable yield,” said David Fromm, COO at Promex Industries. “That makes it hard to build them, and the thinking is, ‘So maybe we’ve got to do something else.’ You’re starting to see some of that.”

Taming the Hydra
Managing heat remains a challenge, even with all the latest advances and a 3.5D assembly, but the ability to isolate the thermal effects from other components is the best option available today, and possibly well into the future. Still, there are other issues to contend with. Even 2.5D isn’t easy, and a large percentage of the 2.5D implementations have been bespoke designs by large systems companies with very deep pockets.

One of the big remaining challenges is closing timing so that signals arrive at the right place at the right fraction of a second. This becomes harder as more elements are added into chips, and in a 3.5D or 3D-IC, this can be incredibly complex.

“Timing ultimately is the key,” said Sutirtha Kabir, R&D director at Synopsys. “It’s not guaranteed that at whatever your temperature is, you can use the same library for timing. So the question is how much thermal- and IR-aware timing do you have to do? These are big systems. You have to make sure your sign-off is converging. There are two things coming out. There are a bunch of multi-physics effects that are all clumped together. And yes, you could traditionally do one at a time as sign-off, but that isn’t going to work very well. You need to figure out how to solve these problems simultaneously. Ultimately, you’re doing one design. It’s not one for thermal, one for IR, one for timing. The second thing is the data is exploding. How do you efficiently handle the data, because you cannot wait for days and days of runtime and simulation and analysis?”
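
To make Kabir's point concrete, here is a toy model with made-up coefficients. It only illustrates why sweeping IR drop and temperature separately understates the true worst-case delay compared with analyzing them jointly; it is not a real timing library or sign-off flow.

    # Toy delay model: slower at lower supply voltage and higher temperature.
    def gate_delay_ps(v, t_c, d0=100.0, k=1.3, t0=25.0, alpha=0.0012):
        return d0 * (0.75 / v) ** k * (1.0 + alpha * (t_c - t0))

    V_NOM, T_NOM = 0.75, 25.0
    v_corners = [0.75, 0.70, 0.675]        # IR drop pulls the rail down
    t_corners = [25.0, 85.0, 110.0]        # workload-driven hot spots

    worst_v_only = max(gate_delay_ps(v, T_NOM) for v in v_corners)
    worst_t_only = max(gate_delay_ps(V_NOM, t) for t in t_corners)
    worst_joint = max(gate_delay_ps(v, t) for v in v_corners for t in t_corners)

    print(f"worst case, IR sweep alone   : {worst_v_only:6.1f} ps")
    print(f"worst case, thermal alone    : {worst_t_only:6.1f} ps")
    print(f"worst case, analyzed jointly : {worst_joint:6.1f} ps")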

Physically assembling these devices isn’t easy, either. “The challenge here is really in the thermal, electrical, and mechanical connection of all these various die with different thicknesses and different coefficients of thermal expansion,” said Intel’s O’Buckley. “So with three die, you’ve got the die and an active base, and those are substantially thinned to enable them to come together. And then EMIB is in the substrate. There’s always intense thermal-mechanical qualification work done to manage not just the assembly, but to ensure in the final assembly — the second-level assembly when this is going through system-level card attach — that this thing stays together.”

And depending upon demands for speed, the interconnects and interconnect materials can change. “Hybrid bonding gives you, by far, the best signal and power density,” said Arm’s DeLaCruz. “And it gives you the best thermal conductivity, because you don’t have that underfill that you would otherwise have to put in between the die, which is a pretty significant barrier. This is likely where the industry will go. It’s just a matter of having the production base.”

Hybrid bonding has been used for years for image sensors using wafer-on-wafer connections. “The tricky part is going into the logic space, where you’re moving from wafer-on-wafer to a die-on-wafer process, which is more complex,” DeLaCruz said. “While it currently would cost more, that’s a temporary problem because there’s not much of an installed base to support it and drive down the cost. There’s really no expensive material or equipment costs.”

Toward mass customization
All of this is leading toward the goal of choosing chiplets from a menu and then rapidly connecting them into some sort of architecture that is proven to work. That may not materialize for years. But commercial chiplets will show up in advanced designs over the next couple years, most likely in high-bandwidth memory with a customized processor in the stack, with more following that path in the future.

At least part of this will depend on how standardized the processes for designing, manufacturing, and testing become. “We’re seeing a lot of 2.5D from customers able to secure silicon interposers,” said Ruben Fuentes, vice president for the Design Center at Amkor Technology. “These customers want to place their chiplets on an interposer, then the full module is placed on a flip-chip substrate package. We also have customers who say they either don’t want to use a silicon interposer or cannot secure them. They prefer an RDL interconnect with S-SWIFT or with S-Connect, which serves as an interposer in very dense areas.”

But with at least a third of these leading designs only for internal use, and the remainder confined to large processor vendors, the rest of the market hasn’t caught up yet. Once it does, that will drive economies of scale and open the door to more complete assembly design kits, commercial chiplets, and more options for customization.

“Everybody is generally going in the same direction,” said Fuentes. “But not everything is the same height. HBMs are pre-packaged and are taller than ICs. HBMs could have 12 or 16 ICs stacked inside. It makes a difference from a co-planarity and thermal standpoint, and metal balancing on different layers. So now vendors are having a hard time processing all this data because suddenly you have these huge databases that are a lot bigger than the standard packaging databases. We’re seeing bridges, S-Connect, SWIFT, and then S-SWIFT. This is new territory, and we’re seeing a performance gap in the packaging tools. There’s work that needs to be done here, but software vendors have been very proactive in finding solutions. Additionally, these packages need to be routed. There is limited automated routing, so a good amount of interactive routing is still required, so it takes a lot of time.”


Fig. 4: Packaging roadmap showing bridge and hybrid bonding connections for modules and chiplets, respectively. Source: Amkor Technology

What’s missing
The key challenges ahead for 3.5D are proven reliability and customizability — requirements that are seemingly contradictory, and which are beyond the control of any single company. There are four major pieces to making all of this work.

EDA is the first important piece of the puzzle, and the challenge extends beyond just a single chip. “The IC designers have to think about a lot of things concurrently, like thermal, signal integrity, and power integrity,” said Keith Lanier, technical product management director at Synopsys. “But even beyond that, there’s a new paradigm in terms of how people need to work. Traditional packaging folks and IC designers need to work closely together to make these 3.5D designs successful.”

It’s not just about doing more with the same or fewer people. It’s doing more with different people, too. “It’s understanding the architecture definition, the functional requirements, constraints, and having those well-defined,” Lanier said. “But then it’s also feasibility, which includes partitioning and technology selection, and then prototyping and floor-planning. This is lots and lots of data that is required to be generated, and you need analysis-driven exploration, design, and implementation. And AI will be required to help designers and system design teams manage the sheer complexity of these 3.5D designs.”

Process/assembly design kits are a second critical piece, and this is likely to be split between the foundries and the OSATs. “If the customer wants a silicon interposer for a 2.5D package, it would be up to the foundry that’s going to manufacture the interposer to provide the PDK. We would provide the PDK for all of our products, such as S-SWIFT and S-Connect,” said Amkor’s Fuentes.

Setting realistic parameters is the third piece of the puzzle. While the type of processing elements and some of the analog functions may change — particularly those involving power and communication — most of the components will remain the same. That determines what can be pre-built and pre-tested, and the speed and ease of assembly.

“A lot of the standards that are being deployed, like UCIe interfaces and HBM interfaces are heading to where 20% is customization and 80% is on the shelf,” said Intel’s O’Buckley. “But we aren’t there today. At the scale that our customers are deploying these products, the economics of spending that extra time to optimize an implementation is a decimal point. It’s not leveraging 80/20 standards. We’ll get there. But most of these designs you can count on your fingers and toes because of the cost and scale required to do them. And until the infrastructure for standards-based chiplets gets mature, the barrier of entry for companies that want to do this without that scale is just too high. Still, it is going to happen.”

Ensuring processes are consistent is the fourth piece of the puzzle. The tools and the individual processes don’t need to change. “The customer has a ‘target’ for the outcome they want for a particular tool, which typically is a critical dimension measured by a metrology tool,” said David Park, vice president of marketing at Tignis. “As long as there is some ‘measurement’ that determines the goodness of some outcome, which typically is the result of a process step, we can either predict the bad outcome — and engineers have to take some corrective or preventive action — or we can optimize the recipe of that tool in real time to keep the result in the range they want.”

Park noted there is a recipe that controls the inputs. “The tool does whatever it is supposed to do,” he said. “Then you measure the output to see how far you deviated from the acceptable output.”
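
A minimal sketch of the closed loop Park describes, under assumed names and a linear process model: measure a critical dimension after each run, back out the uncontrolled drift, and update one recipe knob (here, a hypothetical etch time) with an EWMA run-to-run controller. Real APC systems are considerably richer than this.

    import random

    TARGET_CD_NM = 28.0          # desired critical dimension
    GAIN_NM_PER_S = -0.4         # assumed sensitivity: +1 s etch -> -0.4 nm CD
    LAM = 0.5                    # EWMA smoothing factor
    BASE_TIME_S = 60.0

    def run_tool_and_measure(etch_time_s):
        """Stand-in for the real process step plus metrology (hypothetical model)."""
        chamber_drift_nm = 1.5
        noise_nm = random.gauss(0.0, 0.1)
        return (28.0 + chamber_drift_nm
                + GAIN_NM_PER_S * (etch_time_s - BASE_TIME_S) + noise_nm)

    intercept_est = TARGET_CD_NM     # running estimate of the uncontrolled CD
    etch_time_s = BASE_TIME_S
    for run in range(5):
        cd = run_tool_and_measure(etch_time_s)
        # Back out what the CD would have been with no adjustment, then smooth it.
        intercept_est = (LAM * (cd - GAIN_NM_PER_S * (etch_time_s - BASE_TIME_S))
                         + (1 - LAM) * intercept_est)
        etch_time_s = BASE_TIME_S + (TARGET_CD_NM - intercept_est) / GAIN_NM_PER_S
        print(f"run {run}: measured {cd:5.2f} nm -> next etch time {etch_time_s:5.2f} s")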

The challenge is that inside of a 3.5D system, what is considered acceptable output is still being defined. There are many processes with different tolerances. Defining what is consistent enough will require a broad understanding of how all the pieces work together under specific workloads, and where the potential weaknesses are that need to be adjusted.

“One of the problems here is as these densities get higher and the copper pillars get smaller, the amount of space you need between the pillar and the substrate has to be highly controlled,” said Dick Otte, president and CEO of Promex. “There’s a conflict — not so much with how you fabricate the chip, because it usually has the copper pillars on it — but with the substrate. A lot of the substrate technologies are not inherently flat. It’s the same issue with glass. You’ve got a really nice flat piece of glass. The first thing you’re going to do is put down a layer of metal and you’re going to pattern it. And then you put down a layer of dielectric, and suddenly you’ve got a lump where the conductor goes. And now, where do you put the contact points? So you always have the one plane which is going to be the contact point where all the pillars come in. But what if I only need one layer and I don’t need three?”

Conclusion
For the past decade, the chip industry has been trying to figure out a way to balance faster processing, domain-specific designs, limited reticle size, and the enormous cost of scaling an SoC. After investigating nearly every possible packaging approach, interconnect, power delivery method, substrate and dielectric material, 3.5D has emerged as the front runner — at least for now.

This approach provides the chip industry with a common thread on which to begin developing assembly design kits, commercial chiplets, and to fill in the missing tools and services throughout the supply chain. Whether this ultimately becomes a springboard for full 3D-ICs, or a platform on which to use 3D stacking more effectively, remains to be seen. But for the foreseeable future, large chipmakers have converged on a path forward to provide orders of magnitude performance improvements and a way to contain costs. The rest of the industry will be working to smooth out that path for years to come.

Related Reading
Intel Vs. Samsung Vs. TSMC
Foundry competition heats up in three dimensions and with novel technologies as planar scaling benefits diminish.
3D Metrology Meets Its Match In 3D Chips And Packages
Next-generation tools take on precision challenges in three dimensions.
Design Flow Challenged By 3D-IC Process, Thermal Variation
Rethinking traditional workflows by shifting left can help solve persistent problems caused by process and thermal variations.
Floor-Planning Evolves Into The Chiplet Era
Automatically mitigating thermal issues becomes a top priority in heterogeneous designs.

The post 3.5D: The Great Compromise appeared first on Semiconductor Engineering.

Real-World Applications Of Computational Fluid Dynamics

By Ed Sperling | 5 August 2024, 09:15

More powerful chips are enabling systems to process more data faster, but they’re also having a revolutionary impact on how that data can be used. Simulations that used to take days or weeks now can be completed in a matter of hours, and multi-physics simulations that were implausible to even consider are now very much in the realm of what is possible. Parviz Moin, professor of mechanical engineering and director of the Center for Turbulence Research at Stanford University, talks about a future filled with “what if” scenarios, more grid points to capture tiny anomalies in wind or the behavior of jet engines, and much more detailed high-fidelity numerical simulations of waves, chemical reactions, and phase changes.

The post Real-World Applications Of Computational Fluid Dynamics appeared first on Semiconductor Engineering.

Making Adaptive Test Work Better

By Ed Sperling | 10 June 2024, 09:15

One of the big challenges for IC test is making sense of mountains of data, a direct result of more features being packed onto a single die, or multiple chiplets being assembled into an advanced package. Collecting all that data through various agents and building models on the tester no longer makes sense for a couple reasons — there is too much data, and there are multiple customers using the same equipment. Steve Zamek, director of product management at PDF Solutions, and Eli Roth, product manager at Teradyne, explain how to optimize testing around different data sources, how to partition that data between the edge and the cloud, and how to ensure it remains secure.

The post Making Adaptive Test Work Better appeared first on Semiconductor Engineering.

Sensor Fusion Challenges In Automotive

By Ed Sperling | 2 May 2024, 09:15

The number of sensors in automobiles is growing rapidly alongside new safety features and increasing levels of autonomy. The challenge is integrating them in a way that makes sense, because these sensors are optimized for different types of data, sometimes with different resolution requirements even for the same type of data, and frequently with very different latency, power consumption, and reliability requirements. Pulin Desai, group director for product marketing, management and business development at Cadence, talks about challenges with sensor fusion, the growing importance of four-dimensional sensing, what’s needed to future-proof sensor designs, and the difficulty of integrating one or more software stacks with conflicting requirements.

The post Sensor Fusion Challenges In Automotive appeared first on Semiconductor Engineering.

Intel, And Others, Inside

By Ed Sperling | 22 February 2024, 09:30

Intel this week made a strong case for how it will regain global process technology leadership, unfurling an aggressive technology and business roadmap that includes everything from several more process node shrinks that ultimately could scale into the single-digit angstrom range to a broad shift in how it approaches the market. Both will be essential for processing the huge amount of data for AI everywhere, and to win back some of the market share that NVIDIA currently wields.

Intel’s strategy is many layers thick. It includes a long list of innovations, from backside power delivery, which significantly reduces congestion and noise in more than a dozen metal layers, to 2.5D and 3D-ICs using a standard architecture for internally developed and third-party chiplets and IP. And it encompasses everything from a broad and open ecosystem — a sharp departure from its past own-it-all strategy, which once prompted anti-competitive charges — to a willingness to work more closely with customers, competitors, and governments to achieve its goals.

Whether Intel delivers on that promise remains to be seen. But the breadth of its vision, particularly for Intel Foundry, and the ambitious delivery schedule represent a sharp change in direction and culture for the 56-year-old chipmaker.

Of particular note is how the company’s internal silos are being deconstructed. Intel CEO Pat Gelsinger said there will be a clean line between Intel products and Intel Foundry, but noted the foundry will offer developments and innovations created by Intel’s product teams. “The à la carte menu is wide open for the industry,” he said during a Q&A session. “Clearwater Forest, which I showed today, is a construct that was innovated by my Xeon team — how to do hybrid bonding, Intel 3 base dies, 18A top die, being able to solve a lot of the CoWoS/Foveros problems using EMIB and hybrid bonding. That will become a set of collaterals that will benefit the foundry. They’re going to sell that constructional opportunity as a better way to build AI chips. So clearly, I’m taking product group intellectual property and leveraging it on the foundry side.”

This may sound like business as usual for a foundry, but Intel for years waffled over its commitment to a foundry model. In fact, it was viewed as an IDM that only began selling wafers to customers as the cost of maintaining its own fabs began to skyrocket. Creating separation between the two is a fundamental shift, and it’s an essential one for building trust in a complex world of sometimes partners, sometimes competitors. In the past, the company closely guarded its IP, confining it to its own processors rather than unbundling it and selling it to potential competitors. With this new division, Intel potentially can generate profits both from customized chips that leverage technologies it develops internally for other applications, and from its own chip business.

Gelsinger is adamant about this being a leading-edge technology company, not a supplier of all components required in systems. “I’m not going to solve 200mm supply chain issues,” he said. “And, by the way, there are not going to be more 200mm factories built, for the most part, outside of specialty like SiC. There are some crazy discussions, like when some Europeans say, ‘I don’t need a leading-edge factory in Europe. Give me a 40nm node.’ What a stupid statement. It takes 5 years to build a new 40nm node, which is already 20 years old, and to make it economically viable, we’re going to have to run it 30 more years. Move the designs to more modern nodes as opposed to expecting to build old factories that are already out of date.”

Put in perspective, Intel is basically taking the Apple approach to chipmaking. How it fares against the other two leading-edge foundries isn’t clear at this point. Until 22nm, which was 16/14nm for Samsung and TSMC, Intel was the front runner in process technology. Several nodes later, it was trailing both Samsung and TSMC. The company is on a mission to regain its leadership.

“Great industries have two or three strong players,” said Gelsinger. “TSMC is a great company, and we’re going to build a great foundry, as well. And we’re going to challenge each other to further greatness. They are the best company in terms of customer support, bar none, in the industry, and they do not have a legacy of leadership technologies. They implement technologies with great customer support. We have a deep legacy of leadership technologies across domains that we created…At our innovation conference in September, I showed three companies participating in chiplet standardization — Synopsys, Intel, and TSMC — with our test chips. The world wants chiplets across a range of suppliers. I think we’ll be doing some of that. I think they’re going to be doing some of that. And I want to make sure that our mutual customers have great choice and technology benefits.”

If successful, Intel Foundry may not be the lowest-cost chipmaker, as TSMC founder Morris Chang has intimated. But if it can win back some key customers, and attract a lot of new ones with better performance, higher energy efficiency, and more customization options, that may not matter. What does matter is execution on the roadmap, and the number of companies rallying around Intel appears to indicate that significant changes are afoot, and that the competitive landscape could be a lot more fluid than it was several years ago.

The post Intel, And Others, Inside appeared first on Semiconductor Engineering.

Broad Impact From Accelerating Tech Cycles

By Ed Sperling | 21 February 2024, 09:01

Experts at the Table: Semiconductor Engineering sat down to discuss the impact of leading edge technologies such as generative AI in data centers, AR/VR, and security architectures for connected devices, with Michael Kurniawan, business strategy manager at Accenture; Kaushal Vora, senior director and head of business acceleration and ecosystem at Renesas Electronics; Paul Karazuba, vice president of marketing at Expedera; and Chowdary Yanamadala, technology strategist at Arm. What follows are excerpts of that conversation. Panelists were chosen by GSA’s EMTECH Interest Group. To view part one of this discussion, click here.


L-R: Accenture’s Kurniawan; Renesas’ Vora; Expedera’s Karazuba; Arm’s Yanamadala.

SE: In the past, a lot of data center applications were for things like enterprise resource planning (ERP), and those were 10- or 15-year cycles. Cycles now are 1 or 2 years at most. With ChatGPT, that’s about six months. How do companies plan for this today?

Kurniawan: In the past, businesses were very focused on just the technology. But technology is everywhere today. ERP is there to support the business initiatives, and there is a very intimate relationship between technology and business at this point. So virtually all businesses are technology businesses. We advise clients before implementing their technologies to think first about, ‘What are your business initiatives? What’s the business strategy? What’s the business imperative for where you want to go? What’s your vision?’ And then, once you understand that and get alignment from the leaders, you can think about the technology. You kind of jump back and forth, because those are really two sides of the same coin. You cannot separate them anymore. And your vision encompasses everything you want to achieve in the future while providing room for flexibility and testing out the technology plan you want to put in place to see how that supports your business vision. With every challenge comes opportunity. Our job as a consultant is really to be able to see what’s happening out there, continuously scanning the market, and trying to get ahead of the curve to advise clients.

Yanamadala: The rapid evolution of advanced technologies like generative AI can present challenges to data centers due to the short technology cycles and demanding workloads. Some of the key challenges with advanced workloads include fluctuating resource needs, because they can demand bursts of high compute. That means static resource allocation will be inefficient in handling these demands. Additionally, the growing demand for heterogeneous computing can also present additional challenges in deploying a flexible compute infrastructure. Data centers are adding flexibility through adoption of containerization and virtualization. Adopting hardware-agnostic software frameworks like TensorFlow and PyTorch also can help to facilitate switching between different computing architectures. So can the development of efficient hardware and specialized AI accelerators.
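
As a minimal illustration of the hardware-agnostic point above: in PyTorch, the same model code runs on whichever accelerator the system exposes, so switching compute architectures does not require rewriting the workload. This is a generic sketch, not tied to any specific deployment.

    import torch

    # Pick whatever accelerator is present; the same pattern extends to the
    # other backends the framework supports.
    device = "cuda" if torch.cuda.is_available() else "cpu"

    model = torch.nn.Linear(1024, 256).to(device)
    batch = torch.randn(32, 1024, device=device)
    print(model(batch).shape, "computed on", device)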

SE: A lot of technology advancements are incremental, but if you get enough of these incremental improvements they can be combined in ways most people never imagined. We’ve seen systems shrink from mainframes to PCs to smart phones, and now computing is happening just about everywhere. Are we on the cusp of moving beyond a box, which we’ve been tethered to since the start of computing, particularly with AR/VR?

Vora: I find it fascinating that somebody could wear a pair of glasses, get immersed in that world, and get used to it. From a user experience perspective, it seems like an extreme shift. Although I do see some play in certain verticals, it’s not clear there will be mass consumerization or adoption of this technology.

Kurniawan: Right now, generative AI is getting a lot of attention. ChatGPT captured the attention of hundreds of millions of people in 60 days. That says something. You input a prompt and you get a response back. ChatGPT is super-intuitive. It’s a technology with potential for many killer use cases. AR/VR is promising technology with upside potential, but there’s still work that needs to be done to tie that technology to the use case. Virtual reality gaming is number one, for sure. But the path to leveraging that technology to enhance how we operate other stuff still needs more clarity. That said, we recently published a white paper talking about the build-outs around the globe, driven by the combination of public incentives and private investments. Everywhere around the world, everybody wants to build up their manufacturing facilities. We conducted interviews with semiconductor experts, and touched on AR/VR when we asked what they did during COVID when the whole world shut down. Is AR/VR like a hammer looking for nails? The overall response we got was pretty positive. They said that AR/VR probably will be tremendously useful at some future date. But they like where the technologies are going. For example, there are constraints like heat dissipation and the size of the headset, but the belief is the technology will evolve. As it matures to become more user-centric, you might think about using an AR/VR device to control the operations of the equipment in a fab. But there is work needed from a value perspective — connectivity and processing, for example.

Karazuba: AR/VR in the past has largely been a victim of its own hype cycle. There are a lot of promises people have made. We’ve spent a little bit of time with AR/VR folks. There’s certainly an acknowledgement that whatever success the Apple AR/VR headset has will largely set the tone for the next half decade for what the AR/VR market is. These folks are undeterred by that. Are we at a point today where you can walk around all day with mixed reality? No. With a home gaming system, being tied to the wall is probably a small price to pay for the constant AC power and the performance advantages that will provide. This is going to take some time. The value proposition is there, but the timing may not be right today. We saw this with the watch and wearables. Now, everybody has one of these. But it took five to seven years before it really took off.

Vora: We’ve worn watches for decades, so it’s not something new. It’s just that what we wear now is different. But with AR/VR, we’ve never done that before. How do you suddenly expect massive change like that?

Karazuba: But most of us are wearing eyeglasses. If you have a form factor that is a version of what we have now, where information is just simply overlaid on what we’re seeing, it’s not that far of a jump for mixed reality or augmented reality. However, with virtual reality, I find it hard to believe that people are going to walk into a conference room with a bunch of other people and put a headset on.

Yanamadala: We’ve seen devices and sensors deployed practically everywhere. Platforms that offer high-performance computing, along with secure, power-efficient hardware and connectivity are available today, and they will make this trend possible. But untethered or ambient consumer experiences in the mass market will have their challenges. We will need to invest in substantial infrastructure to enable technology to operate invisibly in the background. So while consumer-facing technology deployments increasingly become untethered, the compute and connectivity infrastructure will still require connections for power and bandwidth.

SE: People have been sounding the alarm for hardware security for years, but with limited success. What’s changed today is that we have many more connected devices and more valuable data. Is the chip industry starting to take this seriously? Or is the problem now so immense and pervasive that anything we do is just going to be a drop in the bucket?

Yanamadala: Security is fundamental from the chip level, and five years ago we saw an opportunity to proactively improve the quality of chip security. IoT was in its early stages, and each chip vendor had varied and fragmented approaches to security. They also rarely approached an independent evaluation lab to check the robustness of their security implementation. But with increasing connectivity and data becoming more valuable, hackers were paying close attention, and governments were considering what action to take to protect consumers. That’s why in 2019, we launched PSA Certified – to rally the ecosystem to be proactive with security best practices. It’s critically important that chip vendors, software platforms, OEMs, and CSPs can deploy and access standardized Root of Trust services. Security is complicated. You need the whole value chain to work together.

Vora: Security architectures, at least on the hardware side, have come a long way. We pretty much now have a semiconductor TPM-like [Trusted Platform Module] capability, with security capabilities built into even small microcontrollers. They have cryptographic engines, randomizers, and all sorts of security elements built in. The fundamental challenge with security is that just putting some security features on a chip and providing all the technology pieces won’t solve the security challenge. Security is more of a system challenge and a policy challenge. In many cases, people have to think about it within the context of the entire network. And then, it’s only as strong as the weakest link in the network. That piece of security is going to grow in complexity as we start seeing more complex use cases with AI coming into play with IoT. On the other side, though, as data handling of AI moves closer to the edge, we will start seeing more local inferencing and local data being worked on without the need to mindlessly transport data across layers of networks and across the cloud. We’re going to see some lower risk and improvements from a data-in-flight perspective, because of a lot more localization of intelligence and compute happening at different layers of the edge. As we start moving more to the edge, AI starts getting more of a hold there. But as a whole, security will remain a challenge. The fundamental challenges with security have not changed. It’s just that the context and the systems in which we will have to apply them are different.

Karazuba: The semiconductor industry is finally starting to understand the true nature of what security breaches could mean with the type of data we’re handling. Security is a day zero responsibility of anyone building a product, whether that product is a chip or a device, and security responsibilities proliferate across the entire lifecycle of any device, from the person who is architecting the chip, to the person designing the smartphone, to the carrier. I would argue that carrier responsibilities for security go as far as stopping those robocalls that we all get, and the spam calls and phishing calls. The internet service providers have a responsibility to stop the phishing e-mails. That’s all part of security. Obviously, with banks and financial institutions, their security is generally pretty good. But it stretches the entire way, and in the security world, the weakest link is always the security profile of your device. We’re getting better. We always could be better. But I am more encouraged now than I’ve been at any point since I really started looking at security of devices. I’m more encouraged by the way chips are being designed, deployed, manufactured, and delivered to customers.

Kurniawan: There’s some certification for IoT devices before those are sent into the market to make sure there is some security standard they adhere to. But two key words I mentioned before, collaboration and flexibility, are applicable to security, as well. Collaboration involves understanding where you see the rest of the system, including other components in the technology set, evolving in the future. And flexibility is required, because security is a moving target. It needs to evolve, because as you upgrade your system and your software, vulnerabilities will move, as well. You need flexibility and security-minded thinking infused into your chip design.

Related Reading
Preparing For An AI-Driven Future In Chips (part 1 of above roundtable)
Designs need to be flexible enough to handle an onslaught of continuous and rapid changes, but secure enough to protect data.

The post Broad Impact From Accelerating Tech Cycles appeared first on Semiconductor Engineering.

  • ✇Semiconductor Engineering
  • Broad Impact For Accelerating Tech CyclesEd Sperling
    Experts at the Table: Semiconductor Engineering sat down to discuss the impact of leading edge technologies such as generative AI in data centers, AR/VR, and security architectures for connected devices, with Michael Kurniawan, business strategy manager at Accenture; Kaushal Vora, senior director and head of business acceleration and ecosystem at Renesas Electronics; Paul Karazuba, vice president of marketing at Expedera; and Chowdary Yanamadala, technology strategist at Arm. What follows are ex
     

Broad Impact For Accelerating Tech Cycles

21. Únor 2024 v 09:01

Experts at the Table: Semiconductor Engineering sat down to discuss the impact of leading edge technologies such as generative AI in data centers, AR/VR, and security architectures for connected devices, with Michael Kurniawan, business strategy manager at Accenture; Kaushal Vora, senior director and head of business acceleration and ecosystem at Renesas Electronics; Paul Karazuba, vice president of marketing at Expedera; and Chowdary Yanamadala, technology strategist at Arm. What follows are excerpts of that conversation. Panelists were chosen by GSA’s EMTECH Interest Group. To view part one of this discussion, click here.


L-R: Accenture’s Kurniawan; Renesas’ Vora; Expedera’s Karazuba; Arm’s Yanamadala.

SE: In the past, a lot of data center applications were for things like enterprise resource planning (ERP), and those were 10- or 15-year cycles. Cycles now are 1 or 2 years at most. With ChatGPT, that’s about six months. How do companies plan for this today?

Kurniawan: In the past, businesses were very focused on just the technology. But technology is everywhere today. ERP is there to support the business initiatives, and there is a very intimate relationship between technology and business at this point. So virtually all businesses are technology businesses. We advise clients before implementing their technologies to think first about, ‘What are your business initiatives? What’s the business strategy? What’s the business imperative for where you want to go? What’s your vision?’ And then, once you understand that and get alignment from the leaders, you can think about the technology. You kind of jump back and forth, because those are really two sides of the same coin. You cannot separate them anymore. And your vision encompasses everything you want to achieve in the future while providing room for flexibility and testing out the technology plan you want to put in place to see how that supports your business vision. With every challenge comes opportunity. Our job as a consultant is really to be able to see what’s happening out there, continuously scanning the market, and trying to get ahead of the curve to advise clients.

Yanamadala: The rapid evolution of advanced technologies like generative AI can present challenges to data centers due to the short technology cycles and demanding workloads. Some of the key challenges with advanced workloads include fluctuating resource needs, because they can demand bursts of high compute. That means static resource allocation will be inefficient in handling these demands. Additionally, the growing demand for heterogenous computing can also present additional challenges in deploying a flexible compute infrastructure. Data centers are adding flexibility through adoption of containerization and virtualization. Adopting hardware-agnostic software frameworks like TensorFlow and PyTorch also can help to facilitate switching between different computing architectures. So can the development of efficient hardware and specialized AI accelerators.

SE: A lot of technology advancements are incremental, but if you get enough of these incremental improvements they can be combined in ways most people never imagined. We’ve seen systems shrink from mainframes to PCs to smart phones, and now computing is happening just about everywhere. Are we at the on the cusp of moving beyond a box, which we’ve been tethered to since the start of computing, and particularly with AR/VR.

Vora: I find it fascinating that somebody could wear a pair of glasses, get immersed in that world, and get used to it. From a user experience perspective, it seems like an extreme shift. Although I do see some play in certain verticals, it’s not clear there will be mass consumerization or adoption of this technology.

Kurniawan: Right now, generative AI is getting a lot of attention. ChatGPT captured the attention of hundreds of millions of people in 60 days. That says something. You input a prompt and you get a response back. ChatGPT is super-intuitive. It’s a technology with potential for many killer use cases. AR/VR is promising technology with upside potential, but there’s still work that needs to be done to tie that technology to the use case. Virtual reality gaming is number one, for sure. But the path to leveraging that technology to enhance how we operate other stuff still needs more clarity. That said, we recently published a white paper talking about the build-outs around the globe, driven by the combination of public incentivies and private investments. Everywhere around the world, everybody wants to build up their manufacturing facilities. We conducted interviews with semiconductor experts, and touched on AR/VR when we asked what they did during COVID when the whole world shut down. Is AR/VR like a hammer looking for nails? The overall response we got was pretty positive. They said that AR/VR probably will be tremendously useful at some future date. But they like where the technologies are going. For example, there are constraints like heat dissipation and the size of the headset, but the belief is the technology will evolve. As it matures to become more user-centric, you might think about using an AR/VR device to control the operations of the equipment in a fab. But there is work needed from a value perspective — connectivity and processing, for example.

Karazuba: AR/VR in the past has largely been a victim of its own hype cycle. There’s a lot of promises people have made. We’ve spent a little bit of time with AR/VR folks. There’s certainly an acknowledgement that whatever success the Apple AR/VR headset has will largely set the tone for the next half decade for what the AR/VR market is. These folks are not undeterred by that. Are we at a point today where you can walk around all day with mixed reality? No. With a home gaming system, being tied to the wall is probably a small price to pay for the constant AC power and the performance advantages that will provide. This is going to take some time. The value proposition is there, but the timing may not be right today. We saw this with the watch and wearables. Now, everybody has one of these. But it took five to seven years before it really took off.

Vora: We’ve worn watches for decades, so it’s not something new. It’s just that what we wear now is different. But with AR/VR, we’ve never done that before. How do you suddenly expect massive change like that?

Karazuba: But most of us are wearing eyeglasses. If you have a form factor that is a version of what we have now, where information is just simply overlaid on what we’re seeing, it’s not that far of a jump for mixed reality or augmented reality. However, with virtual reality, I find it hard to believe that people are going to walk into a conference room with a bunch of other people and put a headset on.

Yanamadala: We’ve seen devices and sensors deployed practically everywhere. Platforms that offer high-performance computing, along with secure, power-efficient hardware and connectivity are available today, and they will make this trend possible. But untethered or ambient consumer experiences in the mass market will have their challenges. We will need to invest in substantial infrastructure to enable technology to operate invisibly in the background. So while consumer-facing technology deployments increasingly become untethered, the compute and connectivity infrastructure will still require connections for power and bandwidth.

SE: People have been sounding the alarm for hardware security for years, but with limited success. What’s changed today is that we have many more connected devices and more valuable data. Is the chip industry starting to take this seriously? Or is the problem now so immense and pervasive that anything we do is just going to be a drop in the bucket?

Yanamada: Security is fundamental from the chip level, and five years ago we saw an opportunity to proactively improve the quality of chip security. IoT was in its early stages, and each chip vendor had varied and fragmented approaches to security. They also rarely approached an independent evaluation lab to check the robustness of their security implementation. But with increasing connectivity and data becoming more valuable, hackers were paying close attention, and governments were considering what action to take to protect consumers. That’s why in 2019, we launched PSA Certified – to rally the ecosystem to be proactive with security best practices. It’s critically important that chip vendors, software platforms, OEMs, and CSPs can deploy and access standardized Root of Trust services. Security is complicated. You need the whole value chain to work together.

Vora: Security architectures, at least on the hardware side, have come a long way. We pretty much now have a semiconductor TPM-like [Trusted Platform Module] capability, with security capabilities built into even small microcontrollers. They have cryptographic engines, randomizers, and all sorts of security elements built in. The fundamental challenge with security is that just putting some security features on a chip and providing all the technology pieces won’t solve the security challenge. Security is more of a system challenge and a policy challenge. In many cases, people have to think about it within the context of the entire network. And then, it’s only as strong as the weakest link in the network. That piece of security is going to grow in complexity as we start seeing more complex use cases with AI coming into play with IoT. On the other side, though, as data handling of AI moves closer to the edge, we will start seeing more local inferencing and local data being worked on without the need to mindlessly transport data across layers of networks and across the cloud. We’re going to see some lower risk and improvements from a data-in-flight perspective, because of a lot of more localization of intelligence and compute happening at different layers of the edge. As we start moving more to the edge, AI starts getting more of a hold there. But as a whole, security will remain a challenge. The fundamental challenges with security have not changed. It’s just the context and the systems in which we will have to apply them are different.

Karazuba: The semiconductor industry is finally starting to understand the true nature of what security breaches could mean with the type of data we’re handling. Security is a day-zero responsibility for anyone building a product, whether that product is a chip or a device, and security responsibilities extend across the entire lifecycle of any device, from the person who is architecting the chip, to the person designing the smartphone, to the carrier. I would argue that carrier responsibilities for security go as far as stopping the robocalls, spam calls, and phishing calls we all get. The internet service providers have a responsibility to stop the phishing e-mails. That’s all part of security. Obviously, with banks and financial institutions, their security is generally pretty good. But it stretches the entire way, and in the security world, the weakest link is always the security profile of your device. We’re getting better. We always could be better. But I am more encouraged now than I’ve been at any point since I really started looking at the security of devices. I’m more encouraged by the way chips are being designed, deployed, manufactured, and delivered to customers.

Kurniawan: There is some certification for IoT devices before they are sent to market, to make sure there is some security standard they adhere to. But the two key words I mentioned before, collaboration and flexibility, apply to security, as well. Collaboration involves understanding how the rest of the system, including the other components in the technology set, is going to evolve in the future. And flexibility is required because security is a moving target. It needs to evolve, because as you upgrade your system and your software, the vulnerabilities move, as well. You need flexibility and security-minded thinking infused into your chip design.

Related Reading
Preparing For An AI-Driven Future In Chips (part 1 of above roundtable)
Designs need to be flexible enough to handle an onslaught of continuous and rapid changes, but secure enough to protect data.

The post Broad Impact For Accelerating Tech Cycles appeared first on Semiconductor Engineering.

  • ✇Semiconductor Engineering
  • Latency, Interconnects, And PokerEd Sperling
    Semiconductor Engineering sat down with Larry Pileggi, Coraluppi Head and Tanoto Professor of Electrical and Computer Engineering at Carnegie Mellon University, and the winner of this year’s Phil Kaufman Award for Pioneering Contributions. What follows are excerpts of that conversation. SE: When did you first get started working in semiconductors — and particularly, EDA? Pileggi: This was 1984 at Westinghouse research. We were making ASICs — analog and digital — and with digital you had logic si
     

Latency, Interconnects, And Poker

20 February 2024 at 09:09

Semiconductor Engineering sat down with Larry Pileggi, Coraluppi Head and Tanoto Professor of Electrical and Computer Engineering at Carnegie Mellon University, and the winner of this year’s Phil Kaufman Award for Pioneering Contributions. What follows are excerpts of that conversation.

SE: When did you first get started working in semiconductors — and particularly, EDA?

Pileggi: This was 1984 at Westinghouse research. We were making ASICs — analog and digital — and with digital you had logic simulators. But for analog, there was no way to simulate them at Westinghouse. They didn’t even have SPICE loaded onto the machine. So I got a copy of SPICE from Berkeley and loaded that tape, and I was the first to use it in the research center. I saw how limited it was and thought, ‘There must be more mature things than this.’ While I was working there, I was taking a class with Andrzej Strojwas at CMU (Carnegie Mellon University). He came up to me after a few weeks in that class and said, ‘I really think you should come back to school for a PhD.’ I had never considered it up until that point. But getting paid to go to school? That was cool, so I signed up.

SE: Circuit simulation in analog is largely brute force, right?

Pileggi: The tools that are out there are really good. There are many SPICEs out there, and they all have their niches that can do really great things. But it’s not something you can easily scale. That’s really been a challenge. There’s brute force in the innermost loop, but you can accelerate it with hardware.
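
To make “brute force in the innermost loop” concrete, here is a minimal sketch, in Python rather than anything from an actual SPICE code base, of the Newton-Raphson iteration a circuit simulator runs at every solution point. The single-node resistor-diode circuit and all component values are assumptions chosen purely for illustration.

```python
# A minimal sketch of the Newton-Raphson inner loop a circuit simulator runs
# at every solution point. The single-node resistor-diode circuit and all
# values below are assumed purely for illustration; this is not SPICE code.
import numpy as np

VSRC, R = 5.0, 1.0e3        # assumed source voltage (V) and series resistance (ohm)
IS, VT = 1.0e-14, 0.02585   # assumed diode saturation current (A) and thermal voltage (V)

def f(v):
    # KCL residual at the single node: current in through R minus diode current.
    return (VSRC - v) / R - IS * (np.exp(v / VT) - 1.0)

def dfdv(v):
    # Derivative of the residual; for a real netlist this is the Jacobian matrix.
    return -1.0 / R - (IS / VT) * np.exp(v / VT)

v = 0.6                     # initial guess near a typical diode drop
for it in range(200):       # the "brute force" innermost loop: linearize, solve, repeat
    dv = -f(v) / dfdv(v)    # one Newton step (a sparse LU solve for large circuits)
    v += dv
    if abs(dv) < 1e-12:
        break

print(f"node voltage = {v:.6f} V after {it + 1} Newton iterations")
```

Production simulators wrap this loop in device limiting, time-step control, and sparse matrix factorization, which is where most of the runtime goes as circuits grow; that inner linear solve is also the piece that can be accelerated with hardware, as noted above.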

SE: What was the ‘aha’ moment for you with regard to dealing with latency of interconnect as interconnect continued to scale?

Pileggi: There was some interest in looking at RC networks that appeared on chips as sort of a special class of problem. Paul Penfield and others at MIT did this Elmore approximation of RC lines using the first moment of the impulse response. It’s from a 1948 Elmore paper about estimating the delay of amplifiers. Mark Horowitz, a student of Penfield, tried to extend that to a few moments. What we did was more of a generalized approach, using many moments and building high-order approximations that you could apply to these RC lines. So you’re really using this to calculate the dominant time constants, or the dominant poles, in the network. And for RC circuits, what’s really interesting is that the bigger the network gets, the more dominant the poles get. So you could have a million nodes out there — and it’s a million capacitors and a million poles — but for an RC line, three of them will model it really well. That makes things really efficient, provided you can capture those three efficiently. I was naive, not knowing that French mathematicians like [Henri] Padé had already attempted Padé approximations long before. I dove in like, ‘Oh, this should work.’ And I ran into a lot of the realities for why it doesn’t work. But then I was able to apply some of the circuit know-how to squeeze it into a place where it worked very effectively.
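
For readers who want the math behind that description, here is a compact sketch in our own notation (none of these symbols come from the interview). Expanding the transfer function of the RC network in its moments,

\[ H(s) = \int_0^\infty h(t)\,e^{-st}\,dt = \sum_{k \ge 0} m_k s^k, \qquad m_k = \frac{(-1)^k}{k!} \int_0^\infty t^k\, h(t)\,dt, \]

the Elmore delay is the first moment of the normalized impulse response, \( T_D = \int_0^\infty t\,h(t)\,dt = -m_1 \), which for an RC tree reduces to \( T_{D,i} = \sum_k R_{ik} C_k \), with \( R_{ik} \) the resistance of the path shared by the routes from the driver to nodes i and k. Matching the first \( 2q \) moments with a reduced-order rational function,

\[ \hat H(s) = \frac{b_0 + b_1 s + \cdots + b_{q-1} s^{q-1}}{1 + a_1 s + \cdots + a_q s^q}, \]

is exactly a Padé approximation, and the point of the answer above is that for on-chip RC lines a q of roughly three dominant poles already reproduces the response well.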

SE: A lot of that early work was around radio signals. But as you move that into the computing world, what else can you do with that? And if you now don’t have to put everything onto a single chip, does that change things?

Pileggi: Let’s take the power distribution for an IC, for example. That’s primarily dominated on the chip by RC phenomena. The resistance far dominates the jωL impedance — the inductance. But as you move to a package, that’s different. If you put different chips together, whether you stack them or you put them on an interposer, inductance starts to rear its ugly head. Inductance is extraordinarily nasty to model and simulate. The problem is that when you look at capacitances, that’s a potential matrix where you take the nearest couplings and say, ‘Okay, I have enough of this capacitance to say this is going to dominate the behavior.’ You’re essentially throwing away what you don’t need. With inductance, there’s a one-over relationship as compared to capacitance. Now, if you want the dominant inductance effect, that’s not so easy to get. If you have mutual couplings from everything to everything else, and you say you’re going to throw away the couplings to faraway things, that’s a seemingly reasonable thing to do from an accuracy standpoint, but it affects the stability of the approximation. Essentially, it can violate conservation of flux, such that you get positive poles. So you can actually create unstable systems just by throwing away small inductance terms. Usually when you see someone calculating inductance, it’s really just an estimate — or they’ve done some things to crunch it into a stable model.
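
One way to see the instability being described, again in our notation rather than anything stated in the interview: the inductive part of the model is passive only if the partial-inductance matrix L stays positive semidefinite, because the stored magnetic energy is

\[ E_{\text{mag}}(i) = \tfrac{1}{2}\, i^{\mathsf T} L\, i \;\ge\; 0 \quad \text{for every current vector } i \quad\Longleftrightarrow\quad L \succeq 0. \]

Zeroing out the small, far-away mutual terms replaces L with \( \tilde{L} = L - \Delta \), and even though every entry of \( \Delta \) is tiny, \( \tilde{L} \) can pick up negative eigenvalues. A model whose inductance matrix is indefinite can generate energy, so the poles of the resulting RLC network are no longer confined to the left half-plane; some end up with \( \operatorname{Re}(p) > 0 \), which is exactly the positive-pole behavior mentioned above.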

SE: Is that simulation based on the 80/20 rule, or 90/10?

Pileggi: Even for the packages we had before we started doing the multi-chip stuff, power distribution on the chip was RC, but when you flip it into a package with many layers of metal, it’s LC. We’ve had the same problem for the past 20 years, but it has been managed by good engineers. They apply very conservative methods to make sure the chips will work.
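
A standard back-of-the-envelope view of why the LC case is harder, added here for context rather than taken from the interview: the package inductance and the on-die plus package decoupling capacitance form a resonant tank, and the conservative margins mentioned above amount to holding the power-delivery impedance below a target across that resonance,

\[ f_{\text{res}} \approx \frac{1}{2\pi\sqrt{LC}}, \qquad Z_{\text{target}} \approx \frac{V_{dd} \cdot \text{allowed ripple}}{I_{\text{transient}}}. \]

Near \( f_{\text{res}} \) the network impedance peaks, so decoupling capacitors and interconnect inductance are budgeted to keep \( |Z(f)| \le Z_{\text{target}} \) over the frequency band the switching currents actually excite.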

SE: So now, when you pile that into advanced nodes and packages and eliminate all that margin, you’ve got serious challenges, right?

Pileggi: Yes, and that’s why it was a good time for me to switch to electric power grids.

SE: Power grids for our communities have their own set of problems, though, like localization and mixing direct and alternating current, and a bunch of inverters.

Pileggi: It’s a fascinating problem. When I first stepped into it, a student of mine started telling me about how they did simulation. I said, ‘Wow, that doesn’t make any sense.’ I naively thought it was just like a big circuit, but it’s much more than that. It’s a very cool problem to work on. We’ve developed a lot of really exciting technology for that problem. With inverters, there’s a whole control loop. There isn’t the inertia that you have with big rotating machines that are fed by coal. But you have all these components on the same grid. How the grid behaves dynamically is a daunting problem.
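
The inertia referred to here has a compact classical form, included for context rather than taken from the interview: each synchronous machine’s rotor angle obeys a swing equation,

\[ \frac{2H}{\omega_s}\,\frac{d^2\delta}{dt^2} = P_m - P_e - D\,\frac{d\delta}{dt}, \]

where \( \delta \) is the rotor angle, \( H \) the inertia constant, \( \omega_s \) the synchronous frequency, \( P_m \) and \( P_e \) the mechanical and electrical power, and \( D \) a damping term. Inverter-based resources contribute no inherent \( H \); their dynamics come from fast control loops instead, which is a large part of why grid-level dynamic simulation behaves so differently from circuit simulation.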

SE: Does that also vary by weather? You’ve got wide variations in ambient temperature and all sorts of noise to contend with.

Pileggi: Yes, absolutely. In fact, how the lines behave is very much a function of temperature. That affects how resistive the transmission lines are. Frequency is very low, but the lengths are very long, so you have similar problems, but even more so with renewables. There’s sun, then a cloud, then sun. Or the wind changes direction. How do you store energy for use later? That’s where they talk about heavy batteries in the ground and things like that. Doing this with an old grid, like the one we have, is challenging. I’d much rather be starting from scratch.
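
The temperature dependence mentioned here is usually captured with the standard linear resistivity model, added for context:

\[ R(T) \approx R_{\text{ref}}\,\bigl[ 1 + \alpha\,(T - T_{\text{ref}}) \bigr], \]

with \( \alpha \) on the order of 0.004 per °C for the aluminum and copper conductors used in overhead lines, so a hot afternoon measurably changes line losses and thermal ratings on top of whatever the renewable generation happens to be doing.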

SE: When you got started in electronics, was it largely the domain of some very big companies with very big research budgets?

Pileggi: Yes, and this is where you saw where management really makes a difference. Some of those companies, like Westinghouse Research, had these incredible R&D facilities, but they didn’t utilize them effectively, like all the gallium arsenide research where I was working. It seemed that every time we would develop something to improve something, the management didn’t always know what to do with it. I worked with some of the smartest people I’ve ever met, and they had worked on projects like the first camera in space, but they were living in obscurity. Nobody knew anything about their work, but it was just amazing.

SE: One other math-related question. You apparently have a reputation for being a very strong poker player. How did these two worlds collide?

Pileggi: I was in Las Vegas for a DARPA meeting and I had an afternoon off and there was a Texas Hold’em poker tournament going on. I thought it would be kind of fun, so I played four or five hours, got knocked out, and it cost me 100 bucks. I was intrigued by it, though. I went back to Pittsburgh and found our local casino had started a poker room with tournaments. I started getting better, probably because I read like 30 books on the subject. The more you play, the more you realize there are lots of layers to this. I ultimately played in the World Series in Vegas, because it’s like a bucket-list thing, and that first time I made it to day two of the main event. That’s equivalent to finishing in the top 40% of the field. When I was back in Pittsburgh, there was a ‘Poker Night in America’ event at the casino. There were about 300 people and some pros. I played in that, and won first place. That was a Saturday around Thanksgiving in 2013. We played from noon until just after midnight, and then you start again on Sunday. We played until maybe 5 a.m.

SE: That must have taken a toll.

Pileggi: Yes, because I was chairing the search for new department heads. I had a Monday morning meeting scheduled that I couldn’t miss, so I e-mailed everyone to say I would be an hour late and asked if they could push back the meeting. I went home and ate something, slept for an hour, and went to campus to do the final vote. They asked what happened. I said I was in a poker tournament. They thought I was joking. But then they saw me on TV. All the local news stations covered it like, ‘Local professor skips school.’ I got a call from someone I hadn’t talked to in 34 years. My dean said his son thought engineering was stupid, but then he found out that this engineer won this poker tournament, and now he thinks engineering is really cool.

SE: How did that affect your engineering classes?

Pileggi: I introduced myself to a group of students here two years ago when I became the department head and asked if they had any questions. One young lady raised her hand and said, ‘Yeah, can you teach us how to play poker?’ So now I do a poker training session with students once a semester.

The post Latency, Interconnects, And Poker appeared first on Semiconductor Engineering.
