Voltage Reference Architectures For Harsh Environments: Quantum Computing And Space

A technical paper titled “Cryo-CMOS Voltage References for the Ultrawide Temperature Range From 300 K Down to 4.2 K” was published by researchers at Delft University of Technology, QuTech, Kavli Institute of Nanoscience Delft, and École Polytechnique Fédérale de Lausanne (EPFL).

Abstract:

“This article presents a family of sub-1-V, fully-CMOS voltage references adopting MOS devices in weak inversion to achieve continuous operation from room temperature (RT) down to cryogenic temperatures. Their accuracy limitations due to curvature, body effect, and mismatch are investigated and experimentally validated. Implemented in 40-nm CMOS, the references show a line regulation better than 2.7%/V from a supply as low as 0.99 V. By applying dynamic element matching (DEM) techniques, a spread of 1.2% (3σ) from 4.2 to 300 K can be achieved, resulting in a temperature coefficient (TC) of 111 ppm/K. As the first significant statistical characterization extending down to cryogenic temperatures, the results demonstrate the ability of the proposed architectures to work under cryogenic harsh environments, such as space- and quantum-computing applications.”

Find the technical paper here. Published April 2024.

J. van Staveren et al., “Cryo-CMOS Voltage References for the Ultrawide Temperature Range From 300 K Down to 4.2 K,” in IEEE Journal of Solid-State Circuits, doi: 10.1109/JSSC.2024.3378768.
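
For readers unfamiliar with how a temperature coefficient over such a wide range is reported, the "box method" is a common convention: divide the total output spread by the nominal output and the temperature span. The sketch below uses that definition with made-up measurement values; it is an illustration of the convention, not data or code from the paper.

```python
# Box-method temperature coefficient (TC) for a voltage reference:
#   TC [ppm/K] = (Vmax - Vmin) / (Vnominal * (Tmax - Tmin)) * 1e6
# Measurement values below are illustrative, not from the paper.

def tc_box_ppm_per_k(v_ref, t_min_k, t_max_k):
    """Return the box-method TC in ppm/K over [t_min_k, t_max_k]."""
    v_max, v_min = max(v_ref), min(v_ref)
    v_nom = sum(v_ref) / len(v_ref)        # use the mean as the nominal value
    return (v_max - v_min) / (v_nom * (t_max_k - t_min_k)) * 1e6

# Hypothetical sub-1-V reference sampled at 4.2, 77, 150, 225, and 300 K:
v_ref = [0.912, 0.915, 0.918, 0.921, 0.942]    # volts
print(f"TC = {tc_box_ppm_per_k(v_ref, 4.2, 300.0):.0f} ppm/K")   # ~110 ppm/K
```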

Further Reading
The Race Toward Quantum Advantage
Enormous amounts of money have been invested into quantum computing, but so far it has not surpassed conventional computers. When will that change?
Managing P/P Tradeoffs With Voltage Droop Gets Trickier
Higher current densities set against lower power envelopes make meeting specs more challenging, especially at advanced nodes.

The post Voltage Reference Architectures For Harsh Environments: Quantum Computing And Space appeared first on Semiconductor Engineering.

Design Considerations In Photonics

By Karen Heyman, Semiconductor Engineering

1 May 2024, 09:05

Experts at the Table: Semiconductor Engineering sat down to talk about what CMOS and photonics engineers need to know to successfully collaborate, with James Pond, fellow at Ansys; Gilles Lamant, distinguished engineer at Cadence; and Mitch Heins, business development manager for photonic solutions at Synopsys. What follows are excerpts of that conversation. To view part one of this discussion, click here. Part two is here.


L-R: Ansys’s Pond, Cadence’s Lamant, Synopsys’ Heins

SE: What do engineers who have spent their careers in CMOS need to know about designing for photonics?

Lamant:  It’s hard, no illusion. I had good mentors, including both James and Mitch, so I actually did that transition. Ten years ago, I knew nearly nothing about photonics. It takes having good mentors who can help you. That’s the biggest thing. It’s not enough to just try the software on your own. In addition, having an RF background is very useful in many ways. Photonics is the multiplication of RF. In photonics, you have multiple modes. In RF, you tend to only consider one mode, but a lot of the theory behind photonics is very much a generalization of RF.

Heins: We try to make our photonics flow look as much as we can like our electronics flow. We try to take the last 30 to 40 years of learning in EDA and apply it to photonics. One thing we see a lot is that when people are coming right out of school in photonics, they don’t necessarily have a deep background in how to do IC design. There are a lot of things we’ve learned, like design rule checking, that we now take for granted. It’s like breathing. You’ve got to do it. Layout versus schematic, you’ve got to do it. Even circuit-level simulation. As CMOS veterans, you’d think, of course, you always simulate your circuit before you go to manufacture, but that’s not the case in photonics.

Lamant: Those people actually know photonics, but they don't know how to create a system. This is a different type of challenge. People who know photonics know how to make a device. They're experts at that. But they have no idea how to take that device and bring it to a full system that they can sell. I see that in so many startups. I'm not saying that just to make the case for EDA software. They use free software. They use KLayout and all the other tools they have access to at the university. But those tools are not part of the ecosystem of trying to make a system. They say, 'We wrote a custom simulator to simulate our ring.' But the question then is, 'How do you simulate the driver that goes with your ring?' I see many startups fail because they don't have the ability to take it from academic thinking to production.

You have the electronics people trying to do photonics. They have some methodology background and other things, but they have a gap in knowledge. Fortunately, they can get caught up, especially if they're analog or RF designers. They can close that gap by talking to the right people. Unfortunately, the people who know photonics do not have the knowledge of how to make a full system out of it, and this is greatly hurting the photonics world.

Pond: I would agree. We have two worlds of engineers who have been coming together over the last decade or so. Those who came from an EDA background — electrical circuit design, especially RF — have probably had the easiest time. We’ve been doing better and better for them. Ten years ago there was nothing. Now, there’s a more traditional workflow that looks more like an EDA workflow. Still, they have a lot to learn. But the workflow, the cockpit, and so on, follows along with the EDA model.

In the other direction, maybe we haven’t done quite as good a job because people coming from a photonics background can be really thrown off by the scale and complexity of EDA tools. My impression, coming from photonics, is EDA tools have been developed over many decades. When that happens, you end up with tools that are incredibly powerful, but you wonder if they’d been developed more recently, maybe things wouldn’t be done this way. There’s a resistance on the photonic engineer side to dive into that world because there’s a lot to learn about the EDA workflows. People from photonics have to embrace and take on that EDA world, because, as Gilles says, it’s necessary, it really has to be done.

Heins:  Now, you’re seeing a ton of work going into how to apply AI to help folks bring these kinds of more complex flows under control. There’s so much to learn, but if AI can help you take care of the plumbing, if you will, you can advance much faster. We already extensively use AI for SoCs or packaged designs where you have tens and hundreds of billions of transistors. Photonics is a different vector. The signal itself is much more complex than electrical. The optimization that you have to go through is much more complex. But AI can help get a handle on that, so as we go forward, you’ll start to see these kinds of complexities simplified for people.

SE: Is there something analogous to error correction/parity checks in the photonics world?

Lamant: That can’t be analogized to photonics, because that’s about knowing the original signal and comparing it to the others. Once you have reconstituted your data, and it’s back to being a digital set of bits, then you have a parity check or different types of things that today have nothing to do with photonics because it’s the physical link. In physical links, you can do retiming or a lot of things, but the error correction happens independently, on both sides.

Heins: Tuning might be something closer to it. If my resonance frequencies are not as expected, can I detect that and then adjust for it? That happens a lot. You could think of those kinds of things as error correction.
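
As a rough illustration of the kind of tuning Heins describes, a closed-loop heater can pull a ring's resonance onto the channel wavelength. This is a minimal sketch under an assumed first-order plant model; the gain, thermal tuning efficiency, and wavelengths are invented for illustration, not values from the discussion.

```python
# Minimal sketch of closed-loop ring tuning: a heater nudges the resonance
# toward the laser wavelength by driving the monitored error to zero.
# Plant model and numbers are illustrative assumptions.

TARGET_NM = 1550.00          # laser / channel wavelength
KP = 0.6                     # proportional gain (fraction of error removed per step)
SHIFT_PER_MW = 0.25          # assumed thermal tuning efficiency, nm/mW

def tune(initial_resonance_nm, steps=20):
    resonance = initial_resonance_nm
    heater_mw = 0.0
    for _ in range(steps):
        error = TARGET_NM - resonance            # detected via a drop-port monitor
        heater_mw += KP * error / SHIFT_PER_MW   # adjust heater power
        resonance = initial_resonance_nm + SHIFT_PER_MW * heater_mw
    return resonance, heater_mw

print(tune(1549.20))  # ring fabricated 0.8 nm low; loop pulls it onto target
```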

Pond: Most of the kind of error correction we’re talking about is just using all the standard methods, whether you have an optical link or a copper link. But there are some really interesting things. We had a workflow, developed between Ansys and Cadence a few years ago on a PAM-4 system, where we did a driver simulation and the photonic link together. You look into shifting the timing of signals to compensate for different effects. If you look at the eye at different locations, it may look completely distorted and wrong, because you’re pre-compensating for an effect that’s going to come later through the photonic portion of the link. That’s one of the reasons why it’s important to be able to do the full system simulation. You can’t just independently optimize the driver electronics and the photonics. They have to be done together, so you can perform the signal correction work.

Heins: You do things like equalization. Dispersion is another one. You get different wavelengths traveling at different speeds, and we compensate for that. At the physical level, there are some corrections that do take place, depending on the kind of system you're trying to make. In coherent systems, where path lengths matter and phase matters, it's more about trying to make the circuit correct by construction, so that you don't encounter problems.

That raises another issue, which is manufacturing variances. There, you’re back to doing lots of sensitivity analysis through Monte Carlo-type simulations, parameterized simulations, etc., where you’re trying to get a feel for the sensitivity of your device, to a shift that could occur, either through the manufacturing process or just as this system sits in its ecosystem of whatever’s around it. It’s not quite error correction, per se, but certainly trying to design for that is something we care about.
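
A minimal Monte Carlo sensitivity sketch of the kind Heins mentions might look like the following, assuming a first-order mapping from waveguide-width error to effective index and then to resonance shift. Every number here is an assumed placeholder, not a value from the conversation or any foundry PDK.

```python
# Monte Carlo sensitivity sketch: resonance-wavelength shift of a ring
# resonator under manufacturing variation. All parameters are assumptions.
import random

LAMBDA0_NM   = 1550.0    # nominal resonance wavelength
N_GROUP      = 4.2       # assumed group index of the waveguide
DNEFF_PER_NM = 2e-3      # assumed d(n_eff)/d(width), per nm of width error
WIDTH_SIGMA  = 5.0       # assumed 1-sigma waveguide width variation, nm

def resonance_shift_nm(width_error_nm):
    """First-order shift: dlambda = lambda0 * dn_eff / n_group."""
    dn_eff = DNEFF_PER_NM * width_error_nm
    return LAMBDA0_NM * dn_eff / N_GROUP

shifts = [resonance_shift_nm(random.gauss(0.0, WIDTH_SIGMA)) for _ in range(10_000)]
mean = sum(shifts) / len(shifts)
std = (sum((s - mean) ** 2 for s in shifts) / len(shifts)) ** 0.5
print(f"resonance shift: mean {mean:+.3f} nm, sigma {std:.3f} nm")
```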

SE: Any concluding thoughts?

Lamant: There is a lot of wondering and pondering right now, but it's also exciting. We've reached the point where photonics is here to stay and will be part of more and more things. Looking forward, the interesting question is where it will become part of the actual data processing. Sensing is a terrific application for photonics, but I am not totally sold on the actual data processing. I'm not even using the word "computing" here, because processing and computing are very different things. Photonics is probably never going to be doing general computing. It may do specialized niche processing, like a Fourier transform type of processing, and it needs to be part of a system.

Heins: It comes down to two things. What will really happen with quantum computing? And will quantum computing use photonics? A lot of people are looking at photonics for quantum computing because you can do a lot more of that work at room temperature than at 4 Kelvin or something like that — not all of it, but big chunks of it. If quantum computing actually becomes more than prototypes, and photonics is a big part of that, that could shift the answer. The other big issue in compute is we don’t have memory for photonics. If someone makes a breakthrough where suddenly states can be stored in some fashion, then all bets are off and everything changes again. But at this point, I don’t see anything promising.

One of the biggest challenges we have going forward for the whole ecosystem, in general, is lack of standards in this space, which makes interoperability between tools from our companies very difficult. The signal in photonics is very complex. It’s actually complex math, with real and imaginary parts. There are a lot of extra things that we have to take into account, and a lot of times we don’t even have common nomenclature or agreement on metrics and how to measure things. This is going to take time, but it’s being pushed by customers driving us to work together. For example, chiplets are great for photonics because a photonic IC is a chiplet. But all of a sudden, now you’re in a mixed domain, multi-physics type of environment, and there are some huge challenges to make that all work together. We have a pretty good handle on system functional verification, design-for-test, and all these things in the electronic IC world. In photonics, we’ve got a lot of work to do.

Pond: For me, it’s been exciting. I’ve been doing this for more than 20 years. In 2022, when I saw the first product with fibers actually coming out of the package, that was the dream from 20 years back. It took a lot of effort to get there. Things have been maturing very fast, especially in the last decade. That’s really promising from an EDA/EPDA-type of workflow perspective. The datacoms, as we’ve all said, are proven and not going to go away, given the investment from foundries, which is going to continue and even accelerate. It’s exciting times for all these other applications, from sensing to quantum and so on. There’s a lot of innovation possible. It’s not clear what’s going to be a winner yet and what’s not, but it’s a great time to be in photonics.

Read parts one and two of the discussion:
Photonics: The Former And Future Solution
Twenty-five years ago, photonics was supposed to be the future of high technology. Has that future finally arrived?
The Challenges Of Working With Photonics
From curvilinear designs to thermal vulnerabilities, what engineers need to know about the advantages and disadvantages of photonics.

The post Design Considerations In Photonics appeared first on Semiconductor Engineering.

What is CMOS 2.0?

By Samuel K. Moore, IEEE Spectrum

26 February 2024, 17:00


CMOS, the silicon logic technology behind decades and decades of smaller transistors and faster computers, is entering a new phase. CMOS uses two types of transistors in pairs to limit a circuit’s power consumption. In this new phase, “CMOS 2.0,” that part’s not going to change, but how processors and other complex CMOS chips are made will. Julien Ryckaert, vice president of logic technologies at Imec, the Belgium-based nanotechnology research center, told IEEE Spectrum where things are headed.

Julien Ryckaert is vice president of logic technologies at Imec, in Belgium, where he’s been involved in exploring new technologies for 3D chips, among other topics.

Why is CMOS entering a new phase?

Julien Ryckaert: CMOS was the technology answer to build microprocessors in the 1960s. Making things smaller—transistors and interconnects—to make them better worked for 60, 70 years. But that has started to break down.

Why has CMOS scaling been breaking down?

Ryckaert: Over the years, people have made system-on-chips (SoCs)—such as CPUs and GPUs—more and more complex. That is, they have integrated more and more operations onto the same silicon die. That makes sense, because it is so much more efficient to move data on a silicon die than to move it from chip to chip in a computer.

For a long time, the scaling down of CMOS transistors and interconnects made all those operations work better. But now, it’s starting to be difficult to build the whole SoC, to make all of it better by just scaling the device and the interconnect. For example, SRAM [the system’s cache memory] no longer scales as well as logic.

What’s the solution?

Ryckaert: Seeing that something different needs to happen, we at Imec asked: Why do we scale? At the end of the day, Moore’s law is not about delivering smaller transistors and interconnects, it’s about achieving more functionality per unit area.

So what you are starting to see is breaking out certain functions, such as logic and SRAM, building them on separate chiplets using technologies that give each the best advantage, and then reintegrating them using advanced 3D packaging technologies. You can connect two functions that are built on the different substrates and achieve an efficiency in communication between those two functions that is competitive with how efficient they were when the two functions were on the same substrate. This is an evolution to what we call smart disintegration, or system technology co-optimization.

So is that CMOS 2.0?

Ryckaert: What we’re doing in CMOS 2.0 is pushing that idea further, with much finer-grained disintegration of functions and stacking of many more dies. A first sign of CMOS 2.0 is the imminent arrival of backside-power-delivery networks. On chips today, all interconnects—both those carrying data and those delivering power—are on the front side of the silicon [above the transistors]. Those two types of interconnect have different functions and different requirements, but they have had to exist in a compromise until now. Backside power moves the power-delivery interconnects to beneath the silicon, essentially turning the die into an active transistor layer which is sandwiched between two interconnect stacks, each stack having a different functionality.

Will transistors and interconnects still have to keep scaling in CMOS 2.0?

Ryckaert: Yes, because somewhere in that stack, you will still have a layer that needs more transistors per unit area. But now, because you have removed all the other constraints it once had, you are letting that layer scale nicely with the technology that is perfectly suited for it. I see fascinating times ahead.

This article appears in the March print issue as “5 Questions for Julien Ryckaert.”
