FreshRSS


Nvidia Conquers Latest AI Tests
By Samuel K. Moore, IEEE Spectrum
12 June 2024, 17:00


For years, Nvidia has dominated many machine learning benchmarks, and now there are two more notches in its belt.

MLPerf, the AI benchmarking suite sometimes called “the Olympics of machine learning,” has released a new set of training tests to help make more and better apples-to-apples comparisons between competing computer systems. One of MLPerf’s new tests concerns fine-tuning of large language models, a process that takes an existing trained model and trains it a bit more with specialized knowledge to make it fit for a particular purpose. The other is for graph neural networks, a type of machine learning behind some literature databases, fraud detection in financial systems, and social networks.

Even with the additions and the participation of computers using Google’s and Intel’s AI accelerators, systems powered by Nvidia’s Hopper architecture dominated the results once again. One system that included 11,616 Nvidia H100 GPUs—the largest collection yet—topped each of the nine benchmarks, setting records in five of them (including the two new benchmarks).

“If you just throw hardware at the problem, it’s not a given that you’re going to improve.” —Dave Salvator, Nvidia

The 11,616-H100 system is “the biggest we’ve ever done,” says Dave Salvator, director of accelerated computing products at Nvidia. It smashed through the GPT-3 training trial in less than 3.5 minutes. A 512-GPU system, for comparison, took about 51 minutes. (Note that the GPT-3 task is not a full training, which could take weeks and cost millions of dollars. Instead, the computers train on a representative portion of the data, at an agreed-upon point well before completion.)

Compared to Nvidia’s largest GPT-3 entrant last year, a 3,584-GPU H100 computer, the 3.5-minute result represents a 3.2-fold improvement. You might expect that just from the difference in the size of these systems, but in AI computing that isn’t always the case, explains Salvator. “If you just throw hardware at the problem, it’s not a given that you’re going to improve,” he says.

“We are getting essentially linear scaling,” says Salvator. By that he means that twice as many GPUs lead to a halved training time. “[That] represents a great achievement from our engineering teams,” he adds.

Competitors are also getting closer to linear scaling. This round, Intel deployed a system using 1,024 GPUs that performed the GPT-3 task in 67 minutes, versus a computer one-fourth that size that took 224 minutes six months ago. Google’s largest GPT-3 entry used 12 times as many TPU v5p accelerators as its smallest entry and performed its task nine times as fast.
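A back-of-the-envelope way to read those figures is to divide each achieved speedup by the ideal, perfectly linear speedup, which is just the ratio of accelerator counts. The short Python sketch below does that using only the numbers quoted above; the rounding and labels are mine.

```python
# Back-of-the-envelope scaling efficiency from the figures quoted above:
# ideal (perfectly linear) speedup is the ratio of accelerator counts,
# and efficiency is the achieved speedup divided by that ideal.
def scaling_efficiency(achieved_speedup, accelerator_ratio):
    return achieved_speedup / accelerator_ratio

print(f"Nvidia (3,584 -> 11,616 H100s):   {scaling_efficiency(3.2, 11_616 / 3_584):.0%}")
print(f"Intel (1/4 size -> 1,024 chips):  {scaling_efficiency(224 / 67, 4):.0%}")
print(f"Google (smallest -> 12x TPU v5p): {scaling_efficiency(9, 12):.0%}")
```

By this crude measure, Nvidia’s big system scales at roughly 99 percent of ideal, with Intel around 84 percent and Google around 75 percent, consistent with the ordering described in the text.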

Linear scaling is going to be particularly important for upcoming “AI factories” housing 100,000 GPUs or more, Salvator says. He says to expect one such data center to come online this year, and another, using Nvidia’s next architecture, Blackwell, to start up in 2025.

Nvidia’s streak continues

Nvidia continued to boost training times despite using the same architecture, Hopper, as it did in last year’s training results. That’s all down to software improvements, says Salvator. “Typically, we’ll get a 2-2.5x [boost] from software after a new architecture is released,” he says.

For GPT-3 training, Nvidia logged a 27 percent improvement from the June 2023 MLPerf benchmarks. Salvator says there were several software changes behind the boost. For example, Nvidia engineers tuned up Hopper’s use of less accurate, 8-bit floating point operations by trimming unnecessary conversions between 8-bit and 16-bit numbers and by better targeting which layers of a neural network could use the lower-precision number format. They also found a more intelligent way to adjust the power budget of each chip’s compute engines, and sped up communication among GPUs in a way that Salvator likened to “buttering your toast while it’s still in the toaster.”

Additionally, the company implemented a scheme called flash attention. Invented in the Stanford University laboratory of SambaNova founder Chris Ré, flash attention is an algorithm that speeds transformer networks by minimizing writes to memory. When it first showed up in MLPerf benchmarks, flash attention shaved as much as 10 percent from training times. (Intel, too, used a version of flash attention, but not for GPT-3. It instead used the algorithm for one of the new benchmarks, fine-tuning.)
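To make the memory-saving idea concrete, here is a minimal NumPy sketch (not Nvidia’s or Intel’s implementation) of the streaming, block-by-block computation flash attention is built on: the full attention matrix is never materialized, only a running maximum, running sum, and running output per query row.

```python
import numpy as np

def naive_attention(Q, K, V):
    # Reference version: materializes the full (n x n) score matrix.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def blockwise_attention(Q, K, V, block=64):
    # Flash-attention-style streaming: visit keys/values one block at a time,
    # keeping only per-query running max, running sum, and running output.
    n, d = Q.shape
    out = np.zeros_like(Q)
    running_max = np.full(n, -np.inf)
    running_sum = np.zeros(n)
    for start in range(0, K.shape[0], block):
        Kb, Vb = K[start:start + block], V[start:start + block]
        scores = Q @ Kb.T / np.sqrt(d)              # (n, block) partial scores
        new_max = np.maximum(running_max, scores.max(axis=-1))
        rescale = np.exp(running_max - new_max)     # re-normalize earlier partial sums
        exp_scores = np.exp(scores - new_max[:, None])
        running_sum = running_sum * rescale + exp_scores.sum(axis=-1)
        out = out * rescale[:, None] + exp_scores @ Vb
        running_max = new_max
    return out / running_sum[:, None]

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((256, 32)) for _ in range(3))
assert np.allclose(naive_attention(Q, K, V), blockwise_attention(Q, K, V))
```

On real hardware the gain comes from keeping those running statistics in fast on-chip memory instead of writing the full score matrix out to slower memory.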

Using other software and network tricks, Nvidia delivered an 80 percent speedup in the text-to-image test, Stable Diffusion, versus its submission in November 2023.

New benchmarks

MLPerf adds new benchmarks and upgrades old ones to stay relevant to what’s happening in the AI industry. This year saw the addition of fine-tuning and graph neural networks.

Fine-tuning takes an already trained LLM and specializes it for use in a particular field. Nvidia, for example, took a trained 43-billion-parameter model and trained it on the GPU-maker’s design files and documentation to create ChipNeMo, an AI intended to boost the productivity of its chip designers. At the time, the company’s chief technology officer, Bill Dally, said that training an LLM was like giving it a liberal arts education, and fine-tuning was like sending it to graduate school.

The MLPerf benchmark takes a pretrained Llama-2-70B model and asks the system to fine-tune it using a dataset of government documents, with the goal of generating more accurate document summaries.

There are several ways to do fine-tuning. MLPerf chose one called low-rank adaptation (LoRA). The method winds up training only a small portion of the LLM’s parameters, leading to a 3-fold lower burden on hardware and reduced use of memory and storage versus other methods, according to the organization.
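A minimal PyTorch sketch of the low-rank-adaptation idea (not the MLPerf reference code) is below: the pretrained weight matrix is frozen, and only a small low-rank correction is trained. The layer size and rank are arbitrary assumptions, chosen to show how few parameters end up trainable.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen pretrained linear layer plus a trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                 # pretrained weights stay fixed
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # Output = base(x) + scaled low-rank correction (x @ A^T @ B^T).
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(4096, 4096), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable share of parameters: {trainable / total:.2%}")   # well under 1%
```

Only the A and B matrices receive gradients, which is what cuts the memory and storage footprint relative to full fine-tuning.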

The other new benchmark involved a graph neural network (GNN). These are for problems that can be represented by a very large set of interconnected nodes, such as a social network or a recommender system. Compared to other AI tasks, GNNs require a lot of communication between nodes in a computer.

The benchmark uses a database that captures the relationships among academic authors, papers, and institutes—a graph with 547 million nodes and 5.8 billion edges. The GNN is trained to predict the correct label for each node in the graph.
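As a toy illustration of what a GNN layer computes, and why it is communication-heavy (every node must gather its neighbors’ features), here is a minimal NumPy sketch of one mean-aggregation message-passing step followed by node classification. The graph, feature size, and weights are made up for the example.

```python
import numpy as np

def gnn_layer(features, edges, weight):
    """One message-passing step: average neighbor features, then transform."""
    n = features.shape[0]
    agg = np.zeros_like(features)
    deg = np.zeros(n)
    for src, dst in edges:                 # each node gathers its neighbors' features
        agg[dst] += features[src]
        deg[dst] += 1
    agg[deg > 0] /= deg[deg > 0][:, None]
    return np.maximum(agg @ weight, 0.0)   # linear transform + ReLU

# Toy graph: 4 nodes, 8-dimensional features, 3 candidate node labels.
rng = np.random.default_rng(1)
features = rng.standard_normal((4, 8))
edges = [(0, 1), (2, 1), (3, 2), (1, 0), (2, 3)]
logits = gnn_layer(features, edges, rng.standard_normal((8, 3)))
print(logits.argmax(axis=1))               # predicted label for each node
```

At MLPerf scale, the same gather step runs over hundreds of millions of nodes, which is why GNN training stresses communication between parts of the machine.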

Future fights

Training rounds in 2025 may see head-to-head contests comparing new accelerators from AMD, Intel, and Nvidia. AMD’s MI300 series launched about six months ago; a memory-boosted upgrade, the MI325X, is planned for the end of 2024, with the next-generation MI350 slated for 2025. Intel says its Gaudi 3, generally available to computer makers later this year, will appear in MLPerf’s upcoming inferencing benchmarks. Intel executives have said the new chip has the capacity to beat the H100 at training LLMs. But the victory may be short-lived, as Nvidia has unveiled a new architecture, Blackwell, which is planned for late this year.


Optimize Power For RF/μW Hybrid And Digital Phased Arrays
By Kimia Azad, Semiconductor Engineering
9 May 2024, 09:03

Field-programmable gate arrays (FPGAs) are a critical component of both digital and hybrid phased array technology. Powering FPGAs for aerospace and defense applications comes with its own set of challenges, especially because these applications require higher reliability than many industrial or consumer technologies.

This blog post will provide a brief history on beamforming and beam-steering technologies for defense applications, a guide on powering defense and space FPGAs, and a reflection on the future of A&D communications systems.

Background

Active electronically scanned array (AESA) and phased array systems are more affordable and available than ever, notably in fully digital and hybrid configurations. These systems can cover a wide RF/μW frequency spectrum and are suitable for use in radar and other military communications systems. AESA and phased array systems also boast advanced beamforming and beam-steering capabilities.

The emergence of 5G communications and commercial space data communications, coupled with advancements in semiconductor efficiency, has enabled companies to deploy innovative phased-array designs. However, efficiently powering these systems, given the watts they consume and dissipate as well as size and geometry constraints, can complicate development for designers.

Recent developments

Although digital phased array technologies provide improved performance across a large part of the RF/μW frequency spectrum, their deployment is hindered by several factors. Cost barriers, power consumption, thermal constraints, latency concerns, and efficiency losses in the amplification and gain stages are key hurdles.

Hybrid phased arrays offer a way to meet system-level requirements without compromising on cost or power. The technology consumes less power and eases thermal concerns, paving the way for cost-saving solutions. The “digitizers” of a hybrid phased array, such as FPGAs, are fewer in number and sit farther from the antenna.

Powering FPGAs

FPGAs are valuable for their ability to perform extremely fast calculations in support of signal isolation, fast Fourier transforms (FFTs), I/Q data extraction, and the functions that form and steer radio-frequency beams. However, a key challenge in working with FPGAs lies in the necessity for consistent, sequenced power delivery.
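As a purely illustrative aside, the NumPy sketch below shows the kind of per-element math behind digital beam steering for a uniform linear array: apply a progressive phase shift across the elements and the array’s response peaks in the commanded direction. The element count, spacing, and steering angle are arbitrary assumptions, not values from the post.

```python
import numpy as np

# Digital beam steering for a uniform linear array (illustrative values only).
n_elements = 8
spacing = 0.5                  # element spacing in wavelengths (lambda/2)
steer_deg = 20.0               # commanded beam direction off boresight

n = np.arange(n_elements)
# Progressive phase shift per element points the beam at steer_deg.
weights = np.exp(-2j * np.pi * spacing * n * np.sin(np.radians(steer_deg)))

# Array factor: the array's combined response versus angle of arrival.
angles = np.radians(np.linspace(-90, 90, 721))
steering_vectors = np.exp(2j * np.pi * spacing * np.outer(n, np.sin(angles)))
pattern = np.abs(weights @ steering_vectors) / n_elements

print(f"beam peaks at {np.degrees(angles[pattern.argmax()]):.1f} degrees")  # ~20.0
```

In a digital or hybrid array, the FPGA computes and applies such complex weights, along with the FFTs and I/Q processing around them, in real time.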

Conclusion and future considerations

The complexities and challenges of deploying digital assets should not overshadow industry advancements in powering FPGAs within phased array systems. With a focused approach, thermal management challenges can be mitigated and performance optimized. Existing power solutions provide a strong and mature foundation for the ongoing development and deployment of next-generation phased array systems in military and space applications. For a full guide to FPGA power considerations in aerospace and defense applications, take a look at our feature in Military Embedded Systems magazine.

The post Optimize Power For RF/μW Hybrid And Digital Phased Arrays appeared first on Semiconductor Engineering.


Chip Industry Week In Review
By The SE Staff, Semiconductor Engineering
3 May 2024, 09:01

Samsung and Synopsys collaborated on the first production tapeout of a high-performance mobile SoC design, including CPUs and GPUs, using the Synopsys.ai EDA suite on Samsung Foundry’s gate-all-around (GAA) process. Samsung plans to begin mass production of 2nm process GAA chips in 2025, reports BusinessKorea.

UMC developed the first radio frequency silicon on insulator (RF-SOI)-based 3D IC process for chips used in smartphones and other 5G/6G mobile devices. The process uses wafer-to-wafer bonding technology to address radio frequency interference between stacked dies and reduces die size by 45%.

Fig. 1: UMC’s 3D IC solution for RFSOI technology. Source: UMC

The first programmable chip capable of shaping, splitting, and steering beams of light is now being produced by SkyWater Technology and Lumotive. The technology is critical for advancing lidar-based systems used in robotics, automotive, and other 3D sensing applications.

Driven by demand for AI chips, SK hynix revealed it has already booked its entire production of high-bandwidth memory chips for 2024 and is nearly sold out of its production capacity for 2025, reported the Korea Times. Meanwhile, SEMI reported that silicon wafer shipments declined 13% in Q1 2024, quarter over quarter, attributed to continued weakness in IC fab utilization and inventory adjustments.

PCI-SIG published the CopprLink Internal and External Cable specifications to provide PCIe 5.0 and 6.0 signaling at 32 and 64 GT/s and leverage standard connector form factors for applications including storage, data centers, AI/ML, and disaggregated memory.

The U.S. Department of Commerce (DoC) launched the CHIPS Women in Construction Framework to boost the participation of women and economically disadvantaged people in the workforce, aiming to support on-time and successful completion of CHIPS Act-funded projects. Intel and Micron adopted the framework.

Quick links to more news:

Markets and Money
Global
In-Depth
Education and Training
Security
Product News
Quantum
Research
Events
Further Reading


Markets and Money

The SiC wafer processing equipment market is growing rapidly, reports Yole. The SiC device market will exceed $10B by 2029, growing at a CAGR of 25%, and the SiC manufacturing tool market is projected to reach $5B by 2026.

imec.xpand launched a €300 million (~$321 million) fund that will invest in semiconductor and nanotechnology startups with the potential to push semiconductor innovation beyond traditional applications and drive next-gen technologies.

Blaize raised $106 million for its programmable graph streaming processor architecture suite and low-code/no-code software platform for edge AI.

Guerrilla RF completed the acquisition of Gallium Semiconductor‘s portfolio of GaN power amplifiers and front-end modules.

About 90% of connected cars sold in 2030 will have embedded 5G capability, reported Counterpoint. Also, about 75% of laptop PCs sold in 2027 will be AI laptop PCs with advanced generative AI, and the global high-level OS (HLOS) or advanced smartwatch market is predicted to grow 15% in 2024.


Global

Powerchip Semiconductor opened a new 300mm facility in northwestern Taiwan targeting the production of AI semiconductors. The facility is expected to produce 50,000 wafers per month at 55, 40, and 28nm nodes.

Taiwan-based KYEC Semiconductor will withdraw its China operations by the third quarter due to increasing geopolitical tensions, reports the South China Morning Post.

Japan will expand its semiconductor export restrictions to China related to four technologies: Scanning electron microscopes, CMOS, FD-SOI, and the outputs of quantum computers, according to TrendForce.

IBM will invest CAD$187 million (~US$137M) in Canada’s semiconductor industry, with the bulk of the investment focused on advanced assembly, testing, and packaging operations.

Microsoft will invest US$2.2 billion over the next four years to build Malaysia’s digital infrastructure, create AI skilling opportunities, establish an AI Center of Excellence, and enhance cybersecurity.


In-Depth

New stories and tech talks published by Semiconductor Engineering this week:


Security

Infineon collaborated with ETAS to integrate the ESCRYPT CycurHSM 3.x automotive security software stack into its next-gen AURIX MCUs to optimize security, performance, and functionality.

Synopsys released Polaris Assist, an AI-powered application security assistant on its Polaris Software Integrity Platform, combining LLM technology with application security knowledge and intelligence.

In security research:

U.S. President Biden signed a National Security Memorandum to enhance the resilience of critical infrastructure, and the White House announced key actions taken since Biden’s AI Executive Order, including measures to mitigate risk.

CISA and partners published a fact sheet on pro-Russia hacktivists who seek to compromise industrial control systems and small-scale operational technology systems in North American and European critical infrastructure sectors. CISA issued other alerts including two Microsoft vulnerabilities.


Education and Training

The U.S. National Institute for Innovation and Technology (NIIT) and the Department of Labor (DoL) partnered to celebrate the inaugural Youth Apprenticeship Week on May 5 to 11, highlighting opportunities in critical industries such as semiconductors and advanced manufacturing.

SUNY Poly received an additional $4 million from New York State for its Semiconductor Processing to Packaging Research, Education, and Training Center.

The University of Pennsylvania launched an online Master of Science in Engineering in AI degree.

The American University of Armenia celebrated its 10-year collaboration with Siemens, which provides AUA’s Engineering Research Center with annual research grants.


Product News

Renesas and SEGGER Embedded Studio launched integrated code generator support for its 32-bit RISC-V MCU. 

Rambus introduced a family of DDR5 server Power Management ICs (PMICs), including an extreme current device for high-performance applications.

Fig. 2: Rambus’ server PMIC on DDR5 RDIMM. Source: Rambus

Keysight added capabilities to Inspector, part of the company’s recently acquired device security research and test lab Riscure, that are designed to test the robustness of post-quantum cryptography (PQC) and help device and chip vendors identify and fix hardware vulnerabilities. Keysight also validated new conformance test cases for narrowband IoT non-terrestrial networks standards.

Ansys’ RedHawk-SC and Totem power integrity platforms were certified for TSMC‘s N2 nanosheet-based process technology, while its RaptorX solution for on-chip electromagnetic modeling was certified for TSMC’s N5 process.

Netherlands-based athleisure brand PREMIUM INC selected CLEVR to implement Siemens’ Mendix Digital Lifecycle Management for Fashion & Retail solution.

Micron will begin shipping high-capacity DRAM for AI data centers.

Microchip uncorked radiation-tolerant SoC FPGAs for space applications that use a real-time Linux-capable RISC-V-based microprocessor subsystem.


Quantum

University of Chicago researchers developed a system to boost the efficiency of quantum error correction using a framework based on quantum low-density parity-check (qLDPC) codes and new hardware involving reconfigurable atom arrays.

PsiQuantum will receive AUD $940 million (~$620 million) in equity, grants, and loans from the Australian and Queensland governments to deploy a utility-scale quantum computer in the regime of 1 million physical qubits in Brisbane, Australia.

Japan-based RIKEN will co-locate IBM’s Quantum System Two with its Fugaku supercomputer for integrated quantum-classical workflows in a heterogeneous quantum-HPC hybrid computing environment. Fugaku is currently one of the world’s most powerful supercomputers.

QuEra Computing was awarded a ¥6.5 billion (~$41 million) contract by Japan’s National Institute of Advanced Industrial Science and Technology (AIST) to deliver a gate-based neutral-atom quantum computer alongside AIST’s ABCI-Q supercomputer as part of a quantum-classical computing platform.

Novo Holdings, the controlling stakeholder of pharmaceutical company Novo Nordisk, plans to boost the quantum technology startup ecosystem in Denmark with DKK 1.4 billion (~$201 million) in investments.

The University of Sydney received AUD $18.4 million (~$12 million) from the Australian government to help grow the quantum industry and ecosystem.

The European Commission plans to spend €112 million (~$120 million) to support AI and quantum research and innovation.


Research

Intel researchers developed a 300-millimeter cryogenic probing process to collect high-volume data on the performance of silicon spin qubit devices across whole wafers using CMOS manufacturing techniques.

EPFL researchers used a form of ML called deep reinforcement learning (DRL) to train a four-legged robot to avoid falls by switching between walking, trotting, and pronking.

University of Cambridge researchers developed tiny, flexible nerve cuff devices that can wrap around individual nerve fibers without damaging them, which could be useful for treating a range of neurological disorders.

Argonne National Laboratory and Toyota are exploring a direct recycling approach that carefully extracts components from spent batteries. Argonne is also working with Talon Metals on a process that could increase the number of EV batteries produced from mined nickel ore.


Events

Find upcoming chip industry events here, including:

Event | Date | Location
IEEE International Symposium on Hardware Oriented Security and Trust (HOST) | May 6 – 9 | Washington DC
MRS Spring Meeting & Exhibit | May 7 – 9 | Virtual
ASMC: Advanced Semiconductor Manufacturing Conference | May 13 – 16 | Albany, NY
ISES Taiwan 2024: International Semiconductor Executive Summit | May 14 – 15 | New Taipei City
Ansys Simulation World 2024 | May 14 – 16 | Online
NI Connect Austin 2024 | May 20 – 22 | Austin, Texas
ITF World 2024 (imec) | May 21 – 22 | Antwerp, Belgium
Embedded Vision Summit | May 21 – 23 | Santa Clara, CA
ASIP Virtual Seminar 2024 | May 22 | Online
Electronic Components and Technology Conference (ECTC) 2024 | May 28 – 31 | Denver, Colorado
Hardwear.io Security Trainings and Conference USA 2024 | May 28 – Jun 1 | Santa Clara, CA
Find All Upcoming Events Here

Upcoming webinars are here.


Further Reading

Read the latest special reports and top stories, or check out the latest newsletters:

Systems and Design
Low Power-High Performance
Test, Measurement and Analytics
Manufacturing, Packaging and Materials
Automotive, Security and Pervasive Computing

The post Chip Industry Week In Review appeared first on Semiconductor Engineering.


Design Considerations In Photonics
By Karen Heyman, Semiconductor Engineering
1 May 2024, 09:05

Experts at the Table: Semiconductor Engineering sat down to talk about what CMOS and photonics engineers need to know to successfully collaborate, with James Pond, fellow at Ansys; Gilles Lamant, distinguished engineer at Cadence; and Mitch Heins, business development manager for photonic solutions at Synopsys. What follows are excerpts of that conversation. To view part one of this discussion, click here. Part two is here.


L-R: Ansys’s Pond, Cadence’s Lamant, Synopsys’ Heins

SE: What do engineers who have spent their careers in CMOS need to know about designing for photonics?

Lamant:  It’s hard, no illusion. I had good mentors, including both James and Mitch, so I actually did that transition. Ten years ago, I knew nearly nothing about photonics. It takes having good mentors who can help you. That’s the biggest thing. It’s not enough to just try the software on your own. In addition, having an RF background is very useful in many ways. Photonics is the multiplication of RF. In photonics, you have multiple modes. In RF, you tend to only consider one mode, but a lot of the theory behind photonics is very much a generalization of RF.

Heins: We try to make our photonics flow look as much as we can like our electronics flow. We try to take the last 30 to 40 years of learning in EDA and apply it to photonics. One thing we see a lot is that when people are coming right out of school in photonics, they don’t necessarily have a deep background in how to do IC design. There are a lot of things we’ve learned, like design rule checking, that we now take for granted. It’s like breathing. You’ve got to do it. Layout versus schematic, you’ve got to do it. Even circuit-level simulation. As CMOS veterans, you’d think, of course, you always simulate your circuit before you go to manufacture, but that’s not the case in photonics.

Lamant: Those people actually know photonics, but they don’t know how to create a system. This is a different type of challenge. People who know photonics, know how to make a device. They’re expert at that. But they have no idea how to take that device and bring it to a full system that they can sell. I see that in so many startups. It’s not to make the point for EDA software. They use free software. They use Klayout and all those things that they have access to in the university. But all of those tools are not part of the ecosystem of trying to make a system. They say, ‘We wrote a custom simulator to simulate our ring.’ But the question then is, ‘How do you simulate the driver for your ring that goes with it?’ I see many startups fail because they don’t have that ability to take it from academic thinking to production.

You have the electronics people trying to do photonics, they have some methodology background, and other things, but they have a gap in knowledge. Fortunately, they can get caught up, especially if they’re an analog designer or an RF designer. They can close that gap by talking to the right people. Unfortunately, the people who know photonics do not have the knowledge of how to make a full system out of it, and this is greatly hurting the photonics world.

Pond: I would agree. We have two worlds of engineers who have been coming together over the last decade or so. Those who came from an EDA background — electrical circuit design, especially RF — have probably had the easiest time. We’ve been doing better and better for them. Ten years ago there was nothing. Now, there’s a more traditional workflow that looks more like an EDA workflow. Still, they have a lot to learn. But the workflow, the cockpit, and so on, follows along with the EDA model.

In the other direction, maybe we haven’t done quite as good a job because people coming from a photonics background can be really thrown off by the scale and complexity of EDA tools. My impression, coming from photonics, is EDA tools have been developed over many decades. When that happens, you end up with tools that are incredibly powerful, but you wonder if they’d been developed more recently, maybe things wouldn’t be done this way. There’s a resistance on the photonic engineer side to dive into that world because there’s a lot to learn about the EDA workflows. People from photonics have to embrace and take on that EDA world, because, as Gilles says, it’s necessary, it really has to be done.

Heins:  Now, you’re seeing a ton of work going into how to apply AI to help folks bring these kinds of more complex flows under control. There’s so much to learn, but if AI can help you take care of the plumbing, if you will, you can advance much faster. We already extensively use AI for SoCs or packaged designs where you have tens and hundreds of billions of transistors. Photonics is a different vector. The signal itself is much more complex than electrical. The optimization that you have to go through is much more complex. But AI can help get a handle on that, so as we go forward, you’ll start to see these kinds of complexities simplified for people.

SE: Is there something analogous to error correction/parity checks in the photonics world?

Lamant: That can’t be analogized to photonics, because that’s about knowing the original signal and comparing it to the others. Once you have reconstituted your data, and it’s back to being a digital set of bits, then you have a parity check or different types of things that today have nothing to do with photonics because it’s the physical link. In physical links, you can do retiming or a lot of things, but the error correction happens independently, on both sides.

Heins: Tuning might be something closer to it. If my resonance frequencies are not as expected, can I detect that and then adjust for it? That happens a lot. You could think of those kinds of things as error correction.

Pond: Most of the kind of error correction we’re talking about is just using all the standard methods, whether you have an optical link or a copper link. But there are some really interesting things. We had a workflow, developed between Ansys and Cadence a few years ago on a PAM-4 system, where we did a driver simulation and the photonic link together. You look into shifting the timing of signals to compensate for different effects. If you look at the eye at different locations, it may look completely distorted and wrong, because you’re pre-compensating for an effect that’s going to come later through the photonic portion of the link. That’s one of the reasons why it’s important to be able to do the full system simulation. You can’t just independently optimize the driver electronics and the photonics. They have to be done together, so you can perform the signal correction work.
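(A minimal NumPy sketch of the pre-compensation idea Pond describes, not the Ansys/Cadence workflow itself: the transmit waveform is shaped with an approximate inverse of a known channel, so it looks distorted going in but arrives clean after the link. The PAM-4 levels, one-tap ISI channel, and tap count below are arbitrary assumptions.)

```python
import numpy as np

# Toy pre-compensation sketch (illustrative values, not a real photonic link model).
rng = np.random.default_rng(0)
symbols = rng.choice([-3.0, -1.0, 1.0, 3.0], size=200)   # PAM-4 levels

channel = np.array([1.0, 0.5])                           # main tap plus one ISI tap
inverse = np.array([(-0.5) ** k for k in range(12)])     # truncated 1/(1 + 0.5 z^-1)

def through_channel(tx):
    # Apply the channel and keep one output sample per transmitted symbol.
    return np.convolve(tx, channel)[: len(symbols)]

received_plain = through_channel(symbols)
pre_tx = np.convolve(symbols, inverse)[: len(symbols)]   # "distorted" transmit waveform
received_precomp = through_channel(pre_tx)

def rms_error(rx):
    return np.sqrt(np.mean((rx - symbols) ** 2))

print(f"RMS error, no pre-compensation:   {rms_error(received_plain):.3f}")
print(f"RMS error, with pre-compensation: {rms_error(received_precomp):.3f}")
```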

Heins: You do things like equalization. Dispersion is another one. You get different wavelengths traveling at different speeds, and we compensate for that. At the physical level, there are some corrections that do take place, depending on the kind of system you’re trying to make. If you’re in coherent systems, where path lengths matter and phase matters, that’s more like trying to make the circuit correct by construction, so that you don’t encounter problems.

That raises another issue, which is manufacturing variances. There, you’re back to doing lots of sensitivity analysis through Monte Carlo-type simulations, parameterized simulations, etc., where you’re trying to get a feel for the sensitivity of your device, to a shift that could occur, either through the manufacturing process or just as this system sits in its ecosystem of whatever’s around it. It’s not quite error correction, per se, but certainly trying to design for that is something we care about.

SE: Any concluding thoughts?

Lamant: There is a lot of wondering and pondering right now, but it’s also exciting. We’ve reached the point where photonics is here to stay and will be part of more and more things. Looking forward, the interesting question is where it will become part of the actual data processing. Sensing is a terrific application for photonics, but I am not totally sold on the actual data processing. I’m not even using the word “computing” here, because processing and computing are very different things. Photonics is probably never going to be doing general computing. It may handle specialized niches, like Fourier transform-type processing, and it needs to be part of a system.

Heins: It comes down to two things. What will really happen with quantum computing? And will quantum computing use photonics? A lot of people are looking at photonics for quantum computing because you can do a lot more of that work at room temperature than at 4 Kelvin or something like that — not all of it, but big chunks of it. If quantum computing actually becomes more than prototypes, and photonics is a big part of that, that could shift the answer. The other big issue in compute is we don’t have memory for photonics. If someone makes a breakthrough where suddenly states can be stored in some fashion, then all bets are off and everything changes again. But at this point, I don’t see anything promising.

One of the biggest challenges we have going forward for the whole ecosystem, in general, is lack of standards in this space, which makes interoperability between tools from our companies very difficult. The signal in photonics is very complex. It’s actually complex math, with real and imaginary parts. There are a lot of extra things that we have to take into account, and a lot of times we don’t even have common nomenclature or agreement on metrics and how to measure things. This is going to take time, but it’s being pushed by customers driving us to work together. For example, chiplets are great for photonics because a photonic IC is a chiplet. But all of a sudden, now you’re in a mixed domain, multi-physics type of environment, and there are some huge challenges to make that all work together. We have a pretty good handle on system functional verification, design-for-test, and all these things in the electronic IC world. In photonics, we’ve got a lot of work to do.

Pond: For me, it’s been exciting. I’ve been doing this for more than 20 years. In 2022, when I saw the first product with fibers actually coming out of the package, that was the dream from 20 years back. It took a lot of effort to get there. Things have been maturing very fast, especially in the last decade. That’s really promising from an EDA/EPDA-type of workflow perspective. The datacoms, as we’ve all said, are proven and not going to go away, given the investment from foundries, which is going to continue and even accelerate. It’s exciting times for all these other applications, from sensing to quantum and so on. There’s a lot of innovation possible. It’s not clear what’s going to be a winner yet and what’s not, but it’s a great time to be in photonics.

Read parts one and two of the discussion:
Photonics: The Former And Future Solution
Twenty-five years ago, photonics was supposed to be the future of high technology. Has that future finally arrived?
The Challenges Of Working With Photonics
From curvilinear designs to thermal vulnerabilities, what engineers need to know about the advantages and disadvantages of photonics

The post Design Considerations In Photonics appeared first on Semiconductor Engineering.


Get ready for Amazon's Fallout adaptation with March's Prime Gaming
By Liv Ngan, Eurogamer.net
29 February 2024, 17:56

Amazon is offering eight games to Prime Gaming subscribers in March, including Fallout 2 and Invincible Presents: Atom Eve.

The month will start out with Fallout 2 to promote Amazon's upcoming Fallout adaptation, which is set to premiere on 12th April. We got a good look at the show in December when Amazon released a teaser trailer, following numerous set leaks and official promotional photos.

If you haven't played it, Fallout 2 is one of the classic RPGs which still holds up today. In 2017, Eurogamer was able to visit Obsidian Entertainment and speak to Feargus Urquhart, Leonard Boyarsky and Tim Cain, who all previously worked on Fallout 1 and 2 whilst at Interplay Productions.
