Chip Industry Week In Review

May 3, 2024 at 09:01

Samsung and Synopsys collaborated on the first production tapeout of a high-performance mobile SoC design, including CPUs and GPUs, using the Synopsys.ai EDA suite on Samsung Foundry’s gate-all-around (GAA) process. Samsung plans to begin mass production of 2nm process GAA chips in 2025, reports BusinessKorea.

UMC developed the first radio frequency silicon on insulator (RF-SOI)-based 3D IC process for chips used in smartphones and other 5G/6G mobile devices. The process uses wafer-to-wafer bonding technology to address radio frequency interference between stacked dies and reduces die size by 45%.

Fig. 1: UMC’s 3D IC solution for RFSOI technology. Source: UMC

The first programmable chip capable of shaping, splitting, and steering beams of light is now being produced by Skywater Technology and Lumotive. The technology is critical for advancing lidar-based systems used in robotics, automotive, and other 3D sensing applications.

Driven by demand for AI chips, SK hynix revealed it has already booked its entire production of high-bandwidth memory chips for 2024 and is nearly sold out of its production capacity for 2025, reported the Korea Times. Meanwhile, SEMI reported that silicon wafer shipments declined 13% in Q1 2024, quarter over quarter, attributed to continued weakness in IC fab utilization and inventory adjustments.

PCI-SIG published the CopprLink Internal and External Cable specifications to provide PCIe 5.0 and 6.0 signaling at 32 and 64 GT/s and leverage standard connector form factors for applications including storage, data centers, AI/ML, and disaggregated memory.

The U.S. Department of Commerce (DoC) launched the CHIPS Women in Construction Framework to boost the participation of women and economically disadvantaged people in the workforce, aiming to support on-time and successful completion of CHIPS Act-funded projects. Intel and Micron adopted the framework.

Quick links to more news:

Market Reports
Global
In-Depth
Education and Training
Security
Product News
Quantum
Research
Events
Further Reading


Markets and Money

The SiC wafer processing equipment market is growing rapidly, reports Yole. The SiC device market will exceed $10B by 2029, growing at a 25% CAGR, and the SiC manufacturing tool market is projected to reach $5B by 2026.

imec.xpand launched a €300 million (~$321 million) fund that will invest in semiconductor and nanotechnology startups with the potential to push semiconductor innovation beyond traditional applications and drive next-gen technologies.

Blaize raised $106 million for its programmable graph streaming processor architecture suite and low-code/no-code software platform for edge AI.

Guerrilla RF completed the acquisition of Gallium Semiconductor‘s portfolio of GaN power amplifiers and front-end modules.

About 90% of connected cars sold in 2030 will have embedded 5G capability, reported Counterpoint. Also, about 75% of laptop PCs sold in 2027 will be AI laptop PCs with advanced generative AI, and the global high-level OS (HLOS) or advanced smartwatch market is predicted to grow 15% in 2024.


Global

Powerchip Semiconductor opened a new 300mm facility in northwestern Taiwan targeting the production of AI semiconductors. The facility is expected to produce 50,000 wafers per month at 55, 40, and 28nm nodes.

Taiwan-based KYEC Semiconductor will withdraw its China operations by the third quarter due to increasing geopolitical tensions, reports the South China Morning Post.

Japan will expand its semiconductor export restrictions to China related to four technologies: Scanning electron microscopes, CMOS, FD-SOI, and the outputs of quantum computers, according to TrendForce.

IBM will invest CAD$187 million (~US$137M) in Canada’s semiconductor industry, with the bulk of the investment focused on advanced assembly, testing, and packaging operations.

Microsoft will invest US$2.2 billion over the next four years to build Malaysia’s digital infrastructure, create AI skilling opportunities, establish an AI Center of Excellence, and enhance cybersecurity.


In-Depth

New stories and tech talks published by Semiconductor Engineering this week:


Security

Infineon collaborated with ETAS to integrate the ESCRYPT CycurHSM 3.x automotive security software stack into its next-gen AURIX MCUs to optimize security, performance, and functionality.

Synopsys released Polaris Assist, an AI-powered application security assistant on its Polaris Software Integrity Platform, combining LLM technology with application security knowledge and intelligence.

In security research:

U.S. President Biden signed a National Security Memorandum to enhance the resilience of critical infrastructure, and the White House announced key actions taken since Biden’s AI Executive Order, including measures to mitigate risk.

CISA and partners published a fact sheet on pro-Russia hacktivists who seek to compromise industrial control systems and small-scale operational technology systems in North American and European critical infrastructure sectors. CISA issued other alerts including two Microsoft vulnerabilities.


Education and Training

The U.S. National Institute for Innovation and Technology (NIIT) and the Department of Labor (DoL) partnered to celebrate the inaugural Youth Apprenticeship Week on May 5 to 11, highlighting opportunities in critical industries such as semiconductors and advanced manufacturing.

SUNY Poly received an additional $4 million from New York State for its Semiconductor Processing to Packaging Research, Education, and Training Center.

The University of Pennsylvania launched an online Master of Science in Engineering in AI degree.

The American University of Armenia celebrated its 10-year collaboration with Siemens, which provides AUA’s Engineering Research Center with annual research grants.


Product News

Renesas and SEGGER Embedded Studio launched integrated code generator support for its 32-bit RISC-V MCU. 

Rambus introduced a family of DDR5 server Power Management ICs (PMICs), including an extreme current device for high-performance applications.

Fig. 2: Rambus’ server PMIC on DDR5 RDIMM. Source: Rambus

Keysight added capabilities to Inspector, part of the company’s recently acquired device security research and test lab Riscure, that are designed to test the robustness of post-quantum cryptography (PQC) and help device and chip vendors identify and fix hardware vulnerabilities. Keysight also validated new conformance test cases for narrowband IoT non-terrestrial networks standards.

Ansys’ RedHawk-SC and Totem power integrity platforms were certified for TSMC‘s N2 nanosheet-based process technology, while its RaptorX solution for on-chip electromagnetic modeling was certified for TSMC’s N5 process.

Netherlands-based athleisure brand PREMIUM INC selected CLEVR to implement Siemens’ Mendix Digital Lifecycle Management for Fashion & Retail solution.

Micron will begin shipping high-capacity DRAM for AI data centers.

Microchip uncorked radiation-tolerant SoC FPGAs for space applications that use a real-time Linux-capable RISC-V-based microprocessor subsystem.


Quantum

University of Chicago researchers developed a system to boost the efficiency of quantum error correction using a framework based on quantum low-density parity-check (qLDPC) codes and new hardware involving reconfigurable atom arrays.

PsiQuantum will receive AUD $940 million (~$620 million) in equity, grants, and loans from the Australian and Queensland governments to deploy a utility-scale quantum computer in the regime of 1 million physical qubits in Brisbane, Australia.

Japan-based RIKEN will co-locate IBM’s Quantum System Two with its Fugaku supercomputer for integrated quantum-classical workflows in a heterogeneous quantum-HPC hybrid computing environment. Fugaku is currently one of the world’s most powerful supercomputers.

QuEra Computing was awarded a ¥6.5 billion (~$41 million) contract by Japan’s National Institute of Advanced Industrial Science and Technology (AIST) to deliver a gate-based neutral-atom quantum computer alongside AIST’s ABCI-Q supercomputer as part of a quantum-classical computing platform.

Novo Holdings, the controlling stakeholder of pharmaceutical company Novo Nordisk, plans to boost the quantum technology startup ecosystem in Denmark with DKK 1.4 billion (~$201 million) in investments.

The University of Sydney received AUD $18.4 million (~$12 million) from the Australian government to help grow the quantum industry and ecosystem.

The European Commission plans to spend €112 million (~$120 million) to support AI and quantum research and innovation.


Research

Intel researchers developed a 300-millimeter cryogenic probing process to collect high-volume data on the performance of silicon spin qubit devices across whole wafers using CMOS manufacturing techniques.

EPFL researchers used a form of ML called deep reinforcement learning (DRL) to train a four-legged robot to avoid falls by switching between walking, trotting, and pronking.

University of Cambridge researchers developed tiny, flexible nerve cuff devices that can wrap around individual nerve fibers without damaging them, which could be useful for treating a range of neurological disorders.

Argonne National Laboratory and Toyota are exploring a direct recycling approach that carefully extracts components from spent batteries. Argonne is also working with Talon Metals on a process that could increase the number of EV batteries produced from mined nickel ore.


Events

Find upcoming chip industry events here, including:

Event | Date | Location
IEEE International Symposium on Hardware Oriented Security and Trust (HOST) | May 6 – 9 | Washington DC
MRS Spring Meeting & Exhibit | May 7 – 9 | Virtual
ASMC: Advanced Semiconductor Manufacturing Conference | May 13 – 16 | Albany, NY
ISES Taiwan 2024: International Semiconductor Executive Summit | May 14 – 15 | New Taipei City
Ansys Simulation World 2024 | May 14 – 16 | Online
NI Connect Austin 2024 | May 20 – 22 | Austin, Texas
ITF World 2024 (imec) | May 21 – 22 | Antwerp, Belgium
Embedded Vision Summit | May 21 – 23 | Santa Clara, CA
ASIP Virtual Seminar 2024 | May 22 | Online
Electronic Components and Technology Conference (ECTC) 2024 | May 28 – 31 | Denver, Colorado
Hardwear.io Security Trainings and Conference USA 2024 | May 28 – Jun 1 | Santa Clara, CA
Find All Upcoming Events Here

Upcoming webinars are here.


Further Reading

Read the latest special reports and top stories, or check out the latest newsletters:

Systems and Design
Low Power-High Performance
Test, Measurement and Analytics
Manufacturing, Packaging and Materials
Automotive, Security and Pervasive Computing

The post Chip Industry Week In Review appeared first on Semiconductor Engineering.


Distributing RTL Simulation Across Thousands Of Cores On 4 IPU Sockets (EPFL)

A technical paper titled “Parendi: Thousand-Way Parallel RTL Simulation” was published by researchers at EPFL.

Abstract:

“Hardware development relies on simulations, particularly cycle-accurate RTL (Register Transfer Level) simulations, which consume significant time. As single-processor performance grows only slowly, conventional, single-threaded RTL simulation is becoming less practical for increasingly complex chips and systems. A solution is parallel RTL simulation, where ideally, simulators could run on thousands of parallel cores. However, existing simulators can only exploit tens of cores.
This paper studies the challenges inherent in running parallel RTL simulation on a multi-thousand-core machine (the Graphcore IPU, a 1472-core machine). Simulation performance requires balancing three factors: synchronization, communication, and computation. We experimentally evaluate each metric and analyze how it affects parallel simulation speed, drawing on contrasts between the large-scale IPU and smaller but faster x86 systems.
Using this analysis, we build Parendi, an RTL simulator for the IPU. It distributes RTL simulation across 5888 cores on 4 IPU sockets. Parendi runs large RTL designs up to 4x faster than a powerful, state-of-the-art x86 multicore system.”
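
The three-way balance the abstract describes (synchronization, communication, computation) can be illustrated with a deliberately tiny sketch. This is not Parendi itself: each thread owns one partition of an invented ring-of-registers "design," evaluates its logic for the current cycle, publishes its boundary value, and waits at a barrier before the next cycle.

```python
# Minimal sketch (not Parendi): cycle-accurate simulation of a toy design
# partitioned across worker threads. Each simulated cycle has a compute phase
# (evaluate next state), a communication phase (publish boundary values), and
# a synchronization phase (barrier), mirroring the three costs the paper balances.
import threading

NUM_PARTITIONS = 4
CYCLES = 8

# Toy "design": a ring of registers, one per partition; each register loads
# the value of its left neighbor plus one. Partition boundaries are where
# communication happens.
state = [0] * NUM_PARTITIONS          # committed register values (cycle t)
next_state = [0] * NUM_PARTITIONS     # values being computed for cycle t+1
barrier = threading.Barrier(NUM_PARTITIONS)

def partition_worker(pid: int) -> None:
    for _ in range(CYCLES):
        # Compute: evaluate this partition's logic from the committed state.
        left = state[(pid - 1) % NUM_PARTITIONS]
        next_state[pid] = left + 1
        # Synchronize: wait until every partition has finished evaluating.
        barrier.wait()
        # Communicate/commit: publish this partition's new register value.
        state[pid] = next_state[pid]
        # Second barrier so no partition starts the next cycle on a half-updated state.
        barrier.wait()

threads = [threading.Thread(target=partition_worker, args=(p,)) for p in range(NUM_PARTITIONS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("register values after", CYCLES, "cycles:", state)
```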

Find the technical paper here. Published March 2024 (preprint).

Emami, Mahyar, Thomas Bourgeat, and James Larus. “Parendi: Thousand-Way Parallel RTL Simulation.” arXiv preprint arXiv:2403.04714 (2024).

Related Reading
Anatomy Of A System Simulation
Balancing the benefits of a model with the costs associated with that model is tough, but it becomes even trickier when dissimilar models are combined.

The post Distributing RTL Simulation Across Thousands Of Cores On 4 IPU Sockets (EPFL) appeared first on Semiconductor Engineering.


A Micro Light-Emitting Transistor With An N-Channel GaN FET In Series With A GaN LED

A technical paper titled “Tunnel Junction-Enabled Monolithically Integrated GaN Micro-Light Emitting Transistor” was published by researchers at the Ohio State University and Sandia National Laboratory.

Abstract:

“GaN/InGaN microLEDs are a very promising technology for next generation displays. Switching control transistors and their integration are key components in achieving high-performance, efficient displays. Monolithic integration of microLEDs with GaN switching devices provides an opportunity to control microLED output power with capacitive (voltage) control rather than current controlled schemes. This approach can greatly reduce system complexity for the driver circuit arrays while maintaining device opto-electronic performance. In this work, we demonstrate a 3-terminal GaN micro-light emitting transistor that combines a GaN/InGaN blue tunneling-based microLED with a GaN n-channel FET. The integrated device exhibits excellent gate control, drain current control and optical emission control. This work provides a promising pathway for future monolithic integration of GaN FETs with microLED to enable fast switching high efficiency microLED display and communication systems.”

Find the technical paper here. Published April 2024.

Rahman, Sheikh Ifatur, Mohammad Awwad, Chandan Joishi, Zane-Jamal Eddine, Brendan Gunning, Andrew Armstrong, and Siddharth Rajan. “Tunnel Junction-Enabled Monolithically Integrated GaN Micro-Light Emitting Transistor.” arXiv preprint arXiv:2404.05095 (2024).

The post A Micro Light-Emitting Transistor With An N-Channel GaN FET In Series With A GaN LED appeared first on Semiconductor Engineering.


Voltage Reference Architectures For Harsh Environments: Quantum Computing And Space

A technical paper titled “Cryo-CMOS Voltage References for the Ultrawide Temperature Range From 300 K Down to 4.2 K” was published by researchers at Delft University of Technology, QuTech, Kavli Institute of Nanoscience Delft, and École Polytechnique Fédérale de Lausanne (EPFL).

Abstract:

“This article presents a family of sub-1-V, fully-CMOS voltage references adopting MOS devices in weak inversion to achieve continuous operation from room temperature (RT) down to cryogenic temperatures. Their accuracy limitations due to curvature, body effect, and mismatch are investigated and experimentally validated. Implemented in 40-nm CMOS, the references show a line regulation better than 2.7%/V from a supply as low as 0.99 V. By applying dynamic element matching (DEM) techniques, a spread of 1.2% (3σ) from 4.2 to 300 K can be achieved, resulting in a temperature coefficient (TC) of 111 ppm/K. As the first significant statistical characterization extending down to cryogenic temperatures, the results demonstrate the ability of the proposed architectures to work under cryogenic harsh environments, such as space- and quantum-computing applications.”
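
For context, and assuming the common box-method definition of TC rather than anything stated in the paper itself, the headline numbers can be related as:

\[
\mathrm{TC} \;=\; \frac{V_{\mathrm{REF,max}} - V_{\mathrm{REF,min}}}{V_{\mathrm{REF,nom}}\,\bigl(T_{\mathrm{max}} - T_{\mathrm{min}}\bigr)} \times 10^{6}\ \mathrm{ppm/K}
\]

With ΔT = 300 K − 4.2 K ≈ 295.8 K, a TC of 111 ppm/K corresponds to a total reference variation of roughly 111 × 10⁻⁶ × 295.8 ≈ 3.3% across the full range (the paper may define TC differently).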

Find the technical paper here. Published April 2024.

J. van Staveren et al., “Cryo-CMOS Voltage References for the Ultrawide Temperature Range From 300 K Down to 4.2 K,” in IEEE Journal of Solid-State Circuits, doi: 10.1109/JSSC.2024.3378768.

Further Reading
The Race Toward Quantum Advantage
Enormous amounts of money have been invested into quantum computing, but so far it has not surpassed conventional computers. When will that change?
Managing P/P Tradeoffs With Voltage Droop Gets Trickier
Higher current densities set against lower power envelopes makes meeting specs more challenging, especially at advanced nodes.

The post Voltage Reference Architectures For Harsh Environments: Quantum Computing And Space appeared first on Semiconductor Engineering.


Sensor Fusion Challenges In Automotive

May 2, 2024 at 09:15

The number of sensors in automobiles is growing rapidly alongside new safety features and increasing levels of autonomy. The challenge is integrating them in a way that makes sense, because these sensors are optimized for different types of data, sometimes with different resolution requirements even for the same type of data, and frequently with very different latency, power consumption, and reliability requirements. Pulin Desai, group director for product marketing, management and business development at Cadence, talks about challenges with sensor fusion, the growing importance of four-dimensional sensing, what’s needed to future-proof sensor designs, and the difficulty of integrating one or more software stacks with conflicting requirements.

The post Sensor Fusion Challenges In Automotive appeared first on Semiconductor Engineering.


Fundamental Issues In Computer Vision Still Unresolved

May 2, 2024 at 09:08

Given computer vision’s place as the cornerstone of an increasing number of applications from ADAS to medical diagnosis and robotics, it is critical that its weak points be mitigated, such as the ability to identify corner cases or if algorithms are trained on shallow datasets. While well-known bloopers are often the result of human decisions, there are also fundamental technical issues that require further research.

“Computer vision” and “machine vision” were once used nearly interchangeably, with machine vision most often referring to the hardware embodiment of vision, such as in robots. Computer vision (CV), which started as the academic amalgam of neuroscience and AI research, has now become the dominant idea and preferred term.

“In today’s world, even the robotics people now call it computer vision,” said Jay Pathak, director, software development at Ansys. “The classical computer vision that used to happen outside of deep learning has been completely superseded. In terms of the success of AI, computer vision has a proven track record. Anytime self-driving is involved, any kind of robot that is doing work — its ability to perceive and take action — that’s all driven by deep learning.”

The original intent of CV was to replicate the power and versatility of human vision. Because vision is such a basic sense, the problem seemed like it would be far easier than higher-order cognitive challenges, like playing chess. Indeed, in the canonical anecdote about the field’s initial naïve optimism, Marvin Minsky, co-founder of the MIT AI Lab, having forgotten to include a visual system in a robot, assigned the task to undergraduates. But instead of being quick to solve, the problem consumed a generation of researchers.

Both academic and industry researchers work on problems that roughly can be split into three categories:

  • Image capture: The realm of digital cameras and sensors. It may use AI for refinements or it may rely on established software and hardware.
  • Image classification/detection: A subset of AI/ML that uses image datasets as training material to build models for visual recognition.
  • Image generation: The most recent work, which uses tools like LLMs to create novel images, and with the breakthrough demonstration of OpenAI’s Sora, even photorealistic videos.

Each one alone has spawned dozens of PhD dissertations and industry patents. Image classification/detection, the primary focus of this article, underlies ADAS, as well as many inspection applications.

The change from lab projects to everyday uses came as researchers switched from rules-based systems that simulated visual processing as a series of if/then statements (if red and round, then apple) to neural networks (NNs), in which computers learned to derive salient features by training on image datasets. NNs are basically layered graphs. The earliest model, 1943’s Perceptron, was a one-layer simulation of a biological neuron, which is one element in a vast network of interconnecting brain cells. Neurons have inputs (dendrites) and outputs (axons), driven by electrical and chemical signaling. The Perceptron and its descendant neural networks emulated the form but skipped the chemistry, instead focusing on electrical signals with algorithms that weighted input values. Over the decades, researchers refined different forms of neural nets with vastly increased inputs and layers, eventually becoming the deep learning networks that underlie the current advances in AI.
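
As a concrete illustration of "algorithms that weighted input values," here is a minimal, generic perceptron in Python. The toy dataset and training loop are invented for illustration and are not tied to any system mentioned in this article.

```python
# Minimal single-layer perceptron: weighted inputs, a threshold activation,
# and the classic error-driven weight update. Toy data is invented.
import random

def perceptron_train(samples, labels, epochs=20, lr=0.1):
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):              # y is 0 or 1
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if activation > 0 else 0
            err = y - pred                             # -1, 0, or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Toy, linearly separable data: label is 1 when x0 + x1 > 1.
data = [(random.random(), random.random()) for _ in range(200)]
labels = [1 if x0 + x1 > 1 else 0 for x0, x1 in data]
w, b = perceptron_train(data, labels)
print("learned weights:", w, "bias:", b)
```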

The most recent forms of these network models are convolutional neural networks (CNNs) and transformers. In highly simplified terms, the primary difference between them is that CNNs are very good at distinguishing local features, while transformers perceive a more globalized picture.

Thus, transformers are a natural evolution from CNNs and recurrent neural networks, as well as long short-term memory approaches (RNNs/LSTMs), according to Gordon Cooper, product marketing manager for Synopsys’ embedded vision processor family.

“You get more accuracy at the expense of more computations and parameters. More data movement, therefore more power,” said Cooper. “But there are cases where accuracy is the most important metric for a computer vision application. Pedestrian detection comes to mind. While some vision designs still will be well served with CNNs, some of our customers have determined they are moving completely to transformers. Ten years ago, some embedded vision applications that used DSPs moved to NNs, but there remains a need for both NNs and DSPs in a vision system. Developers still need a good handle on both technologies and are better served to find a vendor that can provide a combined solution.”
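
The local-versus-global distinction can be made concrete with a short, framework-free sketch (plain NumPy, random weights and invented shapes; not any vendor's implementation): the convolution below mixes only a three-position neighborhood of each location, while a single self-attention step lets every position weight every other position.

```python
# Sketch of the local vs. global receptive-field difference between a
# convolution and single-head, unparameterized self-attention. NumPy only.
import numpy as np

x = np.random.randn(8, 4)          # 8 positions (pixels/tokens), 4 features each

# Convolution: each output position mixes only a 3-position neighborhood.
kernel = np.random.randn(3, 4, 4)  # width 3, mapping 4 input to 4 output features
padded = np.pad(x, ((1, 1), (0, 0)))
conv_out = np.stack([
    sum(padded[i + k] @ kernel[k] for k in range(3)) for i in range(8)
])

# Self-attention: every position attends to every other position (global mixing).
scores = x @ x.T / np.sqrt(x.shape[1])                        # 8x8 similarities
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
attn_out = weights @ x

print(conv_out.shape, attn_out.shape)   # both (8, 4)
```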

The emergence of CNN-based neural networks began supplanting traditional CV techniques for object detection and recognition.

“While first implemented using hardwired CNN accelerator hardware blocks, many of those CNN techniques then quickly migrated to programmable solutions on software-driven NPUs and GPNPUs,” said Aman Sikka, chief architect at Quadric.

Two parallel trends continue to reshape CV systems. “The first is that transformer networks for object detection and recognition, with greater accuracy and usability than their convolution-based predecessors, are beginning to leave the theoretical labs and enter production service in devices,” Sikka explained. “The second is that CV experts are reinventing the classical ISP functions with NN and transformer-based models that offer superior results. Thus, we’ve seen waves of ISP functionality migrating first from pure hardwired to C++ algorithmic form, and now into advanced ML network formats, with a modern design today in 2024 consisting of numerous machine-learning models working together.”

CV for inspection
While CV is well-known for its essential role in ADAS, another primary application is inspection. CV has helped detect everything from cancer tumors to manufacturing errors, or in the case of IBM’s productized research, critical flaws in the built environment. For example, a drone equipped with the IBM system could check if a bridge had cracks, a far safer and more precise way to perform visual inspection than having a human climb to dangerous heights.

By combining visual transformers with self-supervised learning, the annotation requirement is vastly reduced. In addition, the company has introduced a new process named “visual prompting,” where the AI can be taught to make the correct distinctions with limited supervision by using “in-context learning,” such as a scribble as a prompt. The optimal end result is that it should be able to respond to LLM-like prompts, such as “find all six-inch cracks.”

“Even if it makes mistakes and needs the help of human annotations, you’re doing far less labeling work than you would with traditional CNNs, where you’d have to do hundreds if not thousands of labels,” said Jayant Kalagnanam, director, AI applications at IBM Research.

Beware the humans
Ideally, domain-specific datasets should increase the accuracy of identification. They are often created by expanding on foundation models already trained on general datasets, such as ImageNet. Both types of datasets are subject to human and technical biases. Google’s infamous racial identification gaffes resulted from both technical issues and subsequent human overcorrections.
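
Expanding a foundation model into a domain-specific one typically looks like the hedged sketch below: freeze an ImageNet-pretrained backbone and train a small task-specific head on the domain data. The backbone choice (torchvision's ResNet-18) and the dataset directory are stand-ins for illustration, not details of IBM's or Google's systems.

```python
# Hedged sketch: adapt an ImageNet-pretrained backbone to a small
# domain-specific classification task (e.g., crack / no-crack).
# The dataset directory "inspection_data/" is hypothetical.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():          # freeze the general-purpose features
    p.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, 2)   # new domain-specific head

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("inspection_data/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

opt = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(5):
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(backbone(images), labels)
        loss.backward()
        opt.step()
```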

Meanwhile, IBM was working on infrastructure identification. The company’s experience of getting its model to correctly identify cracks, including the problem of having too many images of one kind of defect, suggests a potential solution to the bias problem: allow the inclusion of contradictory annotations.

“Everybody who is not a civil engineer can easily say what a crack is,” said Cristiano Malossi, IBM principal research scientist. “Surprisingly, when we discuss which crack has to be repaired with domain experts, the amount of disagreement is very high because they’re taking different considerations into account and, as a result, they come to different conclusions. For a model, this means if there’s ambiguity in the annotations, it may be because the annotations have been done by multiple people, which may actually have the advantage of introducing less bias.”

Fig. 1: IBM’s Self-supervised learning model. Source: IBM

Corner cases and other challenges to accuracy
The true image dataset is infinity, which in practical terms leaves most computer vision systems vulnerable to corner cases, potentially with fatal results, noted Alan Yuille, Bloomberg distinguished professor of cognitive science and computer science at Johns Hopkins University.

“So-called ‘corner cases’ are rare events that likely aren’t included in the dataset and may not even happen in everyday life,” said Yuille. “Unfortunately, all datasets have biases, and algorithms aren’t necessarily going to generalize to data that differs from the datasets they’re trained on. And one thing we have found with deep nets is if there is any bias in the dataset, the deep nets are wonderful at finding it and exploiting it.”

Thus, corner cases remain a problem to watch for. “A classic example is the idea of a baby in the road. If you’re training a car, you’re typically not going to have many examples of images with babies in the road, but you definitely want your car to stop if it sees a baby,” said Yuille. “If the companies are working in constrained domains, and they’re very careful about it, that’s not necessarily going to be a problem for them. But if the dataset is in any way biased, the algorithms may exploit the biases and corner cases, and may not be able to detect them, even if they may be of critical importance.”

This includes instances, such as real-world weather conditions, where an image may be partly occluded. “In academic cases, you could have algorithms that when evaluated on standard datasets like ImageNet are getting almost perfect results, but then you can give them an image which is occluded, for example, by a heavy rain,” he said. “In cases like that, the algorithms may fail to work, even if they work very well under normal weather conditions. A term for this is ‘out of domain.’ So you train in one domain and that may be cars in nice weather conditions, you test in out of domain, where there haven’t been many training images, and the algorithms would fail.”

The underlying reasons go back to the fundamental challenge of trying to replicate a human brain’s visual processing in a computer system.

“Objects are three-dimensional entities. Humans have this type of knowledge, and one reason for that is humans learn in a very different way than machine learning AI algorithms,” Yuille said. “Humans learn over a period of several years, where they don’t only see objects. They play with them, they touch them, they taste them, they throw them around.”

By contrast, current algorithms do not have that type of knowledge.

“They are trained as classifiers,” said Yuille. “They are trained to take images and output a class label — object one, object two, etc. They are not trained to estimate the 3D structure of objects. They have some sort of implicit knowledge of some aspects of 3D, but they don’t have it properly. That’s one reason why if you take some of those models, and you’ve contaminated the images in some way, the algorithms start degrading badly, because the vision community doesn’t have datasets of images with 3D ground truth. Only for humans, do we have datasets with 3D ground truth.”

Hardware implementation, challenges
The hardware side is becoming a bottleneck, as academics and industry work to resolve corner cases and create ever-more comprehensive and precise results. “The complexity of the operation behind the transformer is quadratic,” said Malossi. “As a result, they don’t scale linearly with the size of the problem or the size of the model.”

While the situation might be improved with a more scalable iteration of transformers, for now progress has been stalled as the industry looks for more powerful hardware or any suitable hardware. “We’re at a point right now where progress in AI is actually being limited by the supply of silicon, which is why there’s so much demand, and tremendous growth in hardware companies delivering AI,” said Tony Chan Carusone, CTO of Alphawave Semi. “In the next year or two, you’re going to see more supply of these chips come online, which will fuel rapid progress, because that’s the only thing holding it back. The massive investments being made by hyperscalers is evidence about the backlogs in delivering silicon. People wouldn’t be lining up to write big checks unless there were very specific projects they had ready to run as soon as they get the silicon.”

As more AI silicon is developed, designers should think holistically about CV, since visual fidelity depends not only on sophisticated algorithms, but image capture by a chain of co-optimized hardware and software, according to Pulin Desai, group director of product marketing and management for Tensilica vision, radar, lidar, and communication DSPs at Cadence. “When you capture an image, you have to look at the full optical path. You may start with a camera, but you’ll likely also have radar and lidar, as well as different sensors. You have to ask questions like, ‘Do I have a good lens that can focus on the proper distance and capture the light? Can my sensor perform the DAC correctly? Will the light levels be accurate? Do I have enough dynamic range? Will noise cause the levels to shift?’ You have to have the right equipment and do a lot of pre-processing before you send what’s been captured to the AI. Remember, as you design, don’t think of it as a point solution. It’s an end-to-end solution. Every different system requires a different level of full path, starting from the lens to the sensor to the processing to the AI.”

One of the more important automotive CV applications is passenger monitoring, which can help reduce the tragedies of parents forgetting children who are strapped into child seats. But such systems depend on sensors, which can be challenged by noise to the point of being ineffective.

“You have to build a sensor so small it goes into your rearview mirror,” said Jayson Bethurem, vice president of marketing and business development at Flex Logix. “Then the issue becomes the conditions of your car. The car can have the sun shining right in your face, saturating everything, to the complete opposite, where it’s completely dark and the only light in the car is emitting off your dashboard. For that sensor to have that much dynamic range and the level of detail that it needs to have, that’s where noise creeps in, because you can’t build a sensor of that much dynamic range to be perfect. On the edges, or when it’s really dark or oversaturated bright, it’s losing quality. And those are sometimes the most dangerous times.”

Breaking into the black box
Finally, yet another serious concern for computer vision systems is the fact that they can’t be tested. Transformers, especially, are a notorious black box.

“We need to have algorithms that are more interpretable so that we can understand what’s going on inside them,” Yuille added. “AI will not be satisfactory till we move to a situation where we evaluate algorithms by being able to find the failure mode. In academia, and I hope companies are more careful, we test them on random samples. But if those random samples are biased in some way — and often they are — they may discount situations like the baby in the road, which don’t happen often. To find those issues, you’ve got to let your worst enemy test your algorithm and find the images that break it.”

Related Reading
Dealing With AI/ML Uncertainty
How neural network-based AI systems perform under the hood is currently unknown, but the industry is finding ways to live with a black box.

The post Fundamental Issues In Computer Vision Still Unresolved appeared first on Semiconductor Engineering.


Design Considerations In Photonics

May 1, 2024 at 09:05

Experts at the Table: Semiconductor Engineering sat down to talk about what CMOS and photonics engineers need to know to successfully collaborate, with James Pond, fellow at Ansys; Gilles Lamant, distinguished engineer at Cadence; and Mitch Heins, business development manager for photonic solutions at Synopsys. What follows are excerpts of that conversation. To view part one of this discussion, click here. Part two is here.


L-R: Ansys’s Pond, Cadence’s Lamant, Synopsys’ Heins

SE: What do engineers who have spent their careers in CMOS need to know about designing for photonics?

Lamant:  It’s hard, no illusion. I had good mentors, including both James and Mitch, so I actually did that transition. Ten years ago, I knew nearly nothing about photonics. It takes having good mentors who can help you. That’s the biggest thing. It’s not enough to just try the software on your own. In addition, having an RF background is very useful in many ways. Photonics is the multiplication of RF. In photonics, you have multiple modes. In RF, you tend to only consider one mode, but a lot of the theory behind photonics is very much a generalization of RF.

Heins: We try to make our photonics flow look as much as we can like our electronics flow. We try to take the last 30 to 40 years of learning in EDA and apply it to photonics. One thing we see a lot is that when people are coming right out of school in photonics, they don’t necessarily have a deep background in how to do IC design. There are a lot of things we’ve learned, like design rule checking, that we now take for granted. It’s like breathing. You’ve got to do it. Layout versus schematic, you’ve got to do it. Even circuit-level simulation. As CMOS veterans, you’d think, of course, you always simulate your circuit before you go to manufacture, but that’s not the case in photonics.

Lamant: Those people actually know photonics, but they don’t know how to create a system. This is a different type of challenge. People who know photonics, know how to make a device. They’re expert at that. But they have no idea how to take that device and bring it to a full system that they can sell. I see that in so many startups. It’s not to make the point for EDA software. They use free software. They use Klayout and all those things that they have access to in the university. But all of those tools are not part of the ecosystem of trying to make a system. They say, ‘We wrote a custom simulator to simulate our ring.’ But the question then is, ‘How do you simulate the driver for your ring that goes with it?’ I see many startups fail because they don’t have that ability to take it from academic thinking to production.

You have the electronics people trying to do photonics, they have some methodology background, and other things, but they have a gap in knowledge. Fortunately, they can get caught up, especially if they’re an analog designer or an RF designer. They can close that gap by talking to the right people. Unfortunately, the people who know photonics do not have the knowledge of how to make a full system out of it, and this is greatly hurting the photonics world.

Pond: I would agree. We have two worlds of engineers who have been coming together over the last decade or so. Those who came from an EDA background — electrical circuit design, especially RF — have probably had the easiest time. We’ve been doing better and better for them. Ten years ago there was nothing. Now, there’s a more traditional workflow that looks more like an EDA workflow. Still, they have a lot to learn. But the workflow, the cockpit, and so on, follows along with the EDA model.

In the other direction, maybe we haven’t done quite as good a job because people coming from a photonics background can be really thrown off by the scale and complexity of EDA tools. My impression, coming from photonics, is EDA tools have been developed over many decades. When that happens, you end up with tools that are incredibly powerful, but you wonder if they’d been developed more recently, maybe things wouldn’t be done this way. There’s a resistance on the photonic engineer side to dive into that world because there’s a lot to learn about the EDA workflows. People from photonics have to embrace and take on that EDA world, because, as Gilles says, it’s necessary, it really has to be done.

Heins:  Now, you’re seeing a ton of work going into how to apply AI to help folks bring these kinds of more complex flows under control. There’s so much to learn, but if AI can help you take care of the plumbing, if you will, you can advance much faster. We already extensively use AI for SoCs or packaged designs where you have tens and hundreds of billions of transistors. Photonics is a different vector. The signal itself is much more complex than electrical. The optimization that you have to go through is much more complex. But AI can help get a handle on that, so as we go forward, you’ll start to see these kinds of complexities simplified for people.

SE: Is there something analogous to error correction/parity checks in the photonics world?

Lamant: That can’t be analogized to photonics, because that’s about knowing the original signal and comparing it to the others. Once you have reconstituted your data, and it’s back to being a digital set of bits, then you have a parity check or different types of things that today have nothing to do with photonics because it’s the physical link. In physical links, you can do retiming or a lot of things, but the error correction happens independently, on both sides.

Heins: Tuning might be something closer to it. If my resonance frequencies are not as expected, can I detect that and then adjust for it? That happens a lot. You could think of those kinds of things as error correction.

Pond: Most of the kind of error correction we’re talking about is just using all the standard methods, whether you have an optical link or a copper link. But there are some really interesting things. We had a workflow, developed between Ansys and Cadence a few years ago on a PAM-4 system, where we did a driver simulation and the photonic link together. You look into shifting the timing of signals to compensate for different effects. If you look at the eye at different locations, it may look completely distorted and wrong, because you’re pre-compensating for an effect that’s going to come later through the photonic portion of the link. That’s one of the reasons why it’s important to be able to do the full system simulation. You can’t just independently optimize the driver electronics and the photonics. They have to be done together, so you can perform the signal correction work.
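
The pre-compensation Pond describes, where the transmitted eye looks deliberately distorted so that it arrives clean, can be mimicked with a generic transmit-side feed-forward equalizer. The sketch below uses an invented three-tap channel and invented tap weights; it is not the Ansys/Cadence PAM-4 workflow itself.

```python
# Generic sketch of transmit-side pre-compensation (3-tap FFE) for a PAM-4
# stream sent through a toy low-pass "link". All values are invented.
import numpy as np

rng = np.random.default_rng(0)
symbols = rng.choice([-3.0, -1.0, 1.0, 3.0], size=1000)     # PAM-4 levels

# Toy channel: a low-pass 3-tap response that smears each symbol into its
# neighbors (inter-symbol interference).
channel = np.array([0.2, 0.6, 0.2])

# Transmit-side 3-tap feed-forward equalizer (pre-emphasis).
ffe = np.array([-0.5, 2.0, -0.5])

plain_rx = np.convolve(symbols, channel, mode="same")       # no pre-compensation
pre_tx = np.convolve(symbols, ffe, mode="same")             # looks "distorted" at the driver
comp_rx = np.convolve(pre_tx, channel, mode="same")         # after the link

def isi_error(rx, main_gain):
    # Mean distance to the nearest ideal PAM-4 level after removing the
    # main-cursor gain: a crude stand-in for eye closure.
    levels = np.array([-3.0, -1.0, 1.0, 3.0])
    rx = rx / main_gain
    return np.mean(np.min(np.abs(rx[:, None] - levels[None, :]), axis=1))

combined = np.convolve(ffe, channel)                        # effective end-to-end response
print("channel only:         ", round(isi_error(plain_rx, channel.max()), 3))
print("with pre-compensation:", round(isi_error(comp_rx, combined.max()), 3))
```

The pre-emphasized waveform overshoots the ideal PAM-4 levels at the transmitter, which is the "distorted" eye Pond mentions, yet the recovered symbols land closer to their ideal levels after the channel.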

Heins: You do things like equalization. Dispersion is another one. You get different wavelengths traveling at different speeds, and we compensate for that. At the physical level, there are some corrections that do take place, depending on the kind of system you’re trying to make. If you’re in coherent systems, where path links matter, phase matters, that’s more like trying to make the circuit correct by construction, so that you don’t encounter problems.

That raises another issue, which is manufacturing variances. There, you’re back to doing lots of sensitivity analysis through Monte Carlo-type simulations, parameterized simulations, etc., where you’re trying to get a feel for the sensitivity of your device, to a shift that could occur, either through the manufacturing process or just as this system sits in its ecosystem of whatever’s around it. It’s not quite error correction, per se, but certainly trying to design for that is something we care about.
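
The Monte Carlo-style sensitivity analysis Heins describes can be sketched generically: sample assumed process and environmental variations, push them through a first-order model of the device response, and inspect the spread. All sensitivities and sigmas below are invented placeholders, not foundry or tool data.

```python
# Generic Monte Carlo sensitivity sketch for a photonic device parameter.
# First-order model and all numbers are invented for illustration only.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Assumed process/environment variations (1-sigma) around nominal.
d_width = rng.normal(0.0, 5.0, N)        # waveguide width error, nm
d_thickness = rng.normal(0.0, 2.0, N)    # layer thickness error, nm
d_temp = rng.normal(0.0, 3.0, N)         # operating temperature swing, K

# Assumed first-order sensitivities of a ring's resonance wavelength.
S_width = 1.0        # nm of resonance shift per nm of width error
S_thickness = 2.0    # nm per nm of thickness error
S_temp = 0.08        # nm per kelvin

shift = S_width * d_width + S_thickness * d_thickness + S_temp * d_temp

print("mean shift (nm):   ", round(shift.mean(), 3))
print("3-sigma shift (nm):", round(3 * shift.std(), 3))
# A designer would compare the 3-sigma spread against the available tuning
# range (e.g., thermal tuners) to judge whether the design is robust.
```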

SE: Any concluding thoughts?

Lamant: There is a lot of wondering and pondering right now, but it’s also exciting. We’ve reached the point where photonics is here to stay and will be part of more and more things. Looking forward, the interesting question is where it will become part of the actual data processing. Sensing is a terrific application for photonics, but I am not totally sold on the actual data processing. I’m not even using the word “computing” here, because processing and computing are very different things. Photonics is probably never going to be doing general computing. It may be doing specialized niche, like a Fourier transform-type of processing, and it needs to be part of a system.

Heins: It comes down to two things. What will really happen with quantum computing? And will quantum computing use photonics? A lot of people are looking at photonics for quantum computing because you can do a lot more of that work at room temperature than at 4 Kelvin or something like that — not all of it, but big chunks of it. If quantum computing actually becomes more than prototypes, and photonics is a big part of that, that could shift the answer. The other big issue in compute is we don’t have memory for photonics. If someone makes a breakthrough where suddenly states can be stored in some fashion, then all bets are off and everything changes again. But at this point, I don’t see anything promising.

One of the biggest challenges we have going forward for the whole ecosystem, in general, is lack of standards in this space, which makes interoperability between tools from our companies very difficult. The signal in photonics is very complex. It’s actually complex math, with real and imaginary parts. There are a lot of extra things that we have to take into account, and a lot of times we don’t even have common nomenclature or agreement on metrics and how to measure things. This is going to take time, but it’s being pushed by customers driving us to work together. For example, chiplets are great for photonics because a photonic IC is a chiplet. But all of a sudden, now you’re in a mixed domain, multi-physics type of environment, and there are some huge challenges to make that all work together. We have a pretty good handle on system functional verification, design-for-test, and all these things in the electronic IC world. In photonics, we’ve got a lot of work to do.

Pond: For me, it’s been exciting. I’ve been doing this for more than 20 years. In 2022, when I saw the first product with fibers actually coming out of the package, that was the dream from 20 years back. It took a lot of effort to get there. Things have been maturing very fast, especially in the last decade. That’s really promising from an EDA/EPDA-type of workflow perspective. The datacoms, as we’ve all said, are proven and not going to go away, given the investment from foundries, which is going to continue and even accelerate. It’s exciting times for all these other applications, from sensing to quantum and so on. There’s a lot of innovation possible. It’s not clear what’s going to be a winner yet and what’s not, but it’s a great time to be in photonics.

Read parts one and two of the discussion:
Photonics: The Former And Future Solution
Twenty-five years ago, photonics was supposed to be the future of high technology. Has that future finally arrived?
The Challenges Of Working With Photonics
From curvilinear designs to thermal vulnerabilities, what engineers need to know about the advantages and disadvantages of photonics.

The post Design Considerations In Photonics appeared first on Semiconductor Engineering.


Blog Review: May 1

May 1, 2024 at 09:01

Cadence’s Vatsal Patel stresses the importance of having testing and training capabilities for high-bandwidth memory to prevent the entire SoC from becoming useless and points to key HBM DRAM test instructions through IEEE 1500.

In a podcast, Siemens’ Stephen V. Chavez chats with Anaya Vardya of American Standard Circuits about the growing significance of high density interconnect and Ultra HDI technologies, which enable denser component placement and increased signal integrity compared to traditional PCB designs.

Synopsys’ Ian Land and Randy Fish find that silicon lifecycle management is increasingly being used on chips that target the aerospace and government market to ensure system health and longevity.

Arm’s Hristo Belchev looks at how to enable testing of system designs using the Memory Partitioning and Monitoring (MPAM) Arm architecture supplement, which allows privileged software to partition caches, memory controllers and interconnects on the hardware level.

Keysight’s Jonathon Wright considers where generative AI can add value in software testing by proposing a wide range of scenarios and improving communication between different stakeholders.

Ansys’ Laura Carter checks out how simulation is used to reduce the risks to drivers during a crash in stock car racing.

SEMI’s Maria Daniela Perez chats with Owen J. Guy of Swansea University about the challenge of onboarding talent within the microelectronics industry and the importance of ensuring students receive hands-on experience and exposure to real-world applications.

And don’t miss the blogs featured in the latest Systems & Design newsletter:

Technology Editor Brian Bailey suggests that although it is great to see the DAC conference come back to life, EDA companies need to do something about the show floor.

Siemens’ John Ferguson shows how to glean useful information well before all the details of an assembly are known.

Axiomise’s Ashish Darbari explains how formal verification can help improve chips.

Arteris’ Frank Schirrmeister tracks the race to centralized computing in automotive.

Synopsys’ Andrew Appleby explores the co-optimization of foundation IP and design flows for new transistors.

Cadence’s Anika Sunda looks at controlling the access to physical memory addresses.

Keysight’s Ben Coffin digs into how AI will be used in just about every subsystem of 6G networks.

The post Blog Review: May 1 appeared first on Semiconductor Engineering.


Framework For Early Anomaly Detection In AMS Components Of Automotive SoCs

A technical paper titled “Enhancing Functional Safety in Automotive AMS Circuits through Unsupervised Machine Learning” was published by researchers at University of Texas at Dallas, Intel Corporation, NXP Semiconductors, and Texas Instruments.

Abstract:

“Given the widespread use of safety-critical applications in the automotive field, it is crucial to ensure the Functional Safety (FuSa) of circuits and components within automotive systems. The Analog and Mixed-Signal (AMS) circuits prevalent in these systems are more vulnerable to faults induced by parametric perturbations, noise, environmental stress, and other factors, in comparison to their digital counterparts. However, their continuous signal characteristics present an opportunity for early anomaly detection, enabling the implementation of safety mechanisms to prevent system failure. To address this need, we propose a novel framework based on unsupervised machine learning for early anomaly detection in AMS circuits. The proposed approach involves injecting anomalies at various circuit locations and individual components to create a diverse and comprehensive anomaly dataset, followed by the extraction of features from the observed circuit signals. Subsequently, we employ clustering algorithms to facilitate anomaly detection. Finally, we propose a time series framework to enhance and expedite anomaly detection performance. Our approach encompasses a systematic analysis of anomaly abstraction at multiple levels pertaining to the automotive domain, from hardware- to block-level, where anomalies are injected to create diverse fault scenarios. By monitoring the system behavior under these anomalous conditions, we capture the propagation of anomalies and their effects at different abstraction levels, thereby potentially paving the way for the implementation of reliable safety mechanisms to ensure the FuSa of automotive SoCs. Our experimental findings indicate that our approach achieves 100% anomaly detection accuracy and significantly optimizes the associated latency by 5X, underscoring the effectiveness of our devised solution.”
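
The inject-anomalies, extract-features, cluster sequence in the abstract can be mocked up in a few lines. This is only a hedged sketch: the "circuit" is a synthetic sine wave, the injected faults are simple glitches, and scikit-learn's DBSCAN stands in for whatever clustering algorithms the authors actually employ.

```python
# Hedged sketch of the inject-extract-cluster flow for AMS anomaly detection.
# The "circuit" is a synthetic sine wave; injected anomalies are voltage spikes.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 20_000)
signal = np.sin(2 * np.pi * 50 * t) + 0.02 * rng.normal(size=t.size)

# Inject a few anomalies (spikes) at random locations.
fault_idx = rng.choice(t.size, size=5, replace=False)
signal[fault_idx] += 2.0

# Feature extraction: split into windows and compute simple statistics.
win = 200
windows = signal[: t.size // win * win].reshape(-1, win)
features = np.column_stack([
    windows.mean(axis=1),
    windows.std(axis=1),
    windows.max(axis=1) - windows.min(axis=1),   # peak-to-peak
])

# Unsupervised clustering: DBSCAN labels sparse outlying windows as -1.
labels = DBSCAN(eps=0.9, min_samples=5).fit_predict(
    StandardScaler().fit_transform(features)
)
print("windows flagged as anomalous:", np.where(labels == -1)[0])
```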

Find the technical paper here. Published April 2024 (preprint).

Arunachalam, Ayush, Ian Kintz, Suvadeep Banerjee, Arnab Raha, Xiankun Jin, Fei Su, Viswanathan Pillai Prasanth, Rubin A. Parekhji, Suriyaprakash Natarajan, and Kanad Basu. “Enhancing Functional Safety in Automotive AMS Circuits through Unsupervised Machine Learning.” arXiv preprint arXiv:2404.01632 (2024).

Related Reading
Creating IP In The Shadow Of ISO 26262
Automotive regulations can turn an interesting chip design project into a complex and often frustrating checklist exercise. In the case of ISO 26262, that includes a 12-part standard for automotive safety.
Shifting Left Using Model-Based Engineering
MBSE becomes useful for identifying potential problems earlier in the design flow, but it’s not perfect.

 

The post Framework For Early Anomaly Detection In AMS Components Of Automotive SoCs appeared first on Semiconductor Engineering.

Metrology For 2D Materials: A Review From The International Roadmap For Devices And Systems (NIST, Et Al.)

A technical paper titled “Metrology for 2D materials: a perspective review from the international roadmap for devices and systems” was published by researchers at Arizona State University, IBM Research, Unity-SC, and the National Institute of Standards and Technology (NIST).

Abstract:

“The International Roadmap for Devices and Systems (IRDS) predicts the integration of 2D materials into high-volume manufacturing as channel materials within the next decade, primarily in ultra-scaled and low-power devices. While their widespread adoption in advanced chip manufacturing is evolving, the need for diverse characterization methods is clear. This is necessary to assess structural, electrical, compositional, and mechanical properties to control and optimize 2D materials in mass-produced devices. Although the lab-to-fab transition remains nascent and a universal metrology solution is yet to emerge, rapid community progress underscores the potential for significant advancements. This paper reviews current measurement capabilities, identifies gaps in essential metrology for CMOS-compatible 2D materials, and explores fundamental measurement science limitations when applying these techniques in high-volume semiconductor manufacturing.”

Find the technical paper here. Published April 2024.

Changming Wu et al., Freeform direct-write and rewritable photonic integrated circuits in phase-change thin films.Sci. Adv.10,eadk1361(2024).DOI:10.1126/sciadv.adk1361

Further Reading
Closing The Test And Metrology Gap In 3D-IC Packages
Finding defects in stacked die is a daunting challenge. Equipment, processes, and methodologies all need modifications, and that’s just for starters.
Pressure Builds On Failure Analysis Labs
Goal is to find the causes of failures faster and much earlier — preferably before first silicon.

The post Metrology For 2D Materials: A Review From The International Roadmap For Devices And Systems (NIST, Et Al.) appeared first on Semiconductor Engineering.
