Research Bits: Aug. 20

EUV mirror interference lithography

Researchers from the Paul Scherrer Institute developed an EUV lithography technique that can produce conductive tracks with a separation of just five nanometers by exposing the sample indirectly rather than directly.

Called EUV mirror interference lithography (MIL), the technique uses two mutually coherent beams that are reflected onto the wafer by two identical mirrors. The beams then create an interference pattern whose period depends on both the angle of incidence and the wavelength of the light. In addition to the 5nm resolution, the conductive tracks were found to have high contrast and sharp edges.
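
As a rough check (this is the standard two-beam interference relation, not a figure taken from the paper), the period P of the fringes formed by two mutually coherent beams arriving at angles of ±θ to the surface normal is:

```latex
P = \frac{\lambda}{2\sin\theta}
% Illustrative numbers only: if the reported 5 nm tracks correspond to a 10 nm period
% (5 nm half-pitch lines and spaces), then with \lambda = 13.5\,\mathrm{nm} the beams
% must satisfy \sin\theta \approx 0.675, i.e. \theta \approx 42^{\circ}.
```

Shrinking the period therefore means steepening the angle of incidence or shortening the wavelength, which is why 13.5nm EUV light reaches pitches that are out of range for interference setups at longer wavelengths.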

“Our results show that EUV lithography can produce extremely high resolutions, indicating that there are no fundamental limitations yet. This is really exciting since it extends the horizon of what we deem as possible and can also open up new avenues for research in the field of EUV lithography and photoresist materials,” said Dimitrios Kazazis of the Laboratory of X-ray Nanoscience and Technologies at PSI in a statement.

The method is currently too slow for industrial chip production and can produce only simple and periodic structures. However, the team sees it as a resource for early development of new photoresists and plans to continue research to improve its performance and capabilities. [1]

Artificial sapphire dielectrics

Researchers from Shanghai Institute of Microsystem and Information Technology created artificial sapphire dielectric wafers made of single-crystalline aluminum oxide (Al2O3).

“The aluminum oxide we created is essentially artificial sapphire, identical to natural sapphire in terms of crystal structure, dielectric properties and insulation characteristics,” said Tian Zi’ao, a researcher at SIMIT, in a release.

“By using intercalation oxidation technology on single-crystal aluminum, we were able to produce this single-crystal aluminum oxide dielectric material,” added Di Zengfeng, a researcher at SIMIT, in a release. “Unlike traditional amorphous dielectric materials, our crystalline sapphire can achieve exceptionally low leakage at just one-nanometer level.”

The researchers hope the improved dielectric properties could lead to more power-efficient devices. [2]

Accelerating computation on sparse data sets

Researchers from Lehigh University and Lawrence Berkeley National Laboratory developed specialized hardware that enables faster computation on data sets with a high proportion of zero values, which are common in bioinformatics and the physical sciences. The hardware is portable and can be integrated into general-purpose multi-core computers.

“The accelerating sparse accumulation (ASA) architecture includes a hardware buffer, a hardware cache, and a hardware adder. It takes two sparse matrices, performs a matrix multiplication, and outputs a sparse matrix. The ASA only uses non-zero data when it performs this operation, which makes the architecture more efficient. The hardware buffer and the cache allow the computer processor to easily manage the flow of data; the hardware adder allows the processor to quickly generate values to fill up the empty matrices,” explained Berkeley Lab’s Ingrid Ockert in a press release. “Once these values are calculated, the ASA system produces an output. This operation is a building block that the researcher can then use in other functions. For instance, researchers could use these outputs to generate graphs or they could process these outputs through other algorithms such as a Sparse General Matrix-Matrix Multiplication (SpGEMM) algorithm.”
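
For readers more familiar with software, the sparse-accumulation idea maps onto the column-by-column (Gustavson-style) formulation of SpGEMM. The sketch below is a plain-Python illustration only, assuming a simple dict-of-dicts matrix format; the ASA work implements the accumulator in hardware (buffer, cache, and adder) rather than in software:

```python
# Minimal software sketch of sparse accumulation in column-wise SpGEMM.
# Matrices are stored column-major as {col: {row: value}}; only non-zero
# entries are ever touched, which is the point of the technique.
from collections import defaultdict

def spgemm(A, B):
    """Return C = A @ B for sparse matrices in {col: {row: value}} form."""
    C = {}
    for j, b_col in B.items():                     # each column j of B
        acc = defaultdict(float)                   # sparse accumulator for column j of C
        for k, b_kj in b_col.items():              # non-zero B[k][j]
            for i, a_ik in A.get(k, {}).items():   # non-zero A[i][k] in column k of A
                acc[i] += a_ik * b_kj              # accumulate only non-zero products
        if acc:
            C[j] = dict(acc)
    return C

# Tiny example: a 2x2 identity times a matrix with a single non-zero.
A = {0: {0: 1.0}, 1: {1: 1.0}}
B = {0: {1: 5.0}}
print(spgemm(A, B))   # {0: {1: 5.0}}
```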

The ASA architecture could accelerate a variety of algorithms. Microbiome research is presented as an example, where it could be used to run metagenomic assembly and similarity clustering algorithms such as Markov Cluster Algorithms that quickly characterize the genetic markers of all of the organisms in a soil sample. [3]

References

[1] I. Giannopoulos, I. Mochi, M. Vockenhuber, Y. Ekinci, and D. Kazazis, "Extreme ultraviolet lithography reaches 5 nm resolution," Nanoscale, Aug. 12, 2024. https://doi.org/10.1039/D4NR01332H

[2] D. Zeng, Z. Zhang, Z. Xue, et al., "Single-crystalline metal-oxide dielectrics for top-gate 2D transistors," Nature, 2024. https://doi.org/10.1038/s41586-024-07786-2

[3] C. Zhang, M. Bremer, C. Chan, J. M. Shalf, and X. Guo, "ASA: Accelerating Sparse Accumulation in Column-wise SpGEMM," ACM Transactions on Architecture and Code Optimization (TACO), vol. 19, no. 4, article 49, pp. 1-24. https://doi.org/10.1145/3543068


Metrology And Inspection For The Chiplet Era

New developments and innovations in metrology and inspection will enable chipmakers to identify and address defects faster and with greater accuracy than ever before, all of which will be required at future process nodes and in densely-packed assemblies of chiplets.

These advances will affect both front-end and back-end processes, providing increased precision and efficiency, combined with artificial intelligence/machine learning and big data analytics. These kinds of improvements will be crucial for meeting the industry’s changing needs, enabling deeper insights and more accurate measurements at rates suitable for high-volume manufacturing. But gaps still need to be filled, and new ones are likely to show up as new nodes and processes are rolled out.

“As semiconductor devices become more complex, the demand for high-resolution, high-accuracy metrology tools increases,” says Brad Perkins, product line manager at Nordson Test & Inspection. “We need new tools and techniques that can keep up with shrinking geometries and more intricate designs.”

The shift to high-NA EUV lithography (0.55 NA EUV) at the 2nm node and beyond is expected to exacerbate stochastic variability, demanding more robust metrology solutions on the front end. Traditional critical dimension (CD) measurements alone are insufficient for the level of analysis required. Comprehensive metrics, including line-edge roughness (LER), line-width roughness (LWR), local edge-placement error (LEPE), and local CD uniformity (LCDU), alongside CD measurements, are necessary for ensuring the integrity and performance of advanced semiconductor devices. These metrics require sophisticated tools that can capture and analyze tiny variations at the nanometer scale, where even slight discrepancies can significantly impact device functionality and yield.
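
As a concrete, simplified illustration of what these metrics quantify, LER, LWR, and LCDU are commonly reported as 3-sigma values computed from measured edge positions (LEPE additionally requires the design-intent edge location). The sketch below assumes edge coordinates have already been extracted from CD-SEM images; production analysis also removes SEM noise and often works from power spectral densities:

```python
# Simplified roughness metrics from measured edge positions (assumed conventions:
# 3-sigma values; real CD-SEM analysis adds noise correction and PSD-based methods).
import numpy as np

def line_roughness(left_edges, right_edges):
    """left_edges, right_edges: arrays of edge positions (nm) sampled along one line."""
    left = np.asarray(left_edges, dtype=float)
    right = np.asarray(right_edges, dtype=float)
    ler_left = 3.0 * np.std(left)      # line-edge roughness of the left edge
    ler_right = 3.0 * np.std(right)    # line-edge roughness of the right edge
    widths = right - left
    lwr = 3.0 * np.std(widths)         # line-width roughness
    cd = widths.mean()                 # mean critical dimension of this line
    return ler_left, ler_right, lwr, cd

def lcdu(cds):
    """Local CD uniformity: 3-sigma of CDs measured across nearby features."""
    return 3.0 * np.std(np.asarray(cds, dtype=float))
```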

“Metrology is now at the forefront of yield, especially considering the current demands for DRAM and HBM,” says Hamed Sadeghian, president and CEO of Nearfield Instruments. “The next generations of HBMs are approaching a stage where hybrid bonding will be essential due to the increasing stack thickness. Hybrid bonding requires high resolutions in vertical directions to ensure all pads, and the surface height versus the dielectric, remain within nanometer-scale process windows. Consequently, the tools used must be one order of magnitude more precise.”

To address these challenges, companies are developing hybrid metrology systems that combine various measurement techniques for a comprehensive data set. Integrating scatterometry, electron microscopy, and/or atomic force microscopy allows for more thorough analysis of critical features. Moreover, AI and ML algorithms enhance the predictive capabilities of these tools, enabling process adjustments.

“Our customers who are pushing into more advanced technology nodes are desperate to understand what’s driving their yield,” says Ronald Chaffee, senior director of applications engineering at NI/Emerson Test & Measurement. “They may not know what all the issues are, but they are gathering all possible data — metrology, AEOI, and any measurable parameters — and seeking correlations.”

Traditional methods for defect detection, pattern recognition, and quality control typically used spatial pattern-recognition modules and wafer image-based algorithms to address wafer-level issues. “However, we need to advance beyond these techniques,” says Prasad Bachiraju, senior director of business development at Onto Innovation. “Our observations show that about 20% of wafers have systematic issues that can limit yield, with nearly 4% being new additions. There is a pressing need for advanced metrology for in-line monitoring to achieve zero-defect manufacturing.”

Several companies recently announced metrology innovations to provide more precise inspections, particularly for difficult-to-see areas, edge effects, and highly reflective surfaces.

Nordson unveiled its AMI SpinSAM acoustic rotary scan system. The system represents a significant departure from traditional raster scan methods, utilizing a rotational scanning approach. Rather than moving the wafer in an x,y pattern relative to a stationary lens, the wafer spins, similar to a record player. This reduces motion over the wafer and increases inspection speed, negating the need for image stitching and improving image quality.

“For years, we’d been trying to figure out this technique, and it’s gratifying to finally achieve it. It’s something we’ve always thought would be incredibly beneficial,” says Perkins. “The SpinSAM is designed primarily to enhance inspection speed and efficiency, addressing the common industry demand for more product throughput and better edge inspection capabilities.”

Meanwhile, Nearfield Instruments introduced a multi-head atomic force microscopy (AFM) system called QUADRA. It is a high-throughput, non-destructive metrology tool for HVM that features a novel multi-miniaturized AFM head architecture. Nearfield claims the parallel independent multi-head scanner can deliver a 100-fold throughput advantage versus conventional single-probe AFM tools. This architecture allows for precise measurements of high-aspect-ratio structures and complex 3D features, critical for advanced memory (3D NAND, DRAM, HBM) and logic processes.


Fig. 1: Image capture comparison of standard AFM and multi-head AFM. Source: Nearfield Instruments

In April, Onto Innovation debuted an advancement in subsurface defect inspection technology with the release of its Dragonfly G3 inspection system. The new system allows for 100% wafer inspection, targeting subsurface defects that can cause yield losses, such as micro-cracks and other hidden flaws that may lead to entire wafers breaking during subsequent processing steps. The Dragonfly G3 utilizes novel infrared (IR) technology combined with specially designed algorithms to detect these defects, which previously were undetectable in a production environment. This new capability supports HBM, advanced logic, and various specialty segments, and aims to improve final yield and cost savings by reducing scrapped wafers and die stacks.

More recently, researchers at the Paul Scherrer Institute announced a high-performance X-ray tomography technique using burst ptychography. This new method can provide non-destructive, detailed views of nanostructures as small as 4nm in materials like silicon and metals at a fast acquisition rate of 14,000 resolution elements per second. The tomographic back-propagation reconstruction allows imaging of samples up to ten times larger than the conventional depth of field.

There are other technologies and techniques for improving metrology in semiconductor manufacturing, as well, including wafer-level ultrasonic inspection, which involves flipping the wafer to inspect from the other side. New acoustic microscopy techniques, such as scanning acoustic microscopy (SAM) and time-of-flight acoustic microscopy (TOF-AM), enable the detection and characterization of very small defects, such as voids, delaminations, and cracks within thin films and interfaces.

“We used to look at 80 to 100 micron resist films, but with 3D integrated packaging, we’re now dealing with films that are 160 to 240 microns—very thick resist films,” says Christopher Claypool, senior application scientist at Bruker OCD. “In TSVs and microbumps, the dominant technique today is white light interferometry, which provides profile information. While it has some advantages, its throughput is slow, and it’s a focus-based technique. This limitation makes it difficult to measure TSV structures smaller than four or five microns in diameter.”

Acoustic metrology tools equipped with the newest generation of focal length transducers (FLTs) can focus acoustic waves with precision down to a few nanometers, allowing for non-destructive detailed inspection of edge defects and critical stress points. This capability is particularly useful for identifying small-scale defects that might be missed by other inspection methods.

The development and integration of smart sensors in metrology equipment is instrumental in collecting the vast amounts of data needed for precise measurement and quality control. These sensors are highly sensitive and capable of operating under various environmental conditions, ensuring consistent performance. One significant advantage of smart sensors is their ability to facilitate predictive maintenance. By continuously monitoring the health and performance of metrology equipment, these sensors can predict potential failures and schedule maintenance before significant downtime occurs. This capability enhances the reliability of the equipment, reduces maintenance costs, and improves overall operational efficiency.

Smart sensors also are being developed to integrate seamlessly with metrology systems, offering real-time data collection and analysis. These sensors can monitor various parameters throughout the manufacturing process, providing continuous feedback and enabling quick adjustments to prevent defects. Smart sensors, combined with big data platforms and advanced data analytics, allow for more efficient and accurate defect detection and classification.

Critical stress points

A persistent challenge in semiconductor metrology is the identification and inspection of defects at critical stress points, particularly at the silicon edges. For bonded wafers, it’s at the outer ring of the wafer. For chip-on-wafer packaging, it’s at the edge of the chips. These edge defects are particularly problematic because they occur at the highest stress points from the neutral axis, making them more prone to failures. As semiconductor devices continue to involve more intricate packaging techniques, such as chip-on-wafer and wafer-level packaging, the focus on edge inspection becomes even more critical.

“When defects happen in a factory, you need imaging that can detect and classify them,” says Onto’s Bachiraju. “Then you need to find the root causes of where they’re coming from, and for that you need the entire data integration and a big data platform to help with faster analysis.”

Another significant challenge in semiconductor metrology is ensuring the reliability of known good die (KGD), especially as advanced packaging techniques and chiplets become more prevalent. Ensuring that every chip/chiplet in a stacked die configuration is of high quality is essential for maintaining yield and performance, but the speed of metrology processes is a constant concern. This leads to a balancing act between thoroughness and efficiency. The industry continuously seeks to develop faster machines that can handle the increasing volume and complexity of inspections without compromising accuracy. In this race, innovations in data processing and analysis are key to achieving quicker results.

“Customers would like, generally, 100% inspection for a lot of those processes because of the known good die, but it’s cost-prohibitive because the machines just can’t run fast enough,” says Nordson’s Perkins.

Metrology and Industry 4.0

Industry 4.0 — a term introduced in Germany in 2011 for the fourth industrial revolution, and called smart manufacturing in the U.S. — emphasizes the integration of digital technologies such as the Internet of Things, artificial intelligence, and big data analytics into manufacturing processes. Unlike past revolutions driven by mechanization, electrification, and computerization, Industry 4.0 focuses on connectivity, data, and automation to enhance manufacturing capabilities and efficiency.

“The better the data integration is, the more efficient the yield ramp,” says Dieter Rathei, CEO of DR Yield. “It’s essential to integrate all available data into the system for effective monitoring and analysis.”

In semiconductor manufacturing, this shift toward Industry 4.0 is particularly transformative, driven by the increasing complexity of semiconductor devices and the demand for higher precision and yield. Traditional metrology methods, heavily reliant on manual processes and limited automation, are evolving into highly interconnected systems that enable real-time data sharing and decision-making across the entire production chain.

“There haven’t been many tools to consolidate different data types into a single platform,” says NI’s Chaffee. “Historically, yield management systems focused on testing, while FDC or process systems concentrated on the process itself, without correlating the two. As manufacturers push into the 5, 3, and 2nm spaces, they’re discovering that defect density alone isn’t the sole governing factor. Process control is also crucial. By integrating all data, even the most complex correlations that a human might miss can be identified by AI and ML. The goal is to use machine learning to detect patterns or connections that could help control and optimize the manufacturing process.”
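
A toy version of the kind of cross-domain correlation Chaffee describes might look like the sketch below. The file names and columns are invented for illustration; a production system would pull from FDC, metrology, and test databases and feed ML models rather than a simple correlation screen:

```python
# Hypothetical first-pass screen: merge per-wafer process (FDC), metrology,
# and yield data, then rank parameters by their correlation with yield.
import pandas as pd

fdc = pd.read_csv("fdc_summary.csv")        # per-wafer process sensor summaries (assumed file)
metrology = pd.read_csv("metrology.csv")    # per-wafer CD/overlay/film measurements (assumed file)
wafer_yield = pd.read_csv("yield.csv")      # per-wafer yield from test (assumed file)

merged = fdc.merge(metrology, on="wafer_id").merge(wafer_yield, on="wafer_id")

correlations = (merged.drop(columns=["wafer_id"])
                      .corr(numeric_only=True)["yield"]
                      .drop("yield")
                      .abs()
                      .sort_values(ascending=False))
print(correlations.head(10))   # strongest linear associations; ML models would follow
```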

IoT forms the backbone of Industry 4.0 by connecting various devices, sensors, and systems within the manufacturing environment. In semiconductor manufacturing, IoT enables seamless communication between metrology tools, production equipment, and factory management systems. This interconnected network facilitates real-time monitoring and control of manufacturing processes, allowing for immediate adjustments and optimization.

“You need to integrate information from various sources, including sensors, metrology tools, and test structures, to build predictive models that enhance process control and yield improvement,” says Michael Yu, vice president of advanced solutions at PDF Solutions. “This holistic approach allows you to identify patterns and correlations that were previously undetectable.”

AI and ML are pivotal in processing and analyzing the vast amounts of data generated in a smart factory. These technologies can identify patterns, predict equipment failures, and optimize process parameters with a level of precision and speed unattainable by human operators alone. In semiconductor manufacturing, AI-driven analytics enhance process control, improve yield rates, and reduce downtime. “One of the major trends we see is the integration of artificial intelligence and machine learning into metrology tools,” says Perkins. “This helps in making sense of the vast amounts of data generated and enables more accurate and efficient measurements.”

AI’s role extends further as it assists in discovering anomalies within the production process that might have gone unnoticed with traditional methods. AI algorithms integrated into metrology systems can dynamically adjust processes in real-time, ensuring that deviations are corrected before they affect the end yield. This incorporation of AI minimizes defect rates and enhances overall production quality.

“Our experience has shown that in the past 20 years, machine learning and AI algorithms have been critical for automatic data classification and die classification,” says Bachiraju. “This has significantly improved the efficiency and accuracy of our metrology tools.”

Big data analytics complements AI/ML by providing the infrastructure necessary to handle and interpret massive datasets. In semiconductor manufacturing, big data analytics enables the extraction of actionable insights from data generated by IoT devices and production systems. This capability is crucial for predictive maintenance, quality control, and continuous process improvement.

“With big data, we can identify patterns and correlations that were previously impossible to detect, leading to better process control and yield improvement,” says Perkins.

Big data analytics also helps in understanding the lifecycle of semiconductor devices from production to field deployment. By analyzing product performance data over time, manufacturers can predict potential failures and enhance product designs, increasing reliability and lifecycle management.

“In the next decade, we see a lot of opportunities for AI,” says DR Yield’s Rathei. “The foundation for these advancements is the availability of comprehensive data. AI models need extensive data for training. Once all the data is available, we can experiment with different models and ideas. The ingenuity of engineers, combined with new tools, will drive exponential progress in this field.”

Metrology gaps remain

Despite recent advancements in metrology, analytics, and AI/ML, several gaps remain, particularly in the context of high-volume manufacturing (HVM) and next-generation devices. The U.S. Commerce Department’s CHIPS R&D Metrology Program, along with industry stakeholders, has highlighted seven “grand challenges,” areas where current metrology capabilities fall short:

Metrology for materials purity and properties: There is a critical need for new measurements and standards to ensure the purity and physical properties of materials used in semiconductor manufacturing. Current techniques lack the sensitivity and throughput required to detect particles and contaminants throughout the supply chain.

Advanced metrology for future manufacturing: Next-generation semiconductor devices, such as gate-all-around (GAA) FETs and complementary FETs (CFETs), require breakthroughs in both physical and computational metrology. Existing tools are not yet capable of providing the resolution, sensitivity, and accuracy needed to characterize the intricate features and complex structures of these devices. This includes non-destructive techniques for characterizing defects and impurities at the nanoscale.

“There is a secondary challenge with some of the equipment in metrology, which often involves sampling data from single points on a wafer, much like heat test data that only covers specific sites,” says Chaffee. “To be meaningful, we need to move beyond sampling methods and find creative ways to gather information from every wafer, integrating it into a model. This involves building a knowledge base that can help in detecting patterns and correlations, which humans alone might miss. The key is to leverage AI and machine learning to identify these correlations and make sense of them, especially as we push into the 5, 3, and 2nm spaces. This process is iterative and requires a holistic approach, encompassing various data points and correlating them to understand the physical boundaries and the impact on the final product.”

Metrology for advanced packaging: The integration of sophisticated components and novel materials in advanced packaging technologies presents significant metrology challenges. There is a need for rapid, in-situ measurements to verify interfaces, subsurface interconnects, and internal 3D structures. Current methods do not adequately address issues such as warpage, voids, substrate yield, and adhesion, which are critical for the reliability and performance of advanced packages.

Modeling and simulating semiconductor materials, designs, and components: Modeling and simulating semiconductor materials, device designs, and components requires advanced computational models and data analysis tools. There is a need for standards and validation tools to support digital twins and other advanced simulation techniques that can optimize process development and control.

“Predictive analytics is particularly important,” says Chaffee. “They aim to determine the probability of any given die on a wafer being the best yielding or presenting issues. By integrating various data points and running different scenarios, they can identify and understand how specific equipment combinations, sequences and processes enhance yields.”

Modeling and simulating semiconductor processes: Current capabilities are limited in their ability to seamlessly integrate the entire semiconductor value chain, from materials inputs to system assembly.

“Part of the problem comes from the back-end packaging and assembly process, but another part of the problem can originate from the quality of the wafer itself, which is determined during the front-end process,” says PDF’s Yu. “An effective ML model needs to incorporate both front-end and back-end information, including data from equipment sensors, metrology, and structured test information, to make accurate predictions and take proactive actions to correct the process.”

Standardizing new materials and processes: The development of future information and communication technologies hinges on the creation of new standards and validation methods. Current reference materials and calibration services do not meet the requirements for next-generation materials and processes, such as those used in advanced packaging and heterogeneous integration. This gap hampers the industry’s ability to innovate and maintain competitive production capabilities.

Metrology to enhance security and provenance of components and products: With the increasing complexity of the semiconductor supply chain, there is a need for metrology solutions that can ensure the security and provenance of components and products. This involves developing methods to trace materials and processes throughout the manufacturing lifecycle to prevent counterfeiting and ensure compliance with regulatory standards.

“The focus on security and sharing changes the supplier relationship into more of a partnership and less of a confrontation,” says Chaffee. “Historically, there’s always been a concern of data flowing across that boundary. People are very protective about their process, and other people are very protective about their product. But once you start pushing into the deep sub-micron space, those barriers have to come down. The die are too expensive for them not to communicate, but they can still do so while protecting their IP. Companies are starting to realize that by sharing parametric test information securely, they can achieve better yield management and process optimization without compromising their intellectual property.”

Conclusion

Advancements in metrology and testing are pivotal for the semiconductor industry’s continued growth and innovation. The integration of AI/ML, IoT, and big data analytics is transforming how manufacturers approach process control and yield improvement. As adoption of Industry 4.0 grows, the role of metrology will become even more critical in ensuring the efficiency, quality, and reliability of semiconductor devices. And by leveraging these advanced technologies, semiconductor manufacturers can achieve higher yields, reduce costs, and maintain the precision required in this competitive industry.

With continuous improvements and the integration of smart technologies, the semiconductor industry will keep pushing the boundaries of innovation, leading to more robust and capable electronic devices that define the future of technology. The journey toward a fully realized Industry 4.0 is ongoing, and its impact on semiconductor manufacturing undoubtedly will shape the future of the industry, ensuring it stays at the forefront of global technological advancements.

“Anytime you have new packaging technologies and process technologies that are evolving, you have a need for metrology,” says Perkins. “When you are ramping up new processes and need to make continuous improvements for yield, that is when you see the biggest need for new metrology solutions.”


Is the Future of Moore’s Law in a Particle Accelerator?

By: John Boyd


As Intel, Samsung, TSMC, and Japan’s upcoming advanced foundry Rapidus each make their separate preparations to cram more and more transistors into every square millimeter of silicon, one thing they all have in common is that the extreme ultraviolet (EUV) lithography technology underpinning their efforts is extremely complex, extremely expensive, and extremely costly to operate. A prime reason is that the source of this system’s 13.5-nanometer light is the precise and costly process of blasting flying droplets of molten tin with the most powerful commercial lasers on the planet.

But an unconventional alternative is in the works. A group of researchers at the High Energy Accelerator Research Organization, known as KEK, in Tsukuba, Japan, is betting EUV lithography might be cheaper, quicker, and more efficient if it harnesses the power of a particle accelerator.

Even before the first EUV machines had been installed in fabs, researchers saw possibilities for EUV lithography using a powerful light source called a free-electron laser (FEL), which is generated by a particle accelerator. However, not just any particle accelerator will do, say the scientists at KEK. They claim the best candidate for EUV lithography incorporates the particle-accelerator version of regenerative braking. Known as an energy recovery linear accelerator, it could enable a free electron laser to economically generate tens of kilowatts of EUV power. This is more than enough to drive not one but many next-generation lithography machines simultaneously, pushing down the cost of advanced chipmaking.

“The FEL beam’s extreme power, its narrow spectral width, and other features make it suitable as an application for future lithography,” Norio Nakamura, researcher in advanced light sources at KEK, told me on a visit to the facility.

Linacs Vs. Laser-Produced Plasma

Today’s EUV systems are made by a single manufacturer, ASML, headquartered in Veldhoven, Netherlands. When ASML introduced the first generation of these US $100-million-plus precision machines in 2016, the industry was desperate for them. Chipmakers had been getting by with workaround after workaround for the then most advanced system, lithography using 193-nm light. Moving to a much shorter, 13.5-nm wavelength was a revolution that would collapse the number of steps needed in chipmaking and allow Moore’s Law to continue well into the next decade.

The chief cause of the continual delays was a light source that was too dim. The technology that ultimately delivered a bright enough source of EUV light is called laser-produced plasma, or EUV-LPP. It employs a carbon dioxide laser to blast molten droplets of tin into plasma thousands of times per second. The plasma emits a spectrum of photonic energy, and specialized optics then capture the necessary 13.5-nm wavelength from the spectrum and guide it through a sequence of mirrors. Subsequently, the EUV light is reflected off a patterned mask and then projected onto a silicon wafer.

The experimental compact energy recovery linac at KEK uses most of the energy from electrons on a return journey to speed up a new set of electrons. Source: KEK

It all adds up to a highly complex process. And although it starts off with kilowatt-consuming lasers, the amount of EUV light that is reflected onto the wafer is just several watts. The dimmer the light, the longer it takes to reliably expose a pattern on the silicon. Without enough photons carrying the pattern, EUV would be uneconomically slow. And pushing too hard for speed can lead to costly errors.

When the machines were first introduced, the power level was enough to process about 100 wafers per hour. Since then, ASML has managed to steadily hike the output to about 200 wafers per hour for the present series of machines.
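
A toy throughput model makes the power-to-productivity link concrete. All of the numbers below are assumptions chosen only to land in a plausible range; real scanner throughput models account for dose margins, field layouts, stage dynamics, and much more:

```python
# Toy EUV throughput model (illustrative assumptions only, not ASML's model).
def wafers_per_hour(power_at_wafer_w, dose_mj_per_cm2=30.0, field_cm2=8.58,
                    fields_per_wafer=100, step_overhead_s=0.05, wafer_overhead_s=10.0):
    # exposure time per field (s) = energy per field (mJ) / power at wafer plane (mW)
    t_exposure = dose_mj_per_cm2 * field_cm2 / (power_at_wafer_w * 1000.0)
    t_wafer = fields_per_wafer * (t_exposure + step_overhead_s) + wafer_overhead_s
    return 3600.0 / t_wafer

# Doubling the usable power shortens exposure, but fixed overhead caps the gain.
print(round(wafers_per_hour(5.0)))    # ~179 wafers/hour with these assumed parameters
print(round(wafers_per_hour(10.0)))   # ~205 wafers/hour with these assumed parameters
```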

ASML’s current light sources are rated at 500 watts. But for the even finer patterning needed in the future, Nakamura says it could take 1 kilowatt or more. ASML says it has a road map to develop a 1,000-W light source. But it could be difficult to achieve, says Nakamura, who formerly led the beam dynamics and magnet group at KEK and came out of retirement to work on the EUV project.

Difficult but not necessarily impossible. Doubling the source power is “very challenging,” agrees Ahmed Hassanein, who leads the Center for Materials Under Extreme Environment at Purdue University, in Indiana. But he points out that ASML has achieved similarly difficult targets in the past using an integrated approach of improving and optimizing the light source and other components, and he isn’t ruling out a repeat.

In a free electron laser, accelerated electrons are subject to alternating magnetic fields, causing them to undulate and emit electromagnetic radiation. The radiation bunches up the electrons, leading to their amplifying only a specific wavelength, creating a laser beam. Source: Chris Philpot

But brightness isn’t the only issue ASML faces with laser-produced plasma sources. “There are a number of challenging issues in upgrading to higher EUV power,” says Hassanein. He rattles off several, including “contamination, wavelength purity, and the performance of the mirror-collection system.”

High operating costs are another problem. These systems consume some 600 liters of hydrogen gas per minute, most of which goes into keeping tin and other contaminants from getting onto the optics and wafers. (Recycling, however, could reduce this figure.)

But ultimately, operating costs come down to electricity consumption. Stephen Benson, recently retired senior research scientist at the Thomas Jefferson National Accelerator Facility, in Virginia, estimates that the wall-plug efficiency of the whole EUV-LPP system might be less than 0.1 percent. Free electron lasers, like the one KEK is developing, could be as much as 10 to 100 times as efficient, he says.

The Energy Recovery Linear Accelerator

The system KEK is developing generates light by boosting electrons to relativistic speeds and then deviating their motion in a particular way.

The process starts, Nakamura explains, when an electron gun injects a beam of electrons into a meters-long cryogenically cooled tube. Inside this tube, superconductors deliver radio-frequency (RF) signals that drive the electrons along faster and faster. The electrons then make a 180-degree turn and enter a structure called an undulator, a series of oppositely oriented magnets. (The KEK system currently has two.) The undulators force the speeding electrons to follow a sinusoidal path, and this motion causes the electrons to emit light.
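
The wavelength such an undulator emits follows the standard on-axis resonance relation (textbook accelerator physics, not a figure supplied by KEK):

```latex
\lambda = \frac{\lambda_u}{2\gamma^2}\left(1 + \frac{K^2}{2}\right)
% \lambda_u is the undulator period, \gamma the electron energy in units of the electron
% rest energy, and K the dimensionless undulator strength. Purely as an illustration,
% an 800 MeV beam (\gamma \approx 1566) with an assumed \lambda_u \approx 20\,\mathrm{mm}
% and K \approx 2.15 lands near 13.5\,\mathrm{nm}; KEK's actual design values may differ.
```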


In a linear accelerator, injected electrons gain energy from an RF field. Ordinarily, the electrons then enter a free electron laser and are immediately disposed of in a beam dump. But in an energy recovery linear accelerator (ERL), the electrons circle back into the RF field and lend their energy to newly injected electrons before exiting to a beam dump.

What happens next is a phenomenon called self-amplified spontaneous emission, or SASE. The light interacts with the electrons, slowing some and speeding up others, so they gather into “microbunches,” peaks in density that occur periodically along the undulator’s path. The now-structured electron beam amplifies only the light that’s in phase with the period of these microbunches, generating a coherent beam of laser light.

It’s at this point that KEK’s compact energy recovery linac (cERL) diverges from lasers driven by conventional linear accelerators. Ordinarily, the spent beam of electrons is disposed of by diverting the particles into what is called a beam dump. But in the cERL, the electrons first loop back into the RF accelerator. This returning beam is now in the opposite phase to newly injected electrons that are just starting their journey. The result is that the spent electrons transfer much of their energy to the new beam, boosting its energy. Once the original electrons have had some of their energy drained away like this, they are diverted into a beam dump.

“The acceleration energy in the linac is recovered, and the dumped beam power is drastically reduced compared to [that of] an ordinary linac,” Nakamura explains to me while scientists in another room operate the laser. Reusing the electrons’ energy means that for the same amount of electricity the system sends more current through the accelerator and can fire the laser more frequently, he says.

Other experts agree. The energy-recovery linear accelerator’s improved efficiency can lower costs, “which is a major concern of using EUV laser-produced plasma,” says Hassanein.

The Energy Recovery Linac for EUV

The KEK compact energy-recovery linear accelerator was initially constructed between 2011 and 2013 with the aim of demonstrating its potential as a synchrotron radiation source for researchers working for the institution’s physics and materials-science divisions. But researchers were dissatisfied with the planned system, which had a lower performance target than could be achieved by some storage ring-based synchrotrons—huge circular accelerators that keep a beam of electrons moving with a constant kinetic energy. So, the KEK researchers went in search of a more appropriate application. After talking with Japanese tech companies, including Toshiba, which had a flash memory chip division at the time, the researchers conducted an initial study that confirmed that a kilowatt-class light source was possible with a compact energy-recovery linear accelerator. And so, the EUV free-electron-laser project was born. In 2019 and 2020, the researchers modified the existing experimental accelerator to start the journey to EUV light.

The system is housed in an all-concrete room to protect researchers from the intense electromagnetic radiation produced during operation. The room is some 60 meters long and 20 meters wide with much of the space taken up by a bewildering tangle of complex equipment, pipes, and cables that snakes along both sides of its length in the form of an elongated racetrack.

The accelerator is not yet able to generate EUV wavelengths. With an electron beam energy of 17 megaelectronvolts, the researchers have been able to generate SASE emissions in bursts of 20-micrometer infrared light. Early test results were published in the Japanese Journal of Applied Physics in April 2023. The next step, which is underway, is to generate much greater laser power in continuous-wave mode.

To be sure, 20 micrometers is a far cry from 13.5 nanometers. And there are already types of particle accelerators that produce synchrotron radiation of even shorter wavelengths than EUV. But lasers based on energy-recovery linear accelerators could generate significantly more EUV power due to their inherent efficiency, the KEK researchers claim. In synchrotron radiation sources, light intensity increases proportionally to the number of injected electrons. By comparison, in free-electron laser systems, light intensity increases roughly with the square of the number of injected electrons, resulting in far greater brightness and power.
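
Written out, the scaling the researchers point to is, in idealized form (ignoring saturation and other real-world limits):

```latex
P_{\mathrm{incoherent}} \propto N_e \quad \text{(ordinary synchrotron radiation)}
\qquad
P_{\mathrm{coherent}} \propto N_e^{2} \quad \text{(microbunched FEL emission)}
```

where N_e is the number of electrons radiating in phase, which is why microbunching pays off so dramatically in output power.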

For an energy-recovery linear accelerator to reach the EUV range will require equipment upgrades beyond what KEK currently has room for. So, the researchers are now making the case for constructing a new prototype system that can reach the required beam energy of 800 MeV.

An electron gun injects charge into the compact energy recovery linear accelerator at KEK. Source: KEK

In 2021, before severe inflation affected economies around the globe, the KEK team estimated the construction cost (excluding land) of a system that delivers 10 kW of EUV power and supplies multiple lithography machines at 40 billion yen ($260 million). Annual running costs were judged to be about 4 billion yen. So even taking recent inflation into account, “the estimated costs per exposure tool in our setup are still rather low compared to the estimated costs” for today’s laser-produced plasma source, says Nakamura.

There are plenty of technical challenges to work out before such a system can achieve the high levels of performance and stability of operations demanded by semiconductor manufacturers, admits Nakamura. The team will have to develop new editions of key components such as the superconducting cavity, the electron gun, and the undulator. Engineers will also have to develop good procedural techniques to ensure, for instance, that the electron beam does not degrade or falter during operations.

And to ensure their approach is cost effective enough to grab the attention of chipmakers, the researchers will need to create a system that can reliably transport more than 1 kW of EUV power simultaneously to multiple lithography machines. The researchers already have a conceptual design for an arrangement of special mirrors that would convey the EUV light to multiple exposure tools without significant loss of power or damage to the mirrors.

Other EUV Possibilities

It’s too early in the development of EUV free-electron lasers for rapidly expanding chipmakers to pay it much attention. But the KEK team is not alone in pursuing the technology. xLight, a venture-backed startup in Palo Alto, Calif., is also chasing it. The company, which is packed with particle-accelerator veterans from the Stanford Linear Accelerator and elsewhere, recently inked an R&D deal with Fermi National Accelerator Laboratory, in Illinois, to develop superconducting cavities and cryomodule technology. Attempts to contact xLight went unanswered, but in January, the company took part in the 8th Workshop EUV-FEL in Tokyo, and former CEO Erik Hosler gave a presentation on the technology.

Significantly, ASML considered turning to particle accelerators a decade ago and again more recently when it compared the progress of free-electron laser technology to the laser-produced plasma road map. But company executives decided LPP presented fewer risks.

And, indeed, it is a risky road. Independent views on KEK’s project emphasize that reliability and funding will be the biggest challenges the researchers face going forward. “The R&D road map will involve numerous demanding stages in order to develop a reliable, mature system,” says Hassanein. “This will require serious investment and take considerable time.”

“The machine design must be extremely robust, with redundancy built in,” adds retired research scientist Benson. “The design must also ensure that components are not damaged from radiation or laser light.” And this must be accomplished “without compromising performance, which must be good enough to ensure decent wall-plug efficiency.”

More importantly, Benson warns that without a forthcoming commitment to invest in the technology, “development of EUV-FELs might not come in time to help the semiconductor industry.”

Computational Lithography Solutions To Enable High NA EUV

By: Synopsys

This white paper identifies and discusses the computational needs required to support the development, optimization, and implementation of high NA extreme ultraviolet (EUV) lithography. It explores the challenges associated with the increased complexity of high NA systems, proposes potential solutions, and highlights the importance of computational lithography in driving the success of advanced EUV lithography technologies.



Lilbits: Intel Foundry’s new roadmap, Google Pixel Fold 2, and the OnePlus Watch 2

Intel had a rough few years a while back, when the company struggled to meet its original goals for moving from 14nm to 10nm and wound up shipping multiple generations of processors manufactured on a 14nm node while competitors were moving to smaller and more efficient processes. But Intel says its Foundry business is back […]

