
How India Is Starting a Chip Industry From Scratch

28 July 2024 at 17:01 · IEEE Spectrum · Samuel K. Moore


In March, India announced a major investment to establish a semiconductor-manufacturing industry. With US $15 billion in investments from companies, state governments, and the central government, India now has plans for several chip-packaging plants and the country’s first modern chip fab as part of a larger effort to grow its electronics industry.

But turning India into a chipmaking powerhouse will also require a substantial investment in R&D. And so the Indian government turned to IEEE Fellow and retired Georgia Tech professor Rao Tummala, a pioneer of some of the chip-packaging technologies that have become critical to modern computers. Tummala spoke with IEEE Spectrum during the IEEE Electronic Component Technology Conference in Denver, Colo., in May.

Rao Tummala


Rao Tummala is a pioneer of semiconductor packaging and a longtime research leader at Georgia Tech.

What are you helping the government of India to develop?

Rao Tummala: I’m helping to develop the R&D side of India’s semiconductor efforts. We picked 12 strategic research areas. If you explore research in those areas, you can make almost any electronic system. For each of those 12 areas, there’ll be one primary center of excellence. And that’ll be typically at an IIT (Indian Institute of Technology) campus. Then there’ll be satellite centers attached to those throughout India. So when we’re done with it, in about five years, I expect to see probably almost all the institutions involved.

Why did you decide to spend your retirement doing this?

Tummala: It’s my giving back. India gave me the best education possible at the right time.

I’ve been going to India and wanting to help for 20 years. But I wasn’t successful until the current government decided they’re going to make manufacturing and semiconductors important for the country. They asked themselves: What would be the need for semiconductors, in 10 years, 20 years, 30 years? And they quickly concluded that if you have 1.4 billion people, each consuming, say, $5,000 worth of electronics each year, it requires billions and billions of dollars’ worth of semiconductors.
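For a rough sense of that arithmetic, here is a minimal back-of-the-envelope sketch in Python; the population and per-capita figures come from the interview, while the semiconductor share of electronics value is an illustrative assumption, not a figure Tummala gave.

```python
# Back-of-the-envelope sizing of India's future chip demand (illustrative only).
population = 1.4e9               # people, figure from the interview
electronics_per_capita = 5_000   # USD/year, figure from the interview
semis_share = 0.20               # assumed semiconductor share of electronics value

electronics_market = population * electronics_per_capita
chip_demand = electronics_market * semis_share

print(f"electronics: ${electronics_market / 1e12:.1f} trillion/year")
print(f"implied chips: ${chip_demand / 1e12:.1f} trillion/year")
# -> $7.0 trillion/year of electronics, roughly $1.4 trillion/year of chips
```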

“It’s my giving back. India gave me the best education possible at the right time.” —Rao Tummala, advisor to the government of India

What advantages does India have in the global semiconductor space?

Tummala: India has the best educational system in the world for the masses. It produces the very best students in science and engineering at the undergrad level and lots of them. India is already a success in design and software. All the major U.S. tech companies have facilities in India. And they go to India for two reasons. It has a lot of people with a lot of knowledge in the design and software areas, and those people are cheaper [to employ].

What are India’s weaknesses, and is the government response adequate to overcome them?

Tummala: India is clearly behind in semiconductor manufacturing. It’s behind in knowledge and behind in infrastructure. Government doesn’t solve these problems. All that the government does is set the policies and give the money. This has given companies incentives to come to India, and therefore the semiconductor industry is beginning to flourish.

Will India ever have leading-edge chip fabs?

Tummala: Absolutely. Not only will it have leading-edge fabs, but in about 20 years, it will have the most comprehensive system-level approach of any country, including the United States. In about 10 years, the size of the electronics industry in India will probably have grown about 10 times.

This article appears in the August 2024 print issue as “5 Questions for Rao Tummala.”


Making Adaptive Test Work Better

10 June 2024 at 09:15 · Semiconductor Engineering · Ed Sperling

One of the big challenges for IC test is making sense of mountains of data, a direct result of more features being packed onto a single die, or multiple chiplets being assembled into an advanced package. Collecting all that data through various agents and building models on the tester no longer makes sense for a couple reasons — there is too much data, and there are multiple customers using the same equipment. Steve Zamek, director of product management at PDF Solutions, and Eli Roth, product manager at Teradyne, explain how to optimize testing around different data sources, how to partition that data between the edge and the cloud, and how to ensure it remains secure.

The post Making Adaptive Test Work Better appeared first on Semiconductor Engineering.


Integrating Digital Twins In Semiconductor Operations

22 February 2024 at 09:01 · Semiconductor Engineering

By Mark da Silva, Nishita Rao and Karim Somani

Chipmakers must adopt transformative technologies including Digital Twins (DT) to keep pace with unprecedented global semiconductor industry growth that is expected to drive its total market value to $1 trillion[1] as soon as 2030. Leveraging predictive modeling and other efficiency-enhancing innovations, DTs promise to optimize semiconductor design, manufacturing processes and equipment maintenance while improving overall operational efficiency.

With DTs rising in prominence as a critical enabler of industry growth, key players from across the semiconductor ecosystem – including OEMs, platforms and end users – gathered at the Semiconductor Digital Twin Workshop last December at SEMI headquarters in Milpitas, Calif., to discuss the latest DT developments and explore the path to advancing the technology.

Following are highlights from the sold-out event hosted by the SEMI Smart Manufacturing Initiative.

Key takeaways

  • Industry Alignment on DT Definition and Taxonomy
    • The semiconductor industry needs to align on the definition and taxonomy of DTs in semiconductor operations.
    • With collaboration crucial to advances in DTs, the industry must come together to develop a common understanding of the technology.
  • Data Sharing for Sustainability Improvements
    • Sharing data among various chip ecosystem players will be vital to driving sustainability improvements.
    • Focusing on equipment and operational DTs with sustainability in mind will help foster collaboration among industry stakeholders.
  • Advocacy for Standardized DT Architecture and Framework
    • A standardized DT framework architecture must be established to enhance interoperability, reliability, synchronization, and security.
    • The adoption of digital twin technical standards is in its early stages but increasing in importance as DT technology evolves.
    • Collaboration will be essential to accelerate the availability and adoption of several digital twin technical standards under development by SEMI and other Standards Development Organizations (SDOs).

Key challenges

  • Robust DT Framework and Overcoming Development Silos
    • Establishing a robust DT framework and overcoming isolated development silos in microelectronics are challenges the industry must overcome.
  • Managing Unclean Factory Data
    • Challenges include managing unclean factory data, varying data granularity, and addressing the lifecycle of data models.
  • Sharing Data Between Tools and Process Steps
    • Data sharing between various semiconductor tools and process steps must be seamless. Data provenance is critical for DT accuracy and validation.
  • Legacy Factories & Small/Medium Firms
    • Factories with older generation tools and processes have a unique challenge in developing process level DTs for existing products.

Workshop sessions

The workshop consisted of four sessions focused on DT efforts by equipment makers, solution providers, device makers, and factory integration providers.

Equipment-level digital twins session

The session focused on OEM efforts to develop tool-level DTs and highlighted the potential to improve efficiency, performance, and sustainability. The session also featured discussions on equipment-level data sharing, standards, and interoperability challenges that need to be addressed. Speakers included IRDS Co-Chair Supika Mashiro of TEL, Ala Moradian of Applied Materials, Joseph Ervin of Lam Research, Sean Glazier of Onto Innovation, Basil Milton and Chan-Pin Chong of Kulicke & Soffa, and Mark Huntington of McKinsey & Company.

Session speakers: (L) Supika Mashiro, TEL, and (R) Ala Moradian, AMAT.

Speakers discussed existing DTs deployed in manufacturing, such as run-to-run (R2R) control, virtual metrology, and predictive maintenance (PdM), and the need for standardized DTs that can communicate with each other. Tool-level DT solutions such as Applied Materials’ EcoTwin within the AppliedTwin platform provide a virtualized replica of chipmaking equipment for development and improvement of chip-level processes. The platform has also demonstrated extensibility to sustainability analysis, a significant development.

Other focus areas were the connectivity of DTs across different levels (tools to factories) and the use of AI to make them self-adjusting for manufacturing processes. The importance of DT infrastructure was also raised, along with significant challenges such as ensuring clean and accessible data, data flow, and the communication needed to keep DTs synchronized. In the back-end, OEMs are making steady progress in virtualizing various tools such as wire bonding. The session also highlighted DTs as a major investment across industries, with huge potential in chipmaking. Building a strong data-sharing foundation is key to success.

Chamber process, operations and planning level digital twin session

The session was led by solution providers from across the semiconductor ecosystem that develop tools to facilitate DTs at various hierarchical levels. The providers offer a variety of products and services spanning process physics-based models, chamber processes, and operations, as well as planning modeling approaches to help companies implement and manage DTs. The session included technical details of DT models and their potential impact on the entire manufacturing process.

While the session made clear that DTs promise to revolutionize the semiconductor industry, it also made clear that the industry must overcome significant technical challenges in integrating DTs into day-to-day operations. Speakers included Sarbajit Ghosal of SC Solutions, Norman Chang of Ansys, Holland Smith of INFICON, Chandra Reddy of IBM Research, Jon Herlocker of TIGNIS, Ken Smerz of ZELUS and John Behnke of INFICON.

Speakers emphasized the need for fast, accurate, multi-physics-based (and data-assisted) DTs for real-time control and monitoring, DTs that react instantly to changes just like the physical equipment. Think of it as having a virtual process line that can predict how different processes will interact. Sitting on top of the DTs are AI-powered (physics- and/or data-driven) models that can then be harnessed to optimize manufacturing processes and predict yield.

Speakers also discussed operational-level DTs and the need for a central hub for all factory operational data to boost efficiency, maximize productivity, and reduce waste – all critical as the number of fabs grows in the years ahead. Construction DTs for pre-construction planning in the building of new chip fabs or expanding brown-field sites provide a preconstruction virtual blueprint that can help identify potential problems early on and minimize time to wafer starts. Lastly, how these various levels of DTs are integrated vertically within a factory plays a key role in making decisions about autonomous fabs.

Digital twin adoption and implementation session

The session was led by device makers and owners of fabs, where DTs are critical for improving productivity by predicting yield, quality, and efficiency. A process-level DT enables a virtual representation of a product’s process flow in the fab, and it can be used to speed integration efforts (MRL 5-7), simulate specific outcomes, and optimize operations. Imagine a future where chip fabs are run by AI agents, with virtual models predicting problems before they happen and optimizing processes on the fly. That’s the vision shared by the session’s expert speakers. Their insights painted a fascinating picture of what’s next for the semiconductor industry. Speakers included Professor H.-S. Philip Wong of Stanford University, Steven J. Meyer of Intel, Jae Yong Park of Samsung, Rosa Javadi of JABIL, Professor Amit Lal and Peter Doerschuk of Cornell University, Ben Davaji of Northeastern University, Pushkar Apte of SEMI and Bobby Mitra of Deloitte.

Session speakers: (L) Steven J. Meyer, Intel, and (R) Jae Yong Park, Samsung.

The key development target is advanced AI-assisted manufacturing in which three layers of virtual models – processes, tools, and the entire fab itself – all work together seamlessly. This ambitious vision aligns with the National Semiconductor Technology Center (NSTC) DT Grand Challenge, which focuses on generating, sharing, and using data effectively. Intel’s AFS Software Suite, which includes high-speed simulators and graphical models to enable better planning and decision-making across multiple sites, is a real-world example of DTs used in today’s fabs.

Use cases of deploying AI to improve Automated Material Handling Systems (AMHS) asset utilization by 30% have also been demonstrated in real-world fab environments. The session highlighted the importance of scheduling with AI-powered DTs and standardizing data availability across the industry. Other impressive product development use case studies shared included a rapid COVID-19 tester system development and a global supply chain DT.

Speakers described how challenges such as infrastructure readiness, talent gaps, and data privacy concerns are slowing industrywide adoption. They also discussed efforts to develop an open-access academic cleanroom dedicated to developing and testing DT models for lithography and etching processes, with investigation of federated learning to address data privacy and sharing concerns. The experts characterized the hierarchy of DT types as a framework based on the ISA-95 standard to ensure seamless communication and collaboration between DTs across various levels, from process development to production. This interconnected approach could revolutionize chipmaking across the entire enterprise, as demonstrated by an example showing DTs spanning the enterprise.
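As a rough illustration of the hierarchy the experts described, here is a hypothetical mapping of DT types onto ISA-95 levels in Python; the level assignments and the level-crossing helper are our own reading of ISA-95 for illustration, not the workshop’s official framework.

```python
from enum import IntEnum

class ISA95Level(IntEnum):
    """ISA-95 automation levels, from the physical process up to the enterprise."""
    PROCESS = 0      # the physical process itself
    SENSING = 1      # sensors and actuators
    CONTROL = 2      # supervisory/tool control
    OPERATIONS = 3   # manufacturing operations management (MES)
    ENTERPRISE = 4   # business planning and logistics (ERP)

# Hypothetical placement of digital-twin types on those levels.
DT_LEVELS = {
    "chamber process twin": ISA95Level.CONTROL,
    "equipment twin": ISA95Level.CONTROL,
    "factory operations twin": ISA95Level.OPERATIONS,
    "supply chain twin": ISA95Level.ENTERPRISE,
}

def levels_crossed(a: str, b: str) -> int:
    """Number of hierarchy levels data must traverse between two twins."""
    return abs(DT_LEVELS[a] - DT_LEVELS[b])

print(levels_crossed("chamber process twin", "supply chain twin"))  # -> 2
```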

Digital twin connectivity and platform integration session

The session focused on a variety of product and service offerings by cloud, facilities, and supply chain solution providers that help companies implement and manage DTs of various levels. These solutions include integration, connectivity, and security, as well as horizontal integration across the supply chain. Almost all speakers pointed to the importance of standardization efforts as crucial for future development. Speakers included Rad Desiraju of Microsoft, Gautham Unni of AWS, David Gross and Srividya Jayaram of Siemens, Slava Libman of FTD Solutions, Becky Kelderman of Rockwell Automation, Ram Walvekar of HCL Technologies, and Paul Trio of SEMI International Standards.

Session speakers touched on definitions and categorization of DTs, including types and uses, as well as building dedicated infrastructure to support their development. The experts highlighted a few DT development challenges in areas such as data sources and provenance, as well as visualization and shared their solution offerings for creating, connecting, and maintaining these digital twins both vertically and horizontally within an enterprise.

The presenters also shared use cases on how DTs bridge design and manufacturing, enabling simulations and faster production, and how connecting DTs for various assets, processes, and products creates a holistic view. Session speakers also discussed a DT maturity scorecard that enables players from across the supply chain to track their progress and identify areas for improvement. Facility-level DT use cases for water management in fabs to promote sustainability were also a topic of discussion.

The semiconductor industry’s commitment to digital twins

The Semiconductor Digital Twin Workshop showcased the industry’s commitment to adopting and advancing the technology. Continued collaboration and adherence to standards and sustainable practices will play a crucial role in unlocking the full potential of DT technology in semiconductor manufacturing.

SEMI thanks the speakers who provided access to their material presented at the workshop. Visit Semiconductor Digital Twin Workshop OnDemand | SEMI for the workshop materials.

Reference

  1. Ondrej Burkacky, Julia Dragon, and Nikolaus Lehmann, “The semiconductor decade: A trillion-dollar industry,” McKinsey & Company (blog), April 1, 2021.

Nishita Rao is senior product marketing manager at SEMI.

Karim Somani is program manager at SEMI.  

The post Integrating Digital Twins In Semiconductor Operations appeared first on Semiconductor Engineering.


Tackling Variability With AI-based Process Control

22 February 2024 at 09:01 · Semiconductor Engineering · Anne Meixner

Jon Herlocker, co-founder and CEO of Tignis, sat down with Semiconductor Engineering to talk about how AI in advanced process control reduces equipment variability and corrects for process drift. What follows are excerpts of that conversation.

SE: How is AI being used in semiconductor manufacturing and what will the impact be?

Herlocker: AI is going to create a completely different factory. The real change is going to happen when AI gets integrated, from the design side all the way through the manufacturing side. We are just starting to see the beginnings of this integration right now. One of the biggest challenges in the semiconductor industry is it can take years from the time an engineer designs a new device to that device reaching high-volume production. Machine learning is going to cut that in half, or even to a quarter. The AI technology that Tignis offers today accelerates that very last step — high-volume manufacturing. Our customers want to know how to tune their tools so that every time they process a wafer the process is in control. Traditionally, device makers get the hardware that meets their specifications from the equipment manufacturer, and then the fab team gets their process recipes working. Depending on the size of the fab, they try to physically replicate that process in a ‘copy exact’ manner, which can take a lot of time and effort. But now device makers can use machine learning (ML) models to autonomously compensate for the differences in equipment variation to produce the exact same outcome, but with significantly less effort by process engineers and equipment technicians.

SE: How is this typically done?

Herlocker: A classic APC system on the floor today might model three input parameters using linear models. But if you need to model 20 or 30 parameters, these linear models don’t work very well. With AI controllers and non-linear models, customers can ingest all of their rich sensor data that shows what is happening in the chamber, and optimally modulate the recipe settings to ensure that the outcome is on-target. AI tools such as our PAICe Maker solution can control any complex process with a greater degree of precision.
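To make that contrast concrete, here is a minimal sketch in Python comparing a linear model against a non-linear one on synthetic chamber data with 30 inputs. This is a toy illustration of the modeling gap Herlocker describes, not Tignis’s actual PAICe Maker implementation; the data and model choices are assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic chamber data: 30 sensor/recipe parameters, non-linear response.
X = rng.normal(size=(2000, 30))
y = np.sin(X[:, 0]) * X[:, 1] + 0.5 * X[:, 2] ** 2 + 0.05 * X[:, 3:].sum(axis=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

linear = LinearRegression().fit(X_tr, y_tr)
nonlinear = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

print(f"linear R^2:     {linear.score(X_te, y_te):.2f}")    # misses interactions
print(f"non-linear R^2: {nonlinear.score(X_te, y_te):.2f}") # captures them
```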

SE: So, the adjustments AI process control software makes is to tweak inputs to provide consistent outputs?

Herlocker: Yes, I preach this all the time. By letting AI automate the tasks that were traditionally very manual and time-consuming, engineers and technicians in the fab can remove a lot of the manual precision tasks they needed to do to control their equipment, significantly reducing module operating costs. AI algorithms also can help identify integration issues — interacting effects between tools that are causing variability. We look at process control from two angles. Software can autonomously control the tool by modulating the recipe parameters in response to sensor readings and metrology. But your autonomous control cannot control the process if your equipment is not doing what it is supposed to do, so we developed a separate AI learning platform that ensures equipment is performing to specification. It brings together all the different data silos across the fab – the FDC trace data, metrology data, test data, equipment data, and maintenance data. The aggregation of all that data is critical to understanding the causes of variation in equipment. This is where ML algorithms can automatically sift through massive amounts of data to help process engineers and data scientists determine what parameters are most influencing their process outcomes.
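One common way ML “sifts through” aggregated fab data to rank influential parameters is feature importance from a tree ensemble. A hedged sketch follows; the column names and data are invented stand-ins for the merged FDC, metrology, and maintenance silos, not any real fab schema.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Stand-in for merged FDC trace, metrology, and maintenance data silos.
fab = pd.DataFrame(
    rng.normal(size=(5000, 4)),
    columns=["rf_power", "chamber_pressure", "gas_flow", "hours_since_clean"],
)
# Synthetic outcome: thickness driven mostly by RF power and chamber aging.
fab["thickness"] = (
    2.0 * fab["rf_power"] - 1.5 * fab["hours_since_clean"]
    + 0.1 * fab["gas_flow"] + rng.normal(scale=0.2, size=len(fab))
)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(fab.drop(columns="thickness"), fab["thickness"])

# Rank parameters by how much they influence the modeled outcome.
for name, imp in sorted(
    zip(fab.columns[:-1], model.feature_importances_), key=lambda t: -t[1]
):
    print(f"{name:>20s}: {imp:.2f}")
```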

SE: Which process tools benefit the most from AI modeling of advanced process control?

Herlocker: We see the most interest in thin film deposition tools. The physics involved in plasma etching and plasma-enhanced CVD is non-linear. That is why you can get much better control with ML modeling. You also can model how the process and equipment evolve over time. For example, every time you run a batch through the PECVD chamber you get some amount of material accumulation on the chamber walls, and that changes the physics and chemistry of the process. AI can build a predictive model of that chamber. In addition to reacting to what it sees in the chamber, it also can predict what the chamber is going to look like for the next run, and now the ML model can tweak the input parameters before you even see the feedback.
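A standard baseline for this kind of run-to-run behavior is an EWMA controller extended with a drift term, which pre-corrects the recipe before the next run. A minimal sketch follows; the gain, smoothing weight, and drift rate are invented for illustration and are not taken from the interview.

```python
# EWMA run-to-run controller with a simple drift term (illustrative values).
target = 100.0        # desired film thickness, nm
gain = 0.8            # process gain: nm of thickness per unit of recipe input
lam = 0.4             # EWMA smoothing weight
drift_per_run = -0.5  # assumed thickness lost per run as walls accumulate

offset_est = 0.0      # running estimate of the process offset
recipe = target / gain

for run in range(1, 11):
    true_offset = drift_per_run * run          # hidden chamber state
    measured = gain * recipe + true_offset     # metrology after the run
    # Update the offset estimate from the new measurement (EWMA).
    offset_est = lam * (measured - gain * recipe) + (1 - lam) * offset_est
    # Predict the next run's offset by extrapolating the drift, then pre-correct.
    predicted = offset_est + drift_per_run
    recipe = (target - predicted) / gain
    print(f"run {run:2d}: measured={measured:6.2f}, next recipe={recipe:7.2f}")
```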

SE: How do engineers react to the idea that the AI will be shifting the tool recipe?

Herlocker: That is a good question. Depending on the customer, they have different levels of comfort about how frequently things should change, and how much human oversight there needs to be for that change. We have seen everything from, ‘Just make a recommendation and one of our engineers will decide whether or not to accept that recommendation,’ to adjusting the recipe once a day, to autonomously adjusting for every run. The whole idea behind these adjustments is for variability reduction and drift management, and customers weigh the targeted results versus the perceived risk of taking a novel approach.

SE: Does this involve building confidence in AI-based approaches?

Herlocker: Absolutely, and our systems have a large number of fail-safes, and some limits are hard-coded. We have people with PhDs in chemical engineering and material science who have operated these tools for years. These experts understand the physics of what is happening in these tools, and they have the practical experience to know what level of change can be expected or not.

SE: How much of your modeling is physics-based?

Herlocker: In the beginning, all of our modeling was physics-based, because we were working with equipment makers on their next-generation tools. But now we are also bringing our technology to device makers, where we can also deliver a lot of value by squeezing the most juice out of a data-driven approach. The main challenge with physics models is they are usually IP-protected. When we work with equipment makers, they typically pay us to build those physics-based models so they cannot be shared with other customers.

SE: So are your target customers the toolmakers or the fabs?

Herlocker: They are both our target customers. Most of our sales and marketing efforts are focused on device makers with legacy fabs. In most cases, the fab manager has us engage with their team members to do an assessment. Frequently, that team includes a cross section of automation, process, and equipment teams. The automation team is most interested in reducing the time to detect some sort of deviation that is going to cause yield loss, scrap, or tool downtime. The process and equipment engineers are interested in reducing variability or controlling drift, which also increases chamber life.

For example, let’s consider a PECVD tool. As I mentioned, every time you run the process, byproducts such as polymer materials build up on the chamber walls. You want a thickness of x in your deposition, but you are getting a slightly different wafer thickness uniformity due to drift of that chamber because of plasma confinement changes. Eventually, you must shut down the tool, wet clean the chamber, replace the preventive maintenance kit parts, and send them through the cleaning loop (i.e., to the cleaning vendor shop). Then you need to season the chamber and bring it back online. By controlling the process better, the PECVD team does not have to vent the chamber as often to clean parts. Just a 5% increase in chamber life can be quite significant from a maintenance cost reduction perspective (parts spend, refurb spend, cleaning spend, and so on). Reducing variability has a similarly large impact, particularly if it is a bottleneck tool, because then that reduction directly contributes to higher or more stable yields via more ‘sweet spot’ processing time, and sometimes better wafer throughput due to the longer chamber lifetime. The ROI story is more nuanced on non-bottleneck tools because they don’t modulate fab revenue, but the ROI is still there. It is just more about chamber life stability.
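A toy version of that maintenance arithmetic follows; only the 5 percent life-extension figure comes from the interview, and every cost and interval below is an invented placeholder.

```python
# Illustrative chamber-maintenance cost model (all figures assumed).
runs_per_clean = 1000    # runs between wet cleans
cost_per_clean = 50_000  # parts + refurb + cleaning loop, USD
runs_per_year = 80_000   # runs per chamber per year

def annual_clean_cost(life_multiplier: float) -> float:
    """Yearly wet-clean spend given a chamber-life multiplier."""
    cleans = runs_per_year / (runs_per_clean * life_multiplier)
    return cleans * cost_per_clean

baseline = annual_clean_cost(1.00)
improved = annual_clean_cost(1.05)  # 5% longer chamber life
print(f"baseline: ${baseline:,.0f}/yr, improved: ${improved:,.0f}/yr")
print(f"savings:  ${baseline - improved:,.0f}/yr per chamber")
```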

SE: Where does this go next?

Herlocker: We also are working with OEMs on next-generation toolsets. Using AI/ML as the core of process control enables equipment makers to control processes that are impossible to implement with existing control strategies and software. For example, imagine on each process step there are a million different parameters that you can control. Further imagine that changing any one parameter has a global effect on all the other parameters, and only by co-varying all the million parameters in just the right way will you get the ideal outcome. And to further complicate things, toss in run-to-run variance, so that the right solution continues to change over time. And then there is the need to do this more than 200 times per hour to support high-volume manufacturing. AI/ML enables this kind of process control, which in turn will enable a step function increase in the ability to produce more complex devices more reliably.

SE: What additional changes do you see from AI-based algorithms?

Herlocker: Machine learning will dramatically improve the agility and productivity of the facility broadly. For example, process engineers will spend less time chasing issues and have more time to implement continuous improvement. Maintenance engineers will have time to do more preventive maintenance. Agility and resiliency — the ability to rapidly adjust to or maintain operations, despite disturbances in the factory or market — will increase. If you look at ML combined with upcoming generative AI capabilities, within a year or two we are going to have agents that effectively will understand many aspects of how equipment or a process works. These agents will make good engineers great, and enable better capture, aggregation, and transfer of manufacturing knowledge. In fact, we have some early examples of this running in our labs. These ML agents capture and ingest knowledge very quickly. So when it comes to implementing the vision of smart factories, machine learning automation will have a massive impact on manufacturing in the future.

The post Tackling Variability With AI-based Process Control appeared first on Semiconductor Engineering.


Figuring Out Semiconductor Manufacturing's Climate Footprint

9 February 2024 at 17:44 · IEEE Spectrum · Samuel K. Moore


Samuel K. Moore Hi. I’m Samuel K. Moore for IEEE Spectrum’s Fixing the Future podcast. Before we start, I want to tell you that you can get the latest coverage from some of Spectrum’s most important beats, including AI, climate change, and robotics, by signing up for one of our free newsletters. Just go to spectrum.ieee.org/newsletters to subscribe. The semiconductor industry is in the midst of a major expansion driven by the seemingly insatiable demands of AI, the addition of more intelligence in transportation, and national security concerns, among many other things. Governments and the industry itself are starting to worry what this expansion might mean for chip-making’s carbon footprint and its sustainability generally. Can we make everything in our world smarter without worsening climate change? I’m here with someone who’s helping figure out the answer. Lizzie Boakes is a life cycle analyst in the Sustainable Semiconductor Technologies and Systems Program at IMEC, the Belgium-based nanotech research organization. Welcome, Lizzie.

Lizzie Boakes: Hello.

Moore: Thanks very much for coming to talk with us.

Boakes: You’re welcome. Pleasure to be here.

Moore: So let’s start with, just how big is the carbon footprint of the semiconductor industry? And is it really big enough for us to worry about?

Boakes: Yeah. So quantifying the carbon footprint of the semiconductor industry is not an easy task at all, and that’s because semiconductors are now embedded in so many industries. The most obvious industry is the ICT industry, which is estimated at approximately 3 percent of global emissions. However, semiconductors can also be found in so many other industries, and their embedded nature is increasing dramatically. They’re embedded in automotive, they’re embedded in healthcare applications, even in aerospace and defense applications too. So the expansion and adoption of semiconductors in all of these different industries just makes it very hard to quantify.

And the global impact of the semiconductor chip manufacturing itself is expected to increase as well, because we need more and more of these chips. The global chip market is projected to have a 7 percent compound annual growth rate in the coming years. And bear in mind that the manufacturing of the IC chips itself often accounts for the largest share of the life cycle climate impact, especially for consumer electronics. This increase in demand for so many chips, and for the manufacturing of those chips, will have a significant effect on the climate impact of the semiconductor industry. So it’s really crucial that we focus on this, identify the challenges, and work towards reducing the impact to achieve any of our ambitions of reaching net zero before 2050.
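For scale, compounding that growth rate shows the market roughly doubling over a decade; a quick arithmetic check (the 7 percent figure is from the transcript, the horizons are ours):

```python
growth = 1.07  # 7% CAGR cited in the transcript
for years in (5, 10, 20):
    print(f"after {years:2d} years: x{growth ** years:.2f}")
# after  5 years: x1.40
# after 10 years: x1.97  (roughly doubled)
# after 20 years: x3.87
```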

Moore: Okay. So the way you looked at this, it was sort of a— it was cradle-to-gate life cycle. Can you sort of explain what that entails, what that really means?

Boakes: Yeah. So cradle to gate here means that we quantify the climate impacts, not only of the IC manufacturing processes that occur inside the semiconductor fab, but also we quantify the embedded impact of all of the energy and material flows that are entering the fab that are necessary for the fab to operate. So in other words, we try to quantify the climate impact of the value chain upstream to the fab itself, and that’s where the cradle begins. So the extraction of all of the materials that you need, all of the energy sources. For instance, the extraction of coal for electricity production. That’s the cradle. And the gate refers to the point where you stop the analysis, you stop the quantification of the impact. And in our case, that is the end of the processing of the silicon wafer for a specific technology node.

Moore: Okay. So it stops basically when you’ve got the die, but it hasn’t been packaged and put in a computer.

Boakes: Exactly.

Moore: And so why do you feel like you have to look at all the upstream stuff that a chip-maker may not really have any control over, like coal and such like that?

Boakes: So there is a big need to analyze your scope through what the Greenhouse Gas Protocol calls the three different scopes. Your scope one is your direct emissions. Your scope two is the emissions related to the production of the electricity that you have consumed in your operation. And scope three is basically everything else, and a lot of people start with scope three, all of their upstream materials. It’s obviously the largest scope because it’s everything other than what you’re doing. And I think it’s necessary to coordinate your supply chain so that you make sure you’re pursuing the most sustainable solution that you can. You have power in your purchasing; you have power over how you choose your supply chain. And if you can steer it in a way that reduces emissions, then that should be done. Often, scope three is the largest proportion of the total impact, A, because it’s one of the biggest groups, but B, because there are a lot of materials and things coming in. So yeah, it’s necessary to look up there and see how you can best reduce your emissions. And you do have influence over what you choose in the end, in terms of what you’re purchasing.
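Schematically, the cradle-to-gate footprint she describes decomposes by scope as follows; the notation is ours, not from the transcript:

```latex
E_{\text{total}}
  = \underbrace{\sum_i m_i\,\mathrm{GWP}_i\,(1-\eta_i)}_{\text{scope 1: direct emissions}}
  + \underbrace{E_{\text{elec}}\cdot \mathrm{CI}_{\text{grid}}}_{\text{scope 2: electricity}}
  + \underbrace{\sum_j q_j\,\mathrm{EF}_j}_{\text{scope 3: upstream materials}}
```

Here \(m_i\) is the mass of process gas \(i\) consumed, \(\eta_i\) the effective fraction destroyed by abatement, \(E_{\text{elec}}\) the electricity drawn, \(\mathrm{CI}_{\text{grid}}\) the grid carbon intensity, and \(q_j\) and \(\mathrm{EF}_j\) the quantity and embedded emission factor of upstream material \(j\).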

Moore: All right. So in your analysis, what did you see as sort of the biggest contributors to the chip fabs carbon output?

Boakes: So without effective abatement, the process gases that are released as direct emissions would really dominate the total emissions of the IC chip manufacturing. And this is because the process gases that are often consumed in IC manufacturing have a very high GWP value. So if you do not abate them and you do not destroy them in a local abatement system, then their emissions and contribution to global warming are very large. However, you can drastically reduce those emissions by deploying effective abatement on the specific high-impact process areas. And if you do that, then this distribution shifts.

So then you would see that the contribution of the direct emissions would shrink, because you’ve reduced your direct emission output. But then the next-biggest contributor would be the electrical energy: the scope two emissions related to the production of the electricity that you’re consuming. And as you can imagine, IC manufacturing is very energy-intensive. So there’s a lot of electricity coming in, and it’s necessary then to start to decarbonize your electricity supply, or reduce the carbon intensity of the electricity that you’re purchasing.

And then once you do that step, you would see that the distribution changes again, and your scope three, your upstream materials, would then be the largest contributor to the total impact. The materials that we’ve identified as the largest contributors would be, for instance, the silicon wafers themselves, the raw wafers before you start processing, as well as wet chemicals. These are chemicals that are very specific to the semiconductor industry. There’s a lot of consumption there, and they have a high embedded GWP value.

Moore: Okay. So if we could start with— unpack a few of those. First off, what are some of these chemicals, and are they generally abated well these days? Or is this sort of something that’s still a coming problem?

Boakes: Yeah. So they range from specific photoresists to the basic chemicals that are consumed very heavily for neutralization of wastewater, these types of things. So there’s a combination: either a chemical has a high embedded GWP value, which means it has a very large impact to produce the chemical itself, or you’re simply consuming a lot of it. It might have a low embedded impact, but you’re using so much of it that, in the end, it’s the higher contributor anyway. So you have two kinds of buckets there. And it’s a matter of multiplying through the amounts by the embedded emissions to see which ones come out on top. We see that often the wastewater treatment uses a lot of these chemicals just for neutralization and treatment of wastewater on site, alongside chemicals very specific to the semiconductor industry, such as photoresists and CMP cleans. For those very specific chemistries it’s difficult to quantify the embedded impact, because often they’re proprietary: you don’t know exactly what goes into them, and it’s very hard to characterize those chemicals appropriately. So often we apply a proxy value to those. Something we would really like to improve in the future is having more communication with our supply chain and really understanding what the real embedded impact of those chemicals is. This is something we really need to work on to identify the high-impact chemicals and try anything we can to reduce them.

Moore: Okay. And what about those direct greenhouse gas emission chemicals? Are those generally abated, or is that something that’s still being worked on?

Boakes: So there is, yeah, quite a substantial amount of work going into abatement systems. We have the usual methane combustion of process gases. There’s also now development in plasma abatement systems. So there are different abatement systems being developed, and their effectiveness is quite high. However, we don’t have such good oversight at the moment of the amount of abatement that’s being deployed in high-volume manufacturing. This, again, is quite a sensitive topic to discuss from a research perspective when you don’t have insight into the fab itself. So asking particular questions about how much abatement is deployed on certain tools is not such easy data to come across.

So we often go with models. We apply the IPCC Tier 2c model where, basically, you calculate the direct emissions from how much you’ve used. So it’s a mathematical model based on how much you’ve consumed, and it generates the amounts that would be emitted directly into the atmosphere. This is the model that we’ve applied. And we see that it does correlate sometimes with the top-down reporting that comes from the industry. So yeah, I think there is a clear way forward where we can start comparing top-down reporting to these bottom-up models that we’ve been generating from a research perspective. There’s still a lot of work to do to match those.
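A minimal sketch of that consumption-based calculation follows. The structure follows the Tier 2 mass-balance idea (emissions scale with what you consume, less what the process uses up and what abatement destroys), but the gas list, utilization and abatement factors, and GWP values other than the SF6 figure quoted later in this interview are illustrative assumptions, not IPCC defaults.

```python
# Direct emissions via a simplified Tier 2 mass balance (illustrative values):
# released = use * (1 - utilized_in_process) * (1 - abated_fraction * DRE)
gases = {
    # gas: (kg consumed, fraction consumed by the process itself,
    #       fraction routed to abatement, abatement destruction efficiency, GWP)
    "SF6": (10.0, 0.20, 0.90, 0.95, 25_200),  # GWP figure from the transcript
    "NF3": (50.0, 0.80, 0.90, 0.95, 17_400),  # other values are assumptions
    "CF4": (20.0, 0.10, 0.90, 0.90, 7_380),
}

total = 0.0
for gas, (use, util, abated, dre, gwp) in gases.items():
    released = use * (1 - util) * (1 - abated * dre)
    co2e = released * gwp
    total += co2e
    print(f"{gas}: {released:6.3f} kg released -> {co2e / 1000:7.2f} t CO2e")
print(f"total direct emissions: {total / 1000:.2f} t CO2e")
```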

Moore: Okay. Are there any particular nasties in terms of what those chemicals are? I don’t think people are familiar with what really comes out of the smokestack of a chip fab.

Boakes: So one of the highest-GWP gases, for instance, would be sulfur hexafluoride, SF6. This has a GWP value of 25,200, which really means that it is over 25,000 times more damaging to the climate than the equivalent mass of CO2. So this is extremely high. But there are also others, like NF3, that are over 1,000 times more damaging to the climate than CO2. However, they can be abated. In these abatement systems, you can destroy them so they’re no longer being released.

There are also efforts going into replacing high-GWP gases such as these with alternatives that have a lower GWP value. However, this is going to take a lot of process development, and a lot of effort will have to go into changing those process flows to adapt to the new alternatives. And adoption into the high-volume fabs will then be slow because, as we know, this industry is quite resistant to any changes that you suggest. So yeah, it will be a slow adoption if there are alternatives. In the meantime, effective abatement can destroy quite a lot. But it really comes down to deploying those abatement systems on the high-impact process areas.

Moore: As Moore’s Law continues, each step or manufacturing node might have a different carbon footprint. What were some of the big trends your research revealed regarding that?

Boakes: So in our model, we’ve assumed constant fab operation conditions, meaning the same abatement systems and the same electrical carbon intensities for all of the different technology nodes. We see that there is a general increase in total emissions under these assumptions, and the total climate impact doubles from N28 to A14. So as the technology node evolves, we do see it doubling between N28 and A14. And this can be attributed to the increased process complexity, the increased number of process steps, the different chemistries being used, and the different materials being embedded in the chips. This all contributes to it. So generally, there is an increase because of the process complexity that’s required to really reach those aggressive pitches in the more advanced technology nodes.

Moore: I see. Okay. So as things are progressing, they’re also kind of getting worse in some ways. Is there anything—?

Boakes: Yeah.

Moore: Is this inevitable, or is there—?

Boakes: [laughter] Yeah. If you make things more complicated, it will probably take more energy and more materials to do it. Also, when you make things smaller, you need to change your processes. For instance, with interconnect metals, we’ve really reached the physical limits sometimes, because features have gotten so small that the physical limits of traditional metals like copper or tungsten have been reached. And now they’re looking for new alternatives like ruthenium, yeah, or platinum, different types of metals. And if it’s a platinum-group metal, of course it’s going to have a higher embedded impact. So we hit those physical limits, or limits of the current technology, and we need to change it in a way that makes it more complicated and more energy-intensive. The move to EUV is another example. EUV is an extremely energy-intensive tool compared to DUV.

But an interesting point there on the EUV topic would be that it’s really important to keep this holistic view, because even though moving from a DUV tool to an EUV tool brings a large jump in the power intensity of the tool, you’re able to reduce the number of total steps to achieve a certain deposition or etch. So you’re able to reduce your overall emissions, or reduce the energy intensity of the process flow. So even though we make all these changes and we might think, “Oh, that’s a very powerful tool,” it can cut down on process steps in the holistic view. So it’s always good to keep a life cycle perspective to be able to say, “Okay, if I implement this tool, it does have a higher power intensity, but I can reduce half of the number of steps to achieve the same result. So it’s overall better.” It’s always good to keep that kind of holistic view when we’re doing any type of sustainability assessment.
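A toy rendering of that holistic comparison follows; every power and step-count figure below is an invented placeholder, chosen only to show how a much higher-power tool can still lower the energy per patterned layer by cutting the number of steps.

```python
# Energy per patterned layer: exposure passes plus the extra deposition/etch
# steps that multi-patterning requires. All numbers are invented placeholders.
def layer_energy_kwh(tool_kw, hours_per_pass, passes, support_steps, kwh_per_step):
    return tool_kw * hours_per_pass * passes + support_steps * kwh_per_step

duv_multi = layer_energy_kwh(tool_kw=100, hours_per_pass=0.5, passes=4,
                             support_steps=15, kwh_per_step=40)
euv_single = layer_energy_kwh(tool_kw=1000, hours_per_pass=0.5, passes=1,
                              support_steps=3, kwh_per_step=40)

print(f"DUV multi-patterning: {duv_multi:5.0f} kWh/layer")   # 800
print(f"EUV single exposure:  {euv_single:5.0f} kWh/layer")  # 620
# The 10x-higher-power EUV tool still comes out ahead once the extra
# DUV patterning steps are counted.
```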

Moore: Oh, that’s interesting. That’s interesting. So you also looked at— as sort of the nodes get more advanced and processes get more complex. What did that do to water consumption?

Boakes: Again, it’s the number of steps, in a similar sense. If you’re increasing your number of process steps, there will be an increase in the number of wet clean steps as well, and those are often the high-water-consumption steps. So if you have an increased number of those particular process steps, then you’re going to have a higher water consumption in the end. It is just based on the number of steps and the complexity of the process as we advance into the more advanced technology nodes.

Moore: Okay. So it sounds like complexity is kind of king in this field.

Boakes: Yeah.

Moore: What should the industry be focusing on most to achieve its carbon goals going forward?

Boakes: Yeah. So I think to start off, you need to think of the largest contributors and prioritize those. So of course, if you’re looking at the total impact of a system that doesn’t have effective abatement, then direct emissions would be the first thing that you want to focus on reducing, as they would be the largest contributor. However, once you move to a system which already has effective abatement, then your next objective would be to decarbonize your electricity production: go for a lower-carbon-intensity electricity provider, so you’re moving more towards green energy.

And at the same time, you would also want to target your high-impact value chain. So for the materials and energy that are coming into the fab, you need to look at the ones that have the highest impact, and then try to find a provider that offers a decarbonized version of the same material, or try to design a way where you don’t need that material at all. It doesn’t necessarily have to be done in sequential order; of course, you can do it all in parallel, and that would be better. So it doesn’t have to be one, two, three, but the prioritization comes from targeting the largest contributors. And that would be direct emissions, decarbonizing your electricity production, and then looking at those high-impact materials in your supply chain.

Moore: Okay. And as a researcher, I’m sure there’s data you would love to have that you probably don’t have. What could industry do better about providing that kind of data to make these models work?

Boakes: So for a lot of our scope three, that upstream, cradle-to-fab part, let’s call it, we’ve had to rely quite a lot on life cycle assessment literature or life cycle assessment databases, which are available through purchasing, or sometimes, if you’re lucky, as a free database. That’s also because my role in my research group is more about looking at that LCA and upstream materials and quantifying the environmental impact of that. So from my perspective, I really think that this industry needs to work on providing data through the supply chain which is standardized in a way that people can understand, and which is product-specific, so that we can really allocate embedded impact to a specific product and multiply that through by our inventory, which we have data on. So for me, it’s really about having a standardized way of communicating the sustainability impact of upstream production throughout the supply chain. Not only tier one, but all the way up to the cradle, the beginning of the value chain. And I know it is evolving and it will be slow, and it does need a lot of cooperation. But I do think that it would be very, very useful for making our work more realistic and more representative. And then people can rely on it better when they start using our data in their product carbon footprints, for instance.

Moore: Okay. And speaking of sort of your work, can you tell me what imec.netzero is and how that works?

Boakes: Yeah. This is a web app that’s been developed in our program, the SSTS program at IMEC. And this web app is a way for people to interact with the model that we’ve been building, the LCA model. So it’s based on life cycle assessment, and it’s really what we’ve been talking about with this cradle-to-gate model of the IC-chip-manufacturing process. It tries to model a generic fab. So we don’t necessarily point to any specific fab or process flow from a certain company, but we try to make a very generic industry average that people can use to estimate and get a more realistic view of the modern IC chip. Because we noticed that the semiconductor data in the literature and in the available LCA databases is extremely old, and we know that this industry moves very quickly. So there is a huge gap between what’s happening now, what is going into your phones and your computers, and the LCA data that’s available to try to quantify that from a sustainability perspective. With imec.netzero, we have the benefit of being connected with the industry from our position at IMEC, and we have a view on those more advanced technology nodes.

So not only do we have models for the nodes that are being produced today, but we also predict the future nodes. We have models to predict what will happen in 5 years’ time, in 10 years’ time. So it’s a really powerful tool, and it’s available publicly. We have a public version, which has limited functionality in comparison to the program partner version. So we work with our program partners, who have access to a much more complete and, yeah, deeper way of using the web app, as well as the other work that we do in our program. And our program partners also contribute data to the model, and we’re constantly evolving the model to improve it. So that’s a bit of an overview.

Moore: Cool. Cool. Thank you very much, Lizzie. I have been speaking to Lizzie Boakes, a life cycle analyst in the Sustainable Semiconductor Technologies and Systems Program at IMEC, the Belgium-based nanotech research organization. Thank you again, Lizzie. This has been fantastic.
