
Achieving Zero Defect Manufacturing Part 2: Finding Defect Sources

Semiconductor manufacturing creates a wealth of data – from materials, products, factory subsystems and equipment. But how do we best utilize that information to optimize processes and reach the goal of zero defect manufacturing?

This is a topic we first explored in our previous blog, “Achieving Zero Defect Manufacturing Part 1: Detect & Classify.” In it, we examined real-time defect classification at the defect, die and wafer level. In this blog, the second in our three-part series, we will discuss how to use root cause analysis to determine the source of defects. For starters, we will address the software tools needed to properly conduct root cause analysis for a faster understanding of visual, non-visual and latent defect sources.

About software

The software platform fabs choose affects how well users can integrate data, conduct database analytics, and perform server-side and real-time analytics. Manufacturers want a platform that can scale by data volume, data type and multi-site integration. In addition, all of this data – whether it comes from metrology, inspection or testing – must be normalized before fabs can apply predictive modeling and machine learning-based analytics to find the root cause of defects and failures. This search goes beyond a simple examination of process steps and tools; manufacturers also need a clear understanding of each device’s genealogy. Fabs should also employ an AI-based yield optimizer capable of running multiple models and suggesting optimization measures that can be taken in the factory to improve the process.

Now that we have discussed software needs, we will turn our attention to two use cases to further our examination of root cause analysis in zero defect manufacturing.

Root Cause Case No. 1

The first root cause use case we would like to discuss involves the integration of wafer probe, photoluminescence and epitaxial (epi) data. Previously, integrating these three kinds of data was not possible because wafer and lot identification was generally not linked pre- and post-epi; wafers and lots were often identified by entirely different names before and after the epi step. Understandably, this was a major hindrance to the goal of zero defect manufacturing, because the impact of the epi process on yield was not detected in a timely manner, resulting in higher defectivity and yield loss.

But the challenge goes beyond identification and naming practices. Typical wafer ID trackers are not applied until the post-epi step because of technical and logistical constraints. The solution is for fabs to employ defect and yield analytics software with a genealogy capability that links epi and pre-epi data to post-epi processes. The real innovation occurs when this genealogical information is normalized and combined with electrical test data. Once integrated, the data gives users a more complete picture of where yield-limiting events are occurring.
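A minimal sketch of the genealogy linking described above, in plain Python. All identifiers and field names here are illustrative assumptions, not taken from any specific fab system: a genealogy map recorded at the epi step ties each pre-epi wafer ID to its post-epi ID, which then lets epi-tool data be joined with electrical test results per wafer.

```python
# Hypothetical genealogy map recorded at the epi step: pre-epi ID -> post-epi ID
genealogy = {"PRE-W01": "POST-A17", "PRE-W02": "POST-A18"}

# Data keyed by pre-epi ID (epi tool/chamber) and post-epi ID (e-test yield).
# Values are synthetic examples.
epi_data = {"PRE-W01": {"epi_tool": "EPI-3", "chamber": "B"},
            "PRE-W02": {"epi_tool": "EPI-1", "chamber": "A"}}
etest_data = {"POST-A17": {"yield_pct": 72.5}, "POST-A18": {"yield_pct": 94.1}}

def link_genealogy(genealogy, epi_data, etest_data):
    """Join epi and e-test records for every wafer the genealogy can link."""
    linked = []
    for pre_id, post_id in genealogy.items():
        if pre_id in epi_data and post_id in etest_data:
            record = {"pre_id": pre_id, "post_id": post_id}
            record.update(epi_data[pre_id])   # epi tool and chamber
            record.update(etest_data[post_id])  # electrical test outcome
            linked.append(record)
    return linked

for row in link_genealogy(genealogy, epi_data, etest_data):
    print(row)
```

With records joined this way, a low-yield wafer can be traced back to a specific epi tool and chamber, which is the kind of analysis figure 1 illustrates.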

Fig. 1: Photoluminescence map (left) and electrical test performance by epi tool (right).

For example, let us consider the following scenario: in figure 1 (left) we show a group of dies that negatively affect performance on the upper left edge of the wafer. Through more traditional measures, this pocket of defectivity may have gone unnoticed, allowing for bad die to move forward in the process. But by applying integrated data, genealogical information and electrical test data, this trouble-plagued area was identified down to the epi tool and chamber (figure 1, right), and the defective material was prevented from going forward in the process. As significant as this is, with the right software platform this approach enables root cause analysis to be conducted in minutes, not days.

Now, onto the second use case in which we look at how to problem solve within the supply chain.

Root Cause Case No. 2

During final test and measurement, chips sometimes fail. In many cases, the faulty chips had previously been judged good and advanced through assembly, where chips from different products, lots, or wafers are combined into a single package. The important thing is to understand why this happens.

When a yield software platform includes a genealogy model, fabs can trace bad chips back to their source lots and wafers and then run this information through pattern analysis software. In one scenario (figure 2), pattern analysis revealed that all of the defective die came from a spin coater issue: a leak that appeared after routine preventive maintenance and degraded the underbump metallization area.

To compensate for this, the team used integrated analytics to create a fault detection and classification (FDC) model to identify similar circumstances going forward. In this case, the FDC model monitors the suction power of the spin coater. If suction power stays above the set limit for more than 10 consecutive samples, an alarm is triggered and the appropriate Out of Control Action Plan (OCAP) process is executed, including notification of the tool owner.
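The consecutive-samples rule above can be sketched in a few lines. This is a hedged illustration of the described FDC logic, not the actual model: the limit value, sample values, and the OCAP hook are all assumptions.

```python
CONSECUTIVE_LIMIT = 10  # alarm when MORE than this many samples run out of limit

def check_suction(samples, upper_limit):
    """Return True (trigger the OCAP flow) if more than CONSECUTIVE_LIMIT
    consecutive samples exceed upper_limit; the run counter resets on any
    in-limit sample."""
    run = 0
    for value in samples:
        run = run + 1 if value > upper_limit else 0
        if run > CONSECUTIVE_LIMIT:
            return True
    return False

# 11 consecutive out-of-limit readings, then one good one (synthetic values)
readings = [4.8] * 11 + [3.0]
if check_suction(readings, upper_limit=4.5):
    print("FDC alarm: notify tool owner, start OCAP")
```

The reset-on-good-sample behavior matters: isolated excursions are ignored, and only a sustained run of out-of-limit samples triggers the alarm.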

Fig. 2: Proactive zero defect manufacturing at-a-glance.

The above shows how fabs can turn reactive root cause analytics into proactive monitoring. With such an approach, manufacturers can watch for this and other issues and prevent future defective die from advancing. Furthermore, 40 or more distinct defect signatures can be monitored inline. And if these defects are missed at the process level, they can still be caught at or after inspection, avoiding hundreds of issues further along in the process.

Conclusion

Zero defect manufacturing is not so much a goal as a commitment to root out defects before they happen. To accomplish this, fabs need a wealth of data from across the entire process to get a clear picture of what is going wrong, where it is going wrong and why. In this blog, we offered specific scenarios where root cause analysis was used to find defects across wafers and die. These are just a few examples of how software can uncover difficult-to-find defects. It can be applied in many areas across the entire process, with each application further strengthening a fab’s zero defect manufacturing effort, increasing yield and helping meet the stringent requirements of some of the industry’s most advanced customers.

In our next blog, we will discuss how to detect dormant defects, use feedback and feedforward measures, and monitor the health of process control equipment. We hope you join us as we continue to explore methods for achieving zero defect manufacturing.

The post Achieving Zero Defect Manufacturing Part 2: Finding Defect Sources appeared first on Semiconductor Engineering.

Metrology And Inspection For The Chiplet Era

New developments and innovations in metrology and inspection will enable chipmakers to identify and address defects faster and with greater accuracy than ever before, all of which will be required at future process nodes and in densely packed assemblies of chiplets.

These advances will affect both front-end and back-end processes, providing increased precision and efficiency, combined with artificial intelligence/machine learning and big data analytics. These kinds of improvements will be crucial for meeting the industry’s changing needs, enabling deeper insights and more accurate measurements at rates suitable for high-volume manufacturing. But gaps still need to be filled, and new ones are likely to show up as new nodes and processes are rolled out.

“As semiconductor devices become more complex, the demand for high-resolution, high-accuracy metrology tools increases,” says Brad Perkins, product line manager at Nordson Test & Inspection. “We need new tools and techniques that can keep up with shrinking geometries and more intricate designs.”

The shift to high-NA EUV lithography (0.55 NA EUV) at the 2nm node and beyond is expected to exacerbate stochastic variability, demanding more robust metrology solutions on the front end. Traditional critical dimension (CD) measurements alone are insufficient for the level of analysis required. Comprehensive metrics, including line-edge roughness (LER), line-width roughness (LWR), local edge-placement error (LEPE), and local CD uniformity (LCDU), alongside CD measurements, are necessary for ensuring the integrity and performance of advanced semiconductor devices. These metrics require sophisticated tools that can capture and analyze tiny variations at the nanometer scale, where even slight discrepancies can significantly impact device functionality and yield.
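Two of the metrics above are straightforward to illustrate. LER is commonly reported as three times the standard deviation of edge-position deviations along a line, and LWR as three sigma of the line-width variation; the edge positions below are synthetic example values in nanometers, not measured data.

```python
import statistics

def three_sigma(values):
    """3x the population standard deviation, the conventional roughness figure."""
    return 3 * statistics.pstdev(values)

# Synthetic edge positions (nm) sampled along a printed line
left_edge = [10.0, 10.3, 9.8, 10.1, 9.9]
right_edge = [25.1, 24.8, 25.2, 24.9, 25.0]

ler_left = three_sigma(left_edge)                       # line-edge roughness
widths = [r - l for l, r in zip(left_edge, right_edge)]
lwr = three_sigma(widths)                               # line-width roughness
print(f"LER(left) = {ler_left:.3f} nm, LWR = {lwr:.3f} nm")
```

Note that LWR is not simply the sum of the two edges' LER values: if the two edges roughen in a correlated way, the width can vary less than either edge alone, which is why both metrics are tracked.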

“Metrology is now at the forefront of yield, especially considering the current demands for DRAM and HBM,” says Hamed Sadeghian, president and CEO of Nearfield Instruments. “The next generations of HBMs are approaching a stage where hybrid bonding will be essential due to the increasing stack thickness. Hybrid bonding requires high resolutions in vertical directions to ensure all pads, and the surface height versus the dielectric, remain within nanometer-scale process windows. Consequently, the tools used must be one order of magnitude more precise.”

To address these challenges, companies are developing hybrid metrology systems that combine various measurement techniques for a comprehensive data set. Integrating scatterometry, electron microscopy, and/or atomic force microscopy allows for more thorough analysis of critical features. Moreover, AI and ML algorithms enhance the predictive capabilities of these tools, enabling process adjustments.

“Our customers who are pushing into more advanced technology nodes are desperate to understand what’s driving their yield,” says Ronald Chaffee, senior director of applications engineering at NI/Emerson Test & Measurement. “They may not know what all the issues are, but they are gathering all possible data — metrology, AEOI, and any measurable parameters — and seeking correlations.”

Traditional methods for defect detection, pattern recognition, and quality control typically used spatial pattern-recognition modules and wafer image-based algorithms to address wafer-level issues. “However, we need to advance beyond these techniques,” says Prasad Bachiraju, senior director of business development at Onto Innovation. “Our observations show that about 20% of wafers have systematic issues that can limit yield, with nearly 4% being new additions. There is a pressing need for advanced metrology for in-line monitoring to achieve zero-defect manufacturing.”

Several companies recently announced metrology innovations to provide more precise inspections, particularly for difficult-to-see areas, edge effects, and highly reflective surfaces.

Nordson unveiled its AMI SpinSAM acoustic rotary scan system. The system represents a significant departure from traditional raster scan methods, utilizing a rotational scanning approach. Rather than moving the wafer in an x,y pattern relative to a stationary lens, the wafer spins, similar to a record player. This reduces motion over the wafer and increases inspection speed, negating the need for image stitching and improving image quality.

“For years, we’d been trying to figure out this technique, and it’s gratifying to finally achieve it. It’s something we’ve always thought would be incredibly beneficial,” says Perkins. “The SpinSAM is designed primarily to enhance inspection speed and efficiency, addressing the common industry demand for more product throughput and better edge inspection capabilities.”

Meanwhile, Nearfield Instruments introduced a multi-head atomic force microscopy (AFM) system called QUADRA. It is a high-throughput, non-destructive metrology tool for HVM that features a novel multi-miniaturized AFM head architecture. Nearfield claims the parallel independent multi-head scanner can deliver a 100-fold throughput advantage versus conventional single-probe AFM tools. This architecture allows for precise measurements of high-aspect-ratio structures and complex 3D features, critical for advanced memory (3D NAND, DRAM, HBM) and logic processes.


Fig. 1: Image capture comparison of standard AFM and multi-head AFM. Source: Nearfield Instruments

In April, Onto Innovation debuted an advancement in subsurface defect inspection technology with the release of its Dragonfly G3 inspection system. The new system allows for 100% wafer inspection, targeting subsurface defects that can cause yield losses, such as micro-cracks and other hidden flaws that may lead to entire wafers breaking during subsequent processing steps. The Dragonfly G3 utilizes novel infrared (IR) technology combined with specially designed algorithms to detect these defects, which previously were undetectable in a production environment. This new capability supports HBM, advanced logic, and various specialty segments, and aims to improve final yield and cost savings by reducing scrapped wafers and die stacks.

More recently, researchers at the Paul Scherrer Institute announced a high-performance X-ray tomography technique using burst ptychography. The new method provides non-destructive, detailed views of nanostructures as small as 4nm in materials such as silicon and metals, at a fast acquisition rate of 14,000 resolution elements per second. The tomographic back-propagation reconstruction allows imaging of samples up to ten times larger than the conventional depth of field.

There are other technologies and techniques for improving metrology in semiconductor manufacturing, as well, including wafer-level ultrasonic inspection, which involves flipping the wafer to inspect from the other side. New acoustic microscopy techniques, such as scanning acoustic microscopy (SAM) and time-of-flight acoustic microscopy (TOF-AM), enable the detection and characterization of very small defects, such as voids, delaminations, and cracks within thin films and interfaces.

“We used to look at 80 to 100 micron resist films, but with 3D integrated packaging, we’re now dealing with films that are 160 to 240 microns—very thick resist films,” says Christopher Claypool, senior application scientist at Bruker OCD. “In TSVs and microbumps, the dominant technique today is white light interferometry, which provides profile information. While it has some advantages, its throughput is slow, and it’s a focus-based technique. This limitation makes it difficult to measure TSV structures smaller than four or five microns in diameter.”

Acoustic metrology tools equipped with the newest generation of focal length transducers (FLTs) can focus acoustic waves with precision down to a few nanometers, allowing for non-destructive detailed inspection of edge defects and critical stress points. This capability is particularly useful for identifying small-scale defects that might be missed by other inspection methods.

The development and integration of smart sensors in metrology equipment is instrumental in collecting the vast amounts of data needed for precise measurement and quality control. These sensors are highly sensitive and capable of operating under various environmental conditions, ensuring consistent performance. One significant advantage of smart sensors is their ability to facilitate predictive maintenance. By continuously monitoring the health and performance of metrology equipment, these sensors can predict potential failures and schedule maintenance before significant downtime occurs. This capability enhances the reliability of the equipment, reduces maintenance costs, and improves overall operational efficiency.
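One common way to implement the predictive-maintenance idea above is to smooth a sensor signal with an exponentially weighted moving average (EWMA) and flag the tool when the smoothed value drifts away from its healthy baseline. This is a generic sketch under stated assumptions; the sensor, smoothing factor, and threshold are all illustrative, not from any particular metrology system.

```python
def ewma_drift_alert(samples, baseline, threshold, alpha=0.2):
    """Return the index of the first sample at which the EWMA of the signal
    drifts more than `threshold` away from `baseline`, or None if it never
    does. Smoothing suppresses one-off spikes, so only sustained drift fires."""
    ewma = baseline
    for i, x in enumerate(samples):
        ewma = alpha * x + (1 - alpha) * ewma
        if abs(ewma - baseline) > threshold:
            return i
    return None

# A hypothetical stage-vibration sensor drifting slowly upward
readings = [1.00, 1.01, 1.02, 1.10, 1.25, 1.40, 1.60, 1.85]
idx = ewma_drift_alert(readings, baseline=1.0, threshold=0.25)
if idx is not None:
    print(f"Schedule maintenance: sustained drift detected at sample {idx}")
```

Because the EWMA responds gradually, the alert fires only after the drift persists for several samples, which is the behavior wanted for scheduling maintenance rather than reacting to noise.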

Smart sensors also are being developed to integrate seamlessly with metrology systems, offering real-time data collection and analysis. These sensors can monitor various parameters throughout the manufacturing process, providing continuous feedback and enabling quick adjustments to prevent defects. Smart sensors, combined with big data platforms and advanced data analytics, allow for more efficient and accurate defect detection and classification.

Critical stress points

A persistent challenge in semiconductor metrology is the identification and inspection of defects at critical stress points, particularly at the silicon edges. For bonded wafers, it’s at the outer ring of the wafer. For chip-on-wafer packaging, it’s at the edge of the chips. These edge defects are particularly problematic because they occur at the points farthest from the neutral axis, where stress is highest, making them more prone to failure. As semiconductor devices continue to involve more intricate packaging techniques, such as chip-on-wafer and wafer-level packaging, the focus on edge inspection becomes even more critical.

“When defects happen in a factory, you need imaging that can detect and classify them,” says Onto’s Bachiraju. “Then you need to find the root causes of where they’re coming from, and for that you need the entire data integration and a big data platform to help with faster analysis.”

Another significant challenge in semiconductor metrology is ensuring the reliability of known good die (KGD), especially as advanced packaging techniques and chiplets become more prevalent. Ensuring that every chip/chiplet in a stacked die configuration is of high quality is essential for maintaining yield and performance, but the speed of metrology processes is a constant concern. This leads to a balancing act between thoroughness and efficiency. The industry continuously seeks to develop faster machines that can handle the increasing volume and complexity of inspections without compromising accuracy. In this race, innovations in data processing and analysis are key to achieving quicker results.

“Customers would like, generally, 100% inspection for a lot of those processes because of the known good die, but it’s cost-prohibitive because the machines just can’t run fast enough,” says Nordson’s Perkins.

Metrology and Industry 4.0

Industry 4.0 — a term introduced in Germany in 2011 for the fourth industrial revolution, and called smart manufacturing in the U.S. — emphasizes the integration of digital technologies such as the Internet of Things, artificial intelligence, and big data analytics into manufacturing processes. Unlike past revolutions driven by mechanization, electrification, and computerization, Industry 4.0 focuses on connectivity, data, and automation to enhance manufacturing capabilities and efficiency.

“The better the data integration is, the more efficient the yield ramp,” says Dieter Rathei, CEO of DR Yield. “It’s essential to integrate all available data into the system for effective monitoring and analysis.”

In semiconductor manufacturing, this shift toward Industry 4.0 is particularly transformative, driven by the increasing complexity of semiconductor devices and the demand for higher precision and yield. Traditional metrology methods, heavily reliant on manual processes and limited automation, are evolving into highly interconnected systems that enable real-time data sharing and decision-making across the entire production chain.

“There haven’t been many tools to consolidate different data types into a single platform,” says NI’s Chaffee. “Historically, yield management systems focused on testing, while FDC or process systems concentrated on the process itself, without correlating the two. As manufacturers push into the 5, 3, and 2nm spaces, they’re discovering that defect density alone isn’t the sole governing factor. Process control is also crucial. By integrating all data, even the most complex correlations that a human might miss can be identified by AI and ML. The goal is to use machine learning to detect patterns or connections that could help control and optimize the manufacturing process.”

IoT forms the backbone of Industry 4.0 by connecting various devices, sensors, and systems within the manufacturing environment. In semiconductor manufacturing, IoT enables seamless communication between metrology tools, production equipment, and factory management systems. This interconnected network facilitates real-time monitoring and control of manufacturing processes, allowing for immediate adjustments and optimization.

“You need to integrate information from various sources, including sensors, metrology tools, and test structures, to build predictive models that enhance process control and yield improvement,” says Michael Yu, vice president of advanced solutions at PDF Solutions. “This holistic approach allows you to identify patterns and correlations that were previously undetectable.”

AI and ML are pivotal in processing and analyzing the vast amounts of data generated in a smart factory. These technologies can identify patterns, predict equipment failures, and optimize process parameters with a level of precision and speed unattainable by human operators alone. In semiconductor manufacturing, AI-driven analytics enhance process control, improve yield rates, and reduce downtime. “One of the major trends we see is the integration of artificial intelligence and machine learning into metrology tools,” says Perkins. “This helps in making sense of the vast amounts of data generated and enables more accurate and efficient measurements.”

AI’s role extends further as it assists in discovering anomalies within the production process that might have gone unnoticed with traditional methods. AI algorithms integrated into metrology systems can dynamically adjust processes in real-time, ensuring that deviations are corrected before they affect the end yield. This incorporation of AI minimizes defect rates and enhances overall production quality.

“Our experience has shown that in the past 20 years, machine learning and AI algorithms have been critical for automatic data classification and die classification,” says Bachiraju. “This has significantly improved the efficiency and accuracy of our metrology tools.”

Big data analytics complements AI/ML by providing the infrastructure necessary to handle and interpret massive datasets. In semiconductor manufacturing, big data analytics enables the extraction of actionable insights from data generated by IoT devices and production systems. This capability is crucial for predictive maintenance, quality control, and continuous process improvement.

“With big data, we can identify patterns and correlations that were previously impossible to detect, leading to better process control and yield improvement,” says Perkins.

Big data analytics also helps in understanding the lifecycle of semiconductor devices from production to field deployment. By analyzing product performance data over time, manufacturers can predict potential failures and enhance product designs, increasing reliability and lifecycle management.

“In the next decade, we see a lot of opportunities for AI,” says DR Yield’s Rathei. “The foundation for these advancements is the availability of comprehensive data. AI models need extensive data for training. Once all the data is available, we can experiment with different models and ideas. The ingenuity of engineers, combined with new tools, will drive exponential progress in this field.”

Metrology gaps remain

Despite recent advancements in metrology, analytics, and AI/ML, several gaps remain, particularly in the context of high-volume manufacturing (HVM) and next-generation devices. The U.S. Commerce Department’s CHIPS R&D Metrology Program, along with industry stakeholders, has highlighted seven “grand challenges,” areas where current metrology capabilities fall short:

Metrology for materials purity and properties: There is a critical need for new measurements and standards to ensure the purity and physical properties of materials used in semiconductor manufacturing. Current techniques lack the sensitivity and throughput required to detect particles and contaminants throughout the supply chain.

Advanced metrology for future manufacturing: Next-generation semiconductor devices, such as gate-all-around (GAA) FETs and complementary FETs (CFETs), require breakthroughs in both physical and computational metrology. Existing tools are not yet capable of providing the resolution, sensitivity, and accuracy needed to characterize the intricate features and complex structures of these devices. This includes non-destructive techniques for characterizing defects and impurities at the nanoscale.

“There is a secondary challenge with some of the equipment in metrology, which often involves sampling data from single points on a wafer, much like heat test data that only covers specific sites,” says Chaffee. “To be meaningful, we need to move beyond sampling methods and find creative ways to gather information from every wafer, integrating it into a model. This involves building a knowledge base that can help in detecting patterns and correlations, which humans alone might miss. The key is to leverage AI and machine learning to identify these correlations and make sense of them, especially as we push into the 5, 3, and 2nm spaces. This process is iterative and requires a holistic approach, encompassing various data points and correlating them to understand the physical boundaries and the impact on the final product.”

Metrology for advanced packaging: The integration of sophisticated components and novel materials in advanced packaging technologies presents significant metrology challenges. There is a need for rapid, in-situ measurements to verify interfaces, subsurface interconnects, and internal 3D structures. Current methods do not adequately address issues such as warpage, voids, substrate yield, and adhesion, which are critical for the reliability and performance of advanced packages.

Modeling and simulating semiconductor materials, designs, and components: Simulating semiconductor materials, designs, and components requires advanced computational models and data analysis tools. Current capabilities are limited in their ability to seamlessly integrate the entire semiconductor value chain, from materials inputs to system assembly.

“Predictive analytics is particularly important,” says Chaffee. “They aim to determine the probability of any given die on a wafer being the best yielding or presenting issues. By integrating various data points and running different scenarios, they can identify and understand how specific equipment combinations, sequences and processes enhance yields.”

Modeling and simulating semiconductor processes: Process-level simulation faces similar integration limits across the value chain. There is a need for standards and validation tools to support digital twins and other advanced simulation techniques that can optimize process development and control.

“Part of the problem comes from the back-end packaging and assembly process, but another part of the problem can originate from the quality of the wafer itself, which is determined during the front-end process,” says PDF’s Yu. “An effective ML model needs to incorporate both front-end and back-end information, including data from equipment sensors, metrology, and structured test information, to make accurate predictions and take proactive actions to correct the process.”

Standardizing new materials and processes: The development of future information and communication technologies hinges on the creation of new standards and validation methods. Current reference materials and calibration services do not meet the requirements for next-generation materials and processes, such as those used in advanced packaging and heterogeneous integration. This gap hampers the industry’s ability to innovate and maintain competitive production capabilities.

Metrology to enhance security and provenance of components and products: With the increasing complexity of the semiconductor supply chain, there is a need for metrology solutions that can ensure the security and provenance of components and products. This involves developing methods to trace materials and processes throughout the manufacturing lifecycle to prevent counterfeiting and ensure compliance with regulatory standards.

“The focus on security and sharing changes the supplier relationship into more of a partnership and less of a confrontation,” says Chaffee. “Historically, there’s always been a concern of data flowing across that boundary. People are very protective about their process, and other people are very protective about their product. But once you start pushing into the deep sub-micron space, those barriers have to come down. The die are too expensive for them not to communicate, but they can still do so while protecting their IP. Companies are starting to realize that by sharing parametric test information securely, they can achieve better yield management and process optimization without compromising their intellectual property.”

Conclusion

Advancements in metrology and testing are pivotal for the semiconductor industry’s continued growth and innovation. The integration of AI/ML, IoT, and big data analytics is transforming how manufacturers approach process control and yield improvement. As adoption of Industry 4.0 grows, the role of metrology will become even more critical in ensuring the efficiency, quality, and reliability of semiconductor devices. And by leveraging these advanced technologies, semiconductor manufacturers can achieve higher yields, reduce costs, and maintain the precision required in this competitive industry.

With continuous improvements and the integration of smart technologies, the semiconductor industry will keep pushing the boundaries of innovation, leading to more robust and capable electronic devices that define the future of technology. The journey toward a fully realized Industry 4.0 is ongoing, and its impact on semiconductor manufacturing undoubtedly will shape the future of the industry, ensuring it stays at the forefront of global technological advancements.

“Anytime you have new packaging technologies and process technologies that are evolving, you have a need for metrology,” says Perkins. “When you are ramping up new processes and need to make continuous improvements for yield, that is when you see the biggest need for new metrology solutions.”

The post Metrology And Inspection For The Chiplet Era appeared first on Semiconductor Engineering.

Driving Cost Lower and Power Higher With GaN

Gallium nitride is starting to make broader inroads at the lower end of the high-voltage, wide-bandgap power FET market, where silicon carbide has been the technology of choice. This shift is driven by lower costs and processes that are more compatible with bulk silicon.

Efficiency, power density (size), and cost are the three major concerns in power electronics, and GaN can meet all three criteria. However, to satisfy all of those criteria consistently, the semiconductor ecosystem needs to develop best practices for test, inspection, and metrology, determining what works best for which applications and under varying conditions.

Power ICs play an essential role in stepping up and down voltage levels from one power source to another. GaN is used extensively today in smart phone and laptop adapters, but market opportunities are beginning to widen for this technology. GaN likely will play a significant role in both data centers and automotive applications [1]. Data centers are expanding rapidly due to the focus on AI and a build-out at the edge. And automotive is keen to use GaN power ICs for inverter modules because they will be cheaper than SiC, as well as for onboard battery chargers (OBCs) and various DC-DC conversions from the battery to different applications in the vehicle.


Fig. 1: Current and future fields of interest for GaN and SiC power devices. Source: A. Meixner/Semiconductor Engineering

But to enter new markets, GaN device manufacturers need to ramp up new processes and their associated products more quickly. Because GaN for power transistors is a developing process technology, measurement data is critical for qualifying both the manufacturing process and the reliability of the new semiconductor technology and the resulting product.

Much of GaN's success will depend on metrology and inspection solutions that offer high throughput, as well as non-destructive testing methods such as optical and X-ray. Electron microscopy is useful for drilling down into key device parameters and defect mechanisms. And electrical tests provide complementary data for product/process validation, reliability and qualification, and system-level validation, as well as for production screening.

Silicon carbide (SiC) remains the material of choice for very high-voltage applications. It offers better performance and higher efficiency than silicon. But SiC is expensive. It requires different equipment than silicon, it’s difficult to grow SiC ingots, and today there is limited wafer capacity.

In contrast, GaN offers some of the same desirable characteristics as SiC and can operate at even higher switching speeds. GaN wafer production is cheaper because it can be created on a silicon substrate utilizing typical silicon processing equipment other than the GaN epitaxial deposition tool. That enables a fab/foundry with a silicon CMOS process to ramp a GaN process with an engineering team experienced in GaN.

The cost comparison isn’t entirely apples-to-apples, of course. The highest-voltage GaN on the market today uses silicon on sapphire (SoS) or other engineered substrates, which are more expensive. But below those voltages, GaN typically has a cost advantage, and that has sparked renewed interest in this technology.

“GaN-based products increase the performance envelopes relative to the incumbent and mature silicon-based technologies,” said Vineet Pancholi, senior director of test technology at Amkor. “Switching speeds with GaN enable the application in ways never possible with silicon. But as the GaN production volumes ramp, these products have extreme economic pressures. The production test list includes static attributes. However, the transient and dynamic attributes are the primary benefit of GaN in the end application.”

Others agree. "The world needs cheaper material, and GaN is easy to build," said Frank Heidemann, vice president and technology leader of SET at NI/Emerson Test & Measurement. "Gallium nitride has had huge success in the lower voltage ranges — anything up to 500V. This is where the GaN process is very well under control. The problem now is that building in higher voltages is a challenge. In the near future there will be products at even higher voltage levels."

Those higher-voltage applications require new process recipes, new power IC designs, and subsequently product/process validation and qualification.

GaN HEMT properties
Improving the processes needed to create GaN high-electron-mobility transistors (HEMTs) requires a deep understanding of the material properties and the manufacturing consequences of layering these materials.

The underlying physics and structure of wide-bandgap devices significantly differs from silicon high-voltage transistors. Silicon transistors rely on doping of p and n materials. When voltage is applied at the gate, it creates a channel for current to flow from source to drain. In contrast, wide-bandgap transistors are built by layering thin films of different materials, which differ in their bandgap energy. [2] Applying a voltage to the gate enables an electron exchange between the two materials, driving those electrons along the channel between source and drain.


Fig. 2. Cross-sectional animation of e-mode GaN HEMT device. Source: Zeiss Microscopy

“GaN devices rely on two-dimensional electron gas (2DEG) created at the GaN and AlGaN interface to conduct current at high speed,” said Jiangtao Hu, senior director of product marketing at Onto Innovation. “To enable high electron mobility, the epitaxy process creating complex multi-layer crystalline films must be carefully monitored and controlled, ensuring critical film properties such as thickness, composition, and interface roughness are within a tight spec. The ongoing trend of expanding wafer sizes further requires the measurement to be on-product and non-destructive for uniformity control.”


Fig. 3: SEM cross-section of enhancement-mode GaN HEMT built on silicon which requires a superlattice. Source: Zeiss Microscopy

Furthermore, each layer’s electrical properties need to be understood. “It is of utmost importance to determine, as early as possible in the manufacturing process, the electrical characteristics of the structures, the sheet resistance of the 2DEG, the carrier concentration, and the mobility of carriers in the channel, preferably at the wafer level in a non-destructive assessment,” said Christophe Maleville, CTO and senior executive vice president of innovation at Soitec.
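The quantities Maleville lists are tied together by a simple relationship: 2DEG sheet resistance falls as carrier concentration and mobility rise. A minimal sketch of that calculation follows; the numeric values are illustrative orders of magnitude for AlGaN/GaN 2DEGs, not measured data.

```python
# Sheet resistance of a 2DEG: R_sheet = 1 / (q * n_s * mu),
# with n_s in cm^-2 and mu in cm^2/(V*s), giving ohms per square.
Q_E = 1.602176634e-19  # elementary charge, in coulombs

def sheet_resistance(n_s_cm2: float, mu_cm2_vs: float) -> float:
    """Return 2DEG sheet resistance in ohms per square."""
    return 1.0 / (Q_E * n_s_cm2 * mu_cm2_vs)

# Illustrative values only: n_s ~ 1e13 cm^-2, mu ~ 2000 cm^2/(V*s)
rs = sheet_resistance(1e13, 2000.0)
print(f"{rs:.0f} ohm/sq")  # roughly 312 ohm/sq
```

This is why non-destructive, wafer-level Hall-style measurements are so valuable: two measured quantities pin down the channel's conduction quality directly.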

Developing process recipes for GaN HEMT devices at higher operating ranges requires measurements taken during wafer manufacturing and device testing, both for qualification of a process/product and for production manufacturing. Inspection, metrology, and electrical tests focus on process anomalies and defects that impact device performance.

“Crystal defects such as dislocations and stacking faults, which can form during deposition and subsequently be grown over and buried, can create long-term reliability concerns even if the devices pass initial testing,” said David Taraci, business development manager of electronics strategic accounts at ZEISS Research Microscopy Solutions. “Gate oxides can pinch off during deposition, creating voids which may not manifest as an issue immediately.”

The quality of the buffer layer is critical because it affects the breakdown voltage. “The maximum breakdown voltage of the devices will be ultimately limited by the breakdown of the buffer layer grown in between the Si substrate and the GaN channel,” said Soitec’s Maleville. “An electrical assessment (IV at high voltage) requires destructive measurements as well as device isolation. This is performed on a sample basis only.”

One way to raise the voltage limit of a GaN device is to add a gate driver that keeps it reliable at higher voltages. But to further expand GaN technology's performance envelope to higher-voltage operation, engineers need to understand the reliability properties of these new GaN devices.

“We are supporting GaN lifetime validation, which is the prediction of a mission characteristic of lifetime for gallium nitride power devices,” said Emerson’s Heidemann. “Engineers build physics-based failure models of these devices. Next, they investigate the acceleration factors. How can we really make tests and verification properly so that we can assess lifetime health?”

The qualification procedures necessitate life-stress testing, which duplicates the predicted mission-profile usage, as well as electrical testing after each life-stress period. That allows engineers to determine shifts in transistor characteristics as well as outright failures. For example, life-stress periods could start at 4,000 hours and increase in 1,000-hour increments to 12,000 hours, during which time the device is turned on and off with specific durations of 'on' time.
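Such a plan reduces to a schedule of cumulative stress hours with a device pull for electrical characterization at each checkpoint. A minimal sketch using the example numbers above (real qualification plans are dictated by the applicable mission profile and standards, not these defaults):

```python
def life_stress_checkpoints(start_h: int = 4_000,
                            step_h: int = 1_000,
                            end_h: int = 12_000) -> list[int]:
    """Cumulative stress hours at which devices are pulled for
    electrical characterization to track parameter drift."""
    return list(range(start_h, end_h + 1, step_h))

for hours in life_stress_checkpoints():
    # After each stress period: re-run the electrical test list and
    # log shifts in transistor characteristics and outright failures.
    print(f"characterize devices at {hours} cumulative stress hours")
```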

“Reliability predictions are based upon application mission profiles,” said Stephanie Watts Butler, independent consultant and vice president of industry and standards in the IEEE Power Electronics Society. “In some cases, GaN is going into a new application, or being used differently than silicon, and the mission profile needs to be elucidated. This is one area that the industry is focused upon together.”

As an example of this effort, Butler pointed to JEDEC JEP186 spec [3], which provides guidelines for specifying the breakdown voltage for GaN HEMT devices. “JEDEC and IEC both are issuing guideline documents for methods for test and characterization of wide-bandgap devices, as well as reliability and qualification procedures, and datasheet parameters to enable wide bandgap devices, including GaN, to ramp faster with higher quality in the marketplace,” she said.

Electrical tests remain essential for screening both time-zero and reliability-associated defects (e.g., infant mortality and reduced lifetime). This holds true for screening wafers, singulated die, and packaged devices. Test content includes tests specific to GaN HEMT power device performance specifications, as well as tests directed more at defect detection.

Due to inherent device differences, the GaN test list varies in some significant ways from those for Si and SiC power ICs. Assessing GaN health for qualification and manufacturing purposes requires both static (DC) and dynamic (AC) tests. A partial list includes zero gate voltage drain leakage current, rise time, fall time, dynamic RDSon, and dielectric integrity tests.

“These are very time-intensive measurement techniques for GaN devices,” said Tom Tran, product manager for power discrete test products at Teradyne. “On top of the static measurement techniques is the concern about trapped charge — both for functionality and efficiency — revealed through dynamic RDSon testing.”
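The dynamic RDSon check Tran describes comes down to a ratio: on-resistance sampled at a defined instant shortly after a hard-switching event, compared with the static value, where a ratio well above 1 flags trapped-charge-induced current collapse. A toy sketch with made-up numbers, not a production test program:

```python
def rdson(vds_v: float, id_a: float) -> float:
    """On-resistance R_DS(on) = V_DS / I_D, sampled at a defined
    instant in the switching waveform."""
    return vds_v / id_a

def collapse_ratio(r_dynamic: float, r_static: float) -> float:
    """Dynamic-to-static RDSon ratio; values well above 1.0 suggest
    current collapse caused by trapped charge."""
    return r_dynamic / r_static

# Illustrative numbers only:
r_static = rdson(0.50, 10.0)   # static measurement: 50 mohm
r_dyn = rdson(0.65, 10.0)      # sampled shortly after turn-on: 65 mohm
ratio = collapse_ratio(r_dyn, r_static)
print(f"dynamic/static RDSon ratio: {ratio:.2f}")
```

The expensive part in practice is not the arithmetic but the fast, precisely timed voltage and current capture during the switching transient.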

Transient tests are necessary for qualification and production purposes due to the high electron mobility, which is what gives GaN HEMT its high switching speed. “From a test standpoint, static test failures indicate basic processing failures, while transient switching failures indicate marginal or process excursions,” said Amkor’s Vineet Pancholi. “Both tests continue to be important to our customers until process maturity is achieved. With the extended range of voltage, current, and switching operations, mainstream test equipment suppliers have been adding complementary instrumentation capabilities.”

And ATE suppliers look to reduce test time, which reduces cost. "Both static and dynamic test requirements drive very high test times," said Teradyne's Tran. "But the GaN of today is very different than the GaN of a decade ago. We're able to accelerate this testing due to the core nature of our ATE architecture. We think there is the possibility of further reducing the cost of test for our customers."

Tools for process control and quality management
GaN HEMT devices’ reliance on thin-film processes highlights the need to understand the material properties and the nature of the interfaces between each layer. That requires tools for process control, yield management, and failure analysis.

“GaN device performance is highly reflective of the film characteristics used in its manufacture,” said Mike McIntyre, director of software product management at Onto Innovation. “The smallest process variations when it comes to film thickness, film stress, line width or even crystalline make-up, can have a dramatic impact on how the device performs, or even if it is usable in its target market. This lack of tolerance to any variation places a greater burden on engineers to understand the factors that correlate to device performance and its profitability.”

Non-destructive inspection methods vary in throughput and in the level of detail they provide for engineering decisions. Optical methods are fast and provide full wafer coverage, but they cannot accurately classify chemical or structural defects for engineers and technicians to review. Destructive methods, in contrast, provide the information needed to truly understand the nature of a defect. For example, conductive atomic force microscopy (AFM) probing remains slow, but it can identify the electrical nature of a defect. And to truly comprehend crystallographic defects and the chemical nature of impurities, engineers can turn to electron microscopy-based methods.

One way to assess thin films is with X-rays. "High resolution X-ray measurements are useful to provide production control of the wafer crystalline quality and defects in the buffer," said Soitec's Maleville. "Minor changes in composition of the buffer, barrier, or capping layer, as well as their layer thickness, can result in significant deviations in device performance. Thickness of the layers, in particular the top cap, barrier, and spacer layers, are typically measured by XRD. However, the throughput of XRD systems is low. Alternatively, ellipsometry offers a reasonably good throughput measurement with more data points for both development and production mode scenarios."

Optical techniques have been the standard for thin-film assessment in the semiconductor industry. Inspection equipment providers have long been on a continuous-improvement journey to advance accuracy, precision and throughput, and better metrology tools help device makers with process control and yield management.

"Recently, we successfully developed a non-destructive, on-product measurement capability for GaN epi process monitoring," said Onto's Hu. "It takes advantage of our advanced optical film experience and our modeling software to simultaneously measure multi-layer epi film thickness, composition, and interface roughness on product wafers."


Fig. 4: Metrology measurements on GaN for roughness and for Al concentration. Source: Onto Innovation

Assessing the electrical characteristics (2DEG sheet resistance, channel carrier mobility, and carrier concentration) is required for controlling the manufacturing process. A non-destructive assessment would be an improvement over currently used destructive techniques (e.g., SEM). The solutions used for other power ICs do not work for GaN HEMTs, and as of today no one has come up with a commercial solution.

Inspection looks for yield-impacting defects, as well as defects that affect wafer acceptance in the case of companies that provide engineered substrates.

"Defect inspection for incoming silicon wafers looks for particles, scratches, and other anomalies that might seed imperfections in the subsequent buffer and crystal growth," said Antonio Mani, business development manager at Thermo Fisher Scientific. "After the growth of the buffer and termination layers, followed by the growth of the doped GaN layers, another set of inspections is carried out. In this case, it is more focused on the detection of cracks and other macroscopic defects (micropipes, carrots), and on looking for micro-pits, which are associated with threading dislocations that have survived the buffer layer and are surfacing at the top GaN surface."

Mani noted that follow-up inspection methods for Si and GaN devices are similar. The difference lies in the importance of connecting observations back to post-epi results.

More accurate defect libraries would shorten inspection time. "The lack of standardization of surface defect analysis impedes progress," said Soitec's Maleville. "Different tools are available on the market, while defect libraries are still being developed, essentially by each individual user. The lack of a globally accepted method and a standard defect library for surface defect analysis is slowing down the GaN surface qualification process."

Whether it involves a manufacturing test failure or a field return, the necessary steps for determining root cause on a problematic packaged part begin with fault isolation. "Given the direct nature of the bandgap of GaN and its operating window in terms of voltage/frequency/power density, classical methods of fault isolation (e.g. optical emission spectroscopy) are forced to focus on different wavelengths and different ranges of excitation of the typical electrical defects," said Thermo Fisher's Mani. "Hot carrier pairs are just one example, which highlights the radical difference between GaN and silicon devices."

In addition to fault isolation there are challenges in creating a device cross-section with focused-ion beam milling methods.

“Several challenges exist in FA for GaN power ICs,” said Zeiss’ Taraci. “In any completed device, in particular, there are numerous materials and layers present for stress mitigation/relaxation and thermal management, depending on whether we are talking enhancement- or depletion-mode devices. Length-scale can be difficult to manage as you are working with these samples, because they have structures of varying dimension present in close proximity. Many of the structures are quite unique to power GaN and can pose challenges themselves in cross-section and analyses. Beam-milling approaches have to be tailored to prevent heavy re-deposition and masking, and are dependent on material, lattice orientation, current, geometry, etc.”

Conclusion
To succeed in bringing new GaN power ICs into new application spaces, engineers and their equipment suppliers need faster process development and lower overall costs. For HEMT devices, that means understanding the deposited layers and their material properties. This requires a host of metrology, inspection, test, and failure analysis steps to comprehend the issues, and to feed data from experiments and qualifications back into process and design improvements.

References

[1] M. Buffolo et al., “Review and Outlook on GaN and SiC Power Devices: Industrial State-of-the-Art, Applications, and Perspectives,” in IEEE Transactions on Electron Devices, March 2024, open access, https://ieeexplore.ieee.org/document/10388225

[2] High electron mobility transistor (HEMT) https://en.wikipedia.org/wiki/High-electron-mobility_transistor

[3] Guideline to specify a transient off-state withstand voltage robustness indicated in datasheets for lateral GaN power conversion devices, JEP186, version 1.0, December 2021. https://www.jedec.org/standards-documents/docs/jep186

Related Stories

Ramping Up Power Electronics For EVs

SiC Growth For EVs Is Stressing Manufacturing

GaN ICs Wanted For Power, EV Markets

Architecting Chips For High-Performance Computing

Power Semiconductors: 2023

The post Driving Cost Lower and Power Higher With GaN appeared first on Semiconductor Engineering.

Quantum Leap: Sydney’s Leading Role in the Next Tech Wave

From: BESydney


This is a sponsored article brought to you by BESydney.

Australia plays a crucial role in global scientific endeavours, and its contributions are recognized and valued worldwide. Despite comprising only 0.3 percent of the world's population, it has produced over 4 percent of the world's published research.

Renowned for collaboration, Australian scientists work across disciplines and with international counterparts to achieve impactful outcomes. Notably excelling in medical sciences, engineering, and biological sciences, Australia also has globally recognized expertise in astronomy, physics and computer science.

As the country's innovation hub, Sydney is leveraging its robust scientific infrastructure, world-class universities and vibrant ecosystem to make its mark on this burgeoning industry.

The city’s commitment to quantum research and development is evidenced by its groundbreaking advancements and substantial government support, positioning it at the forefront of the quantum revolution.

Sydney’s blend of academic excellence, industry collaboration and strategic government initiatives is creating a fertile ground for cutting-edge quantum advancements.

Sydney’s quantum ecosystem

Sydney’s quantum industry is bolstered by the Sydney Quantum Academy (SQA), a collaboration between four top-tier universities: University of NSW Sydney (UNSW Sydney), the University of Sydney (USYD), University of Technology Sydney (UTS), and Macquarie University. SQA integrates over 100 experts, fostering a dynamic quantum research and development environment.

With strong government backing, Sydney is poised for significant growth in quantum technology, with a projected A$2.2 billion industry value and 8,700 jobs by 2030. The SQA's mission is to cultivate a quantum-literate workforce, support industry partnerships and accelerate the development of quantum technology.

Professor Hugh Durrant-Whyte, NSW Chief Scientist and Engineer, emphasizes Sydney’s unique position: “We’ve invested in quantum for 20 years, and we have some of the best people at the Quantum Academy in Sydney. This investment and talent pool make Sydney an ideal place for pioneering quantum research and attracting global talent.”

Key institutions and innovations

UNSW's Centre of Excellence for Quantum Computation and Communication Technology is at the heart of Sydney's quantum advancements. Led by Scientia Professor Michelle Simmons AO, the founder and CEO of Silicon Quantum Computing, the centre is pioneering efforts to develop the world's first practical quantum computer. The team is at the vanguard of precision atomic electronics, fabricating devices in silicon that are pivotal for both conventional and quantum computing, and it has created the narrowest conducting wires and the smallest precision transistors.

“We can now not only put atoms in place but can connect complete circuitry with atomic precision.” —Michelle Simmons, Silicon Quantum Computing

Simmons was named 2018 Australian of the Year and won the 2023 Prime Minister's Prize for Science for her work in creating the new field of atomic electronics. She is an Australian Research Council Laureate Fellow and a Fellow of the Royal Society of London, the American Academy of Arts and Sciences, the American Association for the Advancement of Science, the UK Institute of Physics, the Australian Academy of Technology and Engineering, and the Australian Academy of Science.

In response to her 2023 accolade, Simmons said: “Twenty years ago, the ability to manipulate individual atoms and put them where we want in a device architecture was unimaginable. We can now not only put atoms in place but can connect complete circuitry with atomic precision—a capability that was developed entirely in Australia.”

The Design Futures Lab at UNSW in Sydney, Australia, is a hands-on teaching and research lab that aims to inspire exploration, innovation, and research into fabrication, emerging technologies, and design theories. Source: UNSW

Government and industry support

In April 2024, the Australian Centre for Quantum Growth program, part of the National Quantum Strategy, provided a substantial four-year grant to support the quantum industry’s expansion in Australia. Managed by the University of Sydney, the initiative aims to establish a central hub that fosters industry growth, collaboration, and research coordination.

This centre will serve as a primary resource for the quantum sector, enhancing Australia’s global competitiveness by promoting industry-led solutions and advancing technology adoption both domestically and internationally. Additionally, the centre will emphasise ethical practices and security in the development and application of quantum technologies.

Additionally, Sydney hosts several leading quantum startups, such as Silicon Quantum Computing, Quantum Brilliance, Diraq and Q-CTRL, which focus on improving the performance and stability of quantum systems.

Educational excellence

Sydney’s universities are globally recognized for their contributions to quantum research. They nurture future quantum leaders, and their academic prowess attracts top talent and fosters a culture of innovation and collaboration.


UNSW Sydney, one of Sydney's universities, is ranked among the world's top 20 universities and boasts the largest concentration of academics working in AI and quantum technologies in Australia.

UNSW Sydney's Toby Walsh, a Laureate Fellow and Scientia Professor of Artificial Intelligence in the Department of Computer Science and Engineering, explains the significance of this academic strength: "Our students and researchers are at the cutting edge of quantum science. The collaborative efforts within Sydney's academic institutions are creating a powerhouse of innovation that is driving the global quantum agenda."

Sydney’s strategic investments and collaborative efforts in quantum technology have propelled the city to the forefront of this transformative field. With its unique and vibrant ecosystem, a blend of world-leading institutions, globally respected talent and strong government and industry support, Sydney is well-positioned to lead the global quantum revolution for the benefit of all. For more information on Sydney’s science and engineering industries visit besydney.com.au.

AI's Cozy Crony Capitalism

An image generated using the prompt, “Gutenberg’s printing press incorporating artificial intelligence as an etching." | Illustration: Joanna Andreasson/Midjourney

In May 2023, OpenAI founder Sam Altman testified before the Senate Judiciary Committee about ChatGPT. Altman demonstrated how his company's tool could massively reduce the cost of retrieving, processing, conveying, and perhaps even modifying the collective knowledge of mankind as stored in computer memories worldwide. A user with no special equipment or access can request a research report, story, poem, or visual presentation and receive in a matter of seconds a written response.

Because of ChatGPT's seemingly vast powers, Altman called for government regulation to "mitigate the risks of increasingly powerful AI systems" and recommended that U.S. or global leaders form an agency that would license AI systems and have the authority to "take that license away and ensure compliance with safety standards." Major AI players around the world quickly roared approval of Altman's "I want to be regulated" clarion call.

Welcome to the brave new world of AI and cozy crony capitalism, where industry players, interest groups, and government agents meet continuously to monitor and manage investor-owned firms.

Bootleggers and Baptists Have a 'Printing Press Moment'

ChatGPT has about 100 million weekly users worldwide, according to Altman. Some claim it had the most successful launch of a consumer product in history, and Altman anticipates far more future users. He's now seeking U.S. government approval to raise billions from United States, Middle East, and Asian investors to build a massive AI-chip manufacturing facility.

In his testimony, Altman referred to Johannes Gutenberg's printing press, which enabled the Enlightenment, revolutionized communication, and—following the dictum "knowledge is power"—destabilized political and religious regimes worldwide. Altman suggested that, once again, the world faces a "printing press moment": another time of profound change that could bring untold benefits as well as unimaginable disturbances to human well-being. A related letter signed by Altman, other AI industry executives, and scores of other leaders in the field, underlined their profound concern and said: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Altman's call for regulation also has a parallel in the early history of printing. Something similar to the licensing control suggested by Altman was used by Queen Elizabeth I in 16th century England. She assigned transferable printing rights to a particular printing guild member in a frustrated effort to regulate and censor printing.

Altman's moral appeal rests on the notion of preserving a nation of people or their way of life. In that way, it satisfies the "Baptist" component of the bootleggers and Baptists theory of regulation, which I developed decades ago and which explains how calls to "regulate me" can be as much about businesses earning extra profits as about the public good.

An especially durable form of government regulation takes hold when there are at least two interest groups supporting it, but for decidedly different reasons. One supports the pending regulation for widely held moral reasons (like old-school Baptists who want to bar legal Sunday liquor sales). The other is in it for the money (like the bootleggers who see opportunity in a day without any legal competition).

Altman's "Baptist" altar call may well be altruistic—who can know?—but it shows his hand as a potential bootlegger, too. After all, a government-sheltered AI market could provide a first-mover advantage as his entity helps to determine the appropriate standards that will be applied to everyone else. It could yield a newer, cozier crony capitalism that has not previously existed in quite the same form.

Apparently, some other potential bootleggers heard the altar call too and liked the idea of being in a cozy cartel. Soon, Microsoft insisted that thoroughgoing government regulation of AI would be necessary. In another Baptist-like appeal, Microsoft President Brad Smith said: "A licensing regime is fundamentally about ensuring a certain baseline of safety, of capability. We have to prove that we can drive before we get a license. If we drive recklessly, we can lose it. You can apply those same concepts, especially to AI uses that will implicate safety."

Google felt the call as well and provided a statement recommending AI regulation on a global and cross-agency basis. Google CEO Sundar Pichai emphasized, "AI is too important not to regulate and too important not to regulate well." Another February 2024 policy statement from Google contains a litany of plans to cooperate with competitors and government agencies to advance a safe and effective generative AI environment.

How AI Will Regulate AI

In a December 2023 report on U.S. government agency activity for FY 2022, the Government Accountability Office indicated that 20 of the 23 agencies surveyed reported some 1,200 AI activities, including everything from analyzing data from cameras and radar to preparing for planetary explorations.

Bootlegger and Baptist–inspired regulatory episodes in the past typically involved in-depth studies and hearings and ended up with lasting rules. That was the case through most of the 20th century, when the Interstate Commerce Commission regulated prices, entry, and service in specific industries, and in the 1970s, when the Environmental Protection Agency was equipped with command-and-control regulations in an alleged attempt to correct market failures.

Sometimes it took years to develop regulations to address a particular environmental problem. By the time the final rules were announced and frozen in time, the situation to be resolved may have changed fundamentally.

Cheap information enabled by generative AI changes all this. The generative AI regulatory process, now pending in California, in the federal government, and in other governments worldwide, so far favors ongoing, never-ending governance processes. These will focus on major generative AI producers but involve collaboration among industry leaders, consumer and citizen groups, scientists, and government officials, all engaged in a newly blossomed cozy cronyism. Under the European Union's AI Act, an AI Office will preside over processes that continually steer and affect generative AI outcomes.

If more traditional command-and-control or public utility regulation were used, which raises its own challenges, AI producers would be allowed to operate within a set of regulatory guardrails while responding to market incentives and opportunities. They would not be required to engage in cooperative engagement with regulators, their own competitors, and other supposed stakeholders in this larger enterprise. In this AI world, bootleggers and Baptists now sit together in the open.

Specifically, this burgeoning approach to AI regulation requires the implementation of "sandboxes," where regulated parties join with regulators and advisers in exploring new algorithms and other AI products. While sandboxes have some beneficial things to offer industries in new or uncertain regulatory environments, members of the unusually collaborative AI industry are poised to learn what competitors are developing as government referees look on.

The risk, should this environment persist, is that radically new products and approaches in the arena never get a chance to be developed and benefit consumers. The incentives to discover new AI products and profit from them will be blunted by cartel-like behavior in a new bootleggers and Baptists world. Countries and firms that refuse to play by these rules would likely become the only fountainhead for major new AI developments.

The End of the AI Wild West

No matter what form of regulation holds sway, generative AI is out of the box and will not go away. Already, computer capacity requirements for processing the software are falling, and already, generative AI's applications to new information challenges are exploding. At the same time, government agents' ability to regulate is improving and the payoff to the regulated for working hand-in-hand with government and organized interest groups is growing.

Generative AI's Wild West days may be drawing to a close. That is the period when invention, expansion, and growth can occur unconstrained by regulation, bounded only by local courts, judges, property rights, and common law. Novel developments will still occur, but at a slower pace. The generative AI printing press moment may expand to become an era. But just as the Gutenberg invention led initially to the regulation and even outlawing of the printing press across many major countries, new information technologies emerged anyway and effective regulation of real knowledge became impossible.

A renaissance was set in motion despite official attempts to stymie the technology. Freer presses emerged, and eventually the telegraph, telephones, typewriters, mimeograph, Xerox, fax, and the internet itself. Knowledge flows cannot be stopped. We moderns may learn again that the human spirit cannot be forever bottled up, and over the long run market competition will be allowed, if not encouraged, to move mankind closer to a more prosperous era.

America's Founders were enlightened and well aware of Europe's troublesome efforts to regulate Gutenberg's printing press. They insisted on freedom of speech and a free press. We may eventually see similar wisdom applied to generative AI control—but don't hold your breath.

The post AI's Cozy Crony Capitalism appeared first on Reason.com.

Stephen Wolfram on the Powerful Unpredictability of AI

Photo: Julian Dufort/Midjourney; Joanna Andreasson/DALL-E4

Stephen Wolfram is, strictly speaking, a high school and college dropout: He left both Eton and Oxford early, citing boredom. At 20, he received his doctorate in theoretical physics from Caltech and then joined the faculty in 1979. But he eventually moved away from academia, focusing instead on building a series of popular, powerful, and often eponymous research tools: Mathematica, WolframAlpha, and the Wolfram Language. He self-published a 1,200-page work called A New Kind of Science arguing that nature runs on ultrasimple computational rules. The book enjoyed surprising popular acclaim.

Wolfram's work on computational thinking forms the basis of intelligent assistants, such as Siri. In an April conversation with Reason's Katherine Mangu-Ward, he offered a candid assessment of what he hopes and fears from artificial intelligence, and the complicated relationship between humans and their technology.

Reason: Are we too panicked about the rise of AI or are we not panicked enough?

Wolfram: Depends who "we" is. I interact with lots of people and it ranges from people who are convinced that AIs are going to eat us all to people who say AIs are really stupid and won't be able to do anything interesting. It's a pretty broad range.

Throughout human history, the one thing that's progressively changed is the development of technology. And technology is often about automating things that we used to have to do ourselves. I think the great thing technology has done is provide this taller and taller platform of what becomes possible for us to do. And I think the AI moment that we're in right now is one where that platform just got ratcheted up a bit.

You recently wrote an essay asking, "Can AI Solve Science?" What does it mean to solve science?

One of the things that we've come to expect is, science will predict what will happen. So can AI jump ahead and figure out what will happen, or are we stuck with this irreducible computation that has to be done where we can't expect to jump ahead and predict what will happen?

AI, as currently conceived, typically means neural networks that have been trained from data about what humans do. Then the idea is, take those training examples and extrapolate from those in a way that is similar to the way that humans would extrapolate.

Now can you turn that on science and say, "Predict what's going to happen next, just like you can predict what the next word should be in a piece of text"? And the answer is, well, no, not really.

One of the things we've learned from the large language models [LLMs] is that language is easier to predict than we thought. Scientific problems run right into this phenomenon I call computational irreducibility—to know what's going to happen, you have to explicitly run the rules.

Language is something we humans have created and use. The physical world is something that was just delivered to us. It's not something that we humans invented. And it turns out that neural nets work well on things that we humans invented. They don't work very well on things that are just sort of wheeled in from the outside world.

Probably the reason that they work well on things that we humans invented is that their actual structure and operation is similar to the structure and operation of our brains. It's asking a brainlike thing to do brainlike things. So yes, it works, but there's no guarantee that brainlike things can understand the natural world.

That sounds very simple, very straightforward. And that explanation is not going to stop entire disciplines from throwing themselves at that wall for a little while. This feels like it's going to make the crisis in scientific research worse before it gets better. Is that too pessimistic?

It used to be the case that if you saw a big, long document, you knew that effort had to be put into producing it. That suddenly became not the case. They could have just pressed a button and got a machine to generate those words.

So now what does it mean to do a valid piece of academic work? My own view is that what can be most built upon is something that is formalized.

For example, mathematics provides a formalized area where you describe something in precise definitions. It becomes a brick that people can expect to build on.

If you write an academic paper, it's just a bunch of words. Who knows whether there's a brick there that people can build on?

In the past we've had no way to look at some student working through a problem and say, "Hey, here's where you went wrong," except for a human doing that. The LLMs seem to be able to do some of that. That's an interesting inversion of the problem. Yes, you can generate these things with an LLM, but you can also have an LLM understand what was happening.

We are actually trying to build an AI tutor—a system that can do personalized tutoring using LLMs. It's a hard problem. The first things you try work for the two-minute demo and then fall over horribly. It's actually quite difficult.

What becomes possible is you can have the [LLM] couch every math problem in terms of the particular thing you are interested in—cooking or gardening or baseball—which is nice. It's a sort of a new level of human interface.

So I think that's a positive piece of what becomes possible. But the key thing to understand is the idea that an essay means somebody committed to write an essay is no longer a thing.

We're going to have to let that go.

Right. I think the thing to realize about AIs for language is that what they provide is kind of a linguistic user interface. A typical use case might be you are trying to write some report for some regulatory filing. You've got five points you want to make, but you need to file a document.

So you make those five points. You feed it to the LLM. The LLM puffs out this whole document. You send it in. The agency that's reading it has their own LLM, and they're asking their LLM, "Find out the two things we want to know from this big regulatory filing." And it condenses it down to that.

So essentially what's happened is you've used natural language as a sort of transport layer that allows you to interface one system to another.

I have this deeply libertarian desire to say, "Could we skip the elaborate regulatory filing, and they could just tell the five things directly to the regulators?"

Well, also it's just convenient that you've got these two systems that are very different trying to talk to each other. Making those things match up is difficult, but if you have this layer of fluffy stuff in the middle, that is our natural language, it's actually easier to get these systems to talk to each other.

I've been pointing out that maybe 400 years ago was sort of a heyday of political philosophy and people inventing ideas about democracy and all those kinds of things. And I think that now there is a need and an opportunity for a repeat of that kind of thinking, because the world has changed.

As we think about AIs that end up having responsibilities in the world, how do we deal with that? I think it's an interesting moment when there should be a bunch of thinking going on about this. There is much less thinking than I think there should be.

An interesting thought experiment is what you might call the promptocracy model of government. One approach is everybody writes a little essay about how they want the world to be, and you feed all those essays into an AI. Then every time you want to make a decision, you just ask the AI, based on all these essays that it read from all these people, "What should we do?"

One thing to realize is that in a sense, the operation of government is an attempt to make something like a machine. And in a sense, you put an AI in place rather than the human-operated machine, not sure how different it actually is, but you have these other possibilities.

The robot tutor and the government machine sound like stuff from the Isaac Asimov stories of my youth. That sounds both tempting and so dangerous when you think about how people have a way of bringing their baggage into their technology. Is there a way for us to work around that?

The point to realize is the technology itself has nothing. What we're doing with AI is kind of an amplified version of what we humans have.

The thing to realize is that the raw computational system can do many, many things, most of which we humans do not care about. So as we try and corral it to do things that we care about, we necessarily are pulling it in human directions.

What do you see as the role of competition in resolving some of these concerns? Does the intra-AI competition out there curb any ethical concerns, perhaps in the way that competition in a market might constrain behavior in some ways?

Interesting question. I do think that the society of AIs is more stable than the one AI that rules them all. At a superficial level it prevents certain kinds of totally crazy things from happening, but the reason that there are many LLMs is because once you know ChatGPT is possible, then it becomes not that difficult at some level. You see a lot of both companies and countries stepping up to say, "We'll spend the money. We'll build a thing like this." It's interesting what the improvement curve is going to look like from here. My own guess is that it goes in steps.

How are we going to screw this up? And by "we," I mean maybe people with power, maybe just general human tendencies, and by "this," I mean making productive use of AI.

The first thing to realize is AIs will be suggesting all kinds of things that one might do just as a GPS gives one directions for what one might do. And many people will just follow those suggestions. But one of the features it has is you can't predict everything about what it will do. And sometimes it will do things that aren't things we thought we wanted.

The alternative is to tie it down to the point where it will only do the things we want it to do and it will only do things we can predict it will do. And that will mean it can't do very much.

We arguably do the same thing with human beings already, right? We have lots of rules about what we don't let people do, and sometimes we probably suppress possible innovation on the part of those people.

Yes, that's true. It happens in science. It's a "be careful what you wish for" situation because you say, "I want lots of people to be doing this kind of science because it's really cool and things can be discovered." But as soon as lots of people are doing it, it ends up getting this institutional structure that makes it hard for new things to happen.

Is there a way to short circuit that? Or should we even want to?

I don't know. I've thought about this for basic science for a long time. Individual people can come up with original ideas. By the time it's institutionalized, that's much harder. Having said that: As the infrastructure of the world, which involves huge numbers of people, builds up, you suddenly get to this point where you can see some new creative thing to do, and you couldn't get there if it was just one person beavering away for decades. You need that collective effort to raise the whole platform.

This interview has been condensed and edited for style and clarity.

The post Stephen Wolfram on the Powerful Unpredictability of AI appeared first on Reason.com.

SpaceX Edges Closer to the Moon

Photo: Creative Market; Joanna Andreasson/DALL-E4

In the June 2024 issue, we explore the ways that artificial intelligence is shaping our economy and culture. The stories and art are about AI—and occasionally by AI. (Throughout the issue, we have rendered all text generated by AI-powered tools in blue.) To read the rest of the issue, go here.

ARTEMIS II is a crewed moon flyby mission, the first in a series of missions meant to get American astronauts back to the moon and eventually to Mars. In early January, NASA announced that it would be delayed until September 2025—a year later than originally planned. The announcement came after Lockheed Martin's Orion capsule, which will carry the crew through space, burned more than anticipated during a reentry test. The project continues to balloon in price with seemingly endless delays.

The latest delay means Artemis III, a crewed lunar landing mission, will be delayed until 2026—at least. It seems doubtful NASA will be able to apply what it learns from Artemis II to an Artemis III mission in less than a year.

Orion is not the only element that could hold up Artemis indefinitely. The later Artemis missions are relying on SpaceX's Starship—but Federal Aviation Administration (FAA) investigations can add weeks or months between Starship tests. So far SpaceX has been able to test the spacecraft three times, most recently on March 14.

The test did not completely fulfill its mission as intended but was still deemed a success by both SpaceX founder Elon Musk and NASA Administrator Bill Nelson. Both rocket and spacecraft were supposed to have controlled splashdowns in wide-open waters, but the rocket disintegrated about 1,500 feet over the Gulf of Mexico, while the separate Starship spacecraft burned up during reentry.

The good news is that none of the previous issues flagged by the FAA occurred on this latest test. But with every rocket mishap, the FAA will open a new investigation, with new parameters for SpaceX to meet before the next test.

In January, Jessica Jensen, a vice president at SpaceX, told NASA's Nelson that the company will need "ten-ish" tests to get the rocket perfect for Artemis III. It took the FAA two years to approve the first test of Starship and its second test wasn't approved until nearly five months later. The FAA investigation into the second Starship flight test (which resulted in a successful separation of rocket and spacecraft but ended in both exploding soon after) took more than three months.

Musk is hoping to run six Starship test launches this year, but there's no telling how long the FAA will drag its feet between tests. Each FAA delay is another delay for SpaceX, Artemis, and man's return to the moon.


(Illustration: Joanna Andreasson/DALL-E4)
  • What we asked for:
    A photo of the Artemis II rocket on launchpad in an open landscape at dawn from slightly above
  • What DALL-E says it gave us:
    Here's the Artemis II rocket on the launchpad in an open landscape at dawn, with the first light of the day illuminating the scene, captured from a slight elevation.

Launch Approved? Not So Fast, Says Sluggish FAA

Reason asked Claude 3 Sonnet to write an article about the same topic as the human-written article above. All AI-generated text is shown in blue. Read more about what we asked for.
By Claude 3 Sonnet

Most Americans are eager to see NASA astronauts return to the moon and push humanity's boundaries with future exploration of Mars. But those sky-high ambitions are being severely grounded by the plodding pace of rocket launch approvals from the Federal Aviation Administration (FAA).

Every vehicle soaring into space must first secure licensing from the FAA, an oversight process intended to ensure safety. For SpaceX, the current industry leader launching crews and payloads roughly every four days, the government's bureaucratic inertia has become a highly problematic bottleneck.

This red tape has directly impacted testing of SpaceX's Starship, the vehicle that NASA is relying upon to carry cargo and crew for the Artemis program's later missions to the lunar surface. To date, there have been three orbital test flights of the massive rocket–in April and November 2023, and March of this year.

Each of those Starship test launches prompted the FAA to launch a mishap investigation stalling subsequent flights. After the first test in April 2023, the FAA probe lasted five months before approving the next launch[1]. The investigation following November's flight dragged on over three months[2]. Incredibly, it took the FAA more than two years just to greenlight that initial Starship test back in 2023[3].

Jessica Jensen, a SpaceX vice president, has stated that approximately "10-ish" Starship test launches are needed before the system will be proven and ready to support the crewed Artemis III moon mission[4]. At the FAA's current pace of reviewing and approving those test flights, NASA's hope of returning Americans to the lunar surface by 2025 is looking increasingly unlikely.

"We'd very much like the government to be able to move as quickly as we are," Tim Hughes, SpaceX's senior vice president, told the Washington Post[5]. "If you're able to build a rocket faster than the government can regulate it, that's upside down, and that needs to be addressed. So we think some regulatory reforms are needed."

SpaceX has suggested doubling the FAA's licensing staff as one possible solution. At this point, any efforts to "streamline the cumbersome approval process," as Hughes put it, would help accelerate both the private space industry and NASA's exploration roadmap[5].

"Next year could be a pretty dynamic time with lots of providers in spaceflight," a SpaceX official told Ars Technica[6]. But unless the FAA addresses its pacing problems soon, the agency will continue irreparably hindering those new aerospace opportunities—and keep kicking the can down the road on NASA's bid to take astronauts farther into the cosmos.

CITATION KEY:
[1] SpaceX internal record
[2] FAA public statement, Dec 2023
[3] FAA memo, April 2023
[4] Jensen quote to NASA Admin.
[5] Hughes interview, Washington Post
[6] SpaceX statement to Ars Technica

CLAUDE 3 SONNET is a free large language model developed by Anthropic, an AI startup. 

  • What we asked for:
    An article about FAA mishap investigations delaying launch tests of SpaceX's Starship, and, as a result, delaying NASA's Artemis missions and man's return to the moon.
  • What Claude asked for and we provided:
    1. What specific SpaceX mishaps or incidents is the FAA investigating that are causing delays?

    2. How much of a delay are we talking about for Starship test launches and the Artemis missions?
    3. Do you have any quotes from officials at SpaceX, NASA, or the FAA commenting on the situation?
  • What else we gave Claude:
    1. A writing sample for analysis and replication: Natalie Dowzicky's October 2023 reason.com article headlined "SpaceX: FAA Is Slowing Progress to the Moon"

    2. For the headline: six headlines from the TOPICS section of the May issue.

—JASON RUSSELL

The post SpaceX Edges Closer to the Moon appeared first on Reason.com.

DeSantis Signs Bill Banning Lab-Grown Meat in Florida

By Emma Camp
Ron DeSantis and lab-grown meat | Pedro Portal/TNS/Newscom; Just Eat, Inc.

On Wednesday, Florida Gov. Ron DeSantis (R) signed a bill banning the sale or production of lab-grown meat in the state. While a press release framed the bill as an attempt to advance Floridians' freedom by protecting them from the "World Economic Forum's goal of forcing the world to eat lab-grown meat and insects," all the legislation really does is stifle competition for the state's meat producers.

"Today, Florida is fighting back against the global elite's plan to force the world to eat meat grown in a petri dish or bugs to achieve their authoritarian goals," DeSantis said in a Wednesday press release. "Our administration will continue to focus on investing in our local farmers and ranchers, and we will save our beef."

Cultivated, or "lab-grown," meat has been available in the United States on an extremely limited basis—generally at individual restaurants—since last year, after the Food and Drug Administration approved two different kinds of cultivated chicken for sale.

However, despite DeSantis' supposed fears about a lab-grown meat takeover, the small cultivated meat industry is struggling. The product isn't currently available anywhere in the United States, let alone in Florida.

Nonetheless, the governor signed Senate Bill 1084, which enacts a wide-ranging ban on cultivated meat, making it illegal "for any person to manufacture for sale, sell, hold or offer for sale, or distribute cultivated meat in" Florida. Violators of the law face misdemeanor penalties, and businesses caught selling the product could have their licenses suspended.

"We must protect our incredible farmers and the integrity of American agriculture," Florida Commissioner of Agriculture Wilton Simpson said in the press release. "Lab-grown meat is a disgraceful attempt to undermine our proud traditions and prosperity, and is in direct opposition to authentic agriculture."

However, it seems DeSantis is the real authoritarian in this situation. Instead of letting Floridians decide for themselves whether they want to try lab-grown meat, DeSantis is having the state step in, all in the name of protecting Floridians from an imaginary threat to their freedom.

Florida's lab-grown meat ban is a perfect marriage of protectionism and the culture war. By framing the tiny lab-grown meat industry as a left-wing threat, DeSantis can justify giving government kickbacks to the meat industry, all while protecting meat producers from a source of future competition. Wednesday's press release goes so far as to brag about a litany of recently passed legislation that "support[s] the state's agriculture and meat industry."

Unfortunately, Florida isn't the first state to ban cultivated meat. Alabama passed a ban on lab-grown meat last month, and legislation to ban the product is pending in Arizona and Tennessee. Italy banned it last year.

The post DeSantis Signs Bill Banning Lab-Grown Meat in Florida appeared first on Reason.com.

We Can't Imagine the Future of AI

Illustration: Joanna Andreasson/DALL-E4

In the June 2024 issue, we explore the ways that artificial intelligence is shaping our economy and culture. The stories and art are about AI—and occasionally by AI. (Throughout the issue, we have rendered all text generated by AI-powered tools in blue.) To read the rest of the issue, go here.

Vernor Vinge was the bard of artificial intelligence, a novelist and mathematician who devoted his career to imagining the nearly unimaginable aftermath of the moment when technology outpaces human capability. He died in March, as we were putting together Reason's first-ever AI issue, right on the cusp of finding out which of his fanciful guesses would turn out to be right.

In 2007, Reason interviewed Vinge about the Singularity—the now slightly out-of-favor term he popularized for that greater-than-human intelligence event horizon. By that time the author of A Fire Upon the Deep and A Deepness in the Sky had, for years, been pinning the date of the Singularity somewhere between 2005 and 2030. To Reason, he offered a softer prediction: If the rapid doubling of processing power known as Moore's law "continues for a decade or two," that "makes it plausible that very interesting A.I. developments might occur before 2030."

That prophecy, at least, has already come true.

Innovation in AI is happening so quickly that the landscape changed dramatically even from the time Reason conceived this issue to the time you are reading it. As a consequence, this particular first draft of history is likely to become rapidly, laughably outdated. (You can read some selections from our archives on the topic.) As we worked on this issue, new large language models (LLMs) and chatbots cropped up every month, image generation went from producing amusing curiosities with the wrong number of fingers to creating stunningly realistic video from text prompts, and the ability to outsource everything from coding tasks to travel bookings went from a hypothetical to a reality. And those were just the free or cheap tools available to amateurs and journalists.

Throughout the issue, we have rendered all text generated by AI-powered tools in blue. Why? Because when we asked ChatGPT to tell us the color of artificial intelligence, that's what it picked:

The color that best encapsulates the idea of artificial intelligence in general is a vibrant shade of blue. Blue is often associated with intelligence, trust, and reliability, making it an ideal color to represent the concept of AI. It also symbolizes the vast potential and endless possibilities that AI brings to the world of technology.

Yet the very notion that any kind of bright line can be drawn between human- and machine-generated content is almost certainly already obsolete.

Reason has a podcast read by a version of my voice that is generated entirely artificially. Our producers use dozens of AI tools to tweak, tidy, and improve our video. A few images generated using AI have appeared in previous issues—though they run rampant in this issue, with captions indicating how they were made. I suspect one of our web developers is just three AIs in a trenchcoat. In this regard, Reason is utterly typical in how fast we have incorporated AI into our daily business.

The best we can offer is a view from our spot, nestled in the crook of an exponential curve. Vinge and others like him long believed themselves to be at such an inflection point. In his 1993 lecture "The Coming Technological Singularity: How To Survive in the Post-Human Era," Vinge said: "When I began writing science fiction in the middle '60s, it seemed very easy to find ideas that took decades to percolate into the cultural consciousness; now the lead time seems more like 18 months." That lead time is now measured in minutes, so he may have been onto something. This issue is an attempt to capture this moment when the possibilities of AI are blooming all around us—and before regulators have had a chance to screw it up.

"Except for their power to blow up the world," Vinge mused in 2007, "I think governments would have a very hard time blocking the Singularity. The possibility of governments perverting the Singularity is somewhat more plausible to me."

They are certainly trying. As Greg Lukianoff of the Foundation for Individual Rights and Expression testified at a February congressional hearing about AI regulation: "Yes, we may have some fears about the proliferation of AI. But what those of us who care about civil liberties fear more is a government monopoly on advanced AI. Or, more likely, regulatory capture and a government-empowered oligopoly that privileges a handful of existing players….Far from reining in the government's misuse of AI to censor, we will have created the framework not only to censor but also to dominate and distort the production of knowledge itself."

Those new pathways for knowledge production and other unexpected outcomes are the most exciting prospects for AI, and the ones Vinge toyed with for decades. What's most interesting is not what AI will do to us, or for us; it's what AI will do that we can barely imagine.

As the physicist and engineer Stephen Wolfram says, "One of the features [AI] has is you can't predict everything about what it will do. And sometimes it will do things that aren't things we thought we wanted. The alternative is to tie it down to the point where it will only do the things we want it to do and it will only do things we can predict it will do. And that will mean it can't do very much."

Even as we worry about the impact of AI on art, sex, education, health care, labor, science, movies, and war, it is Vinge's imaginative, nonjudgmental vision that should inspire us.

"I think that if the Singularity can happen, it will," Vinge told Reason in 2007. "There are lots of very bad things that could happen in this century. The Technological Singularity may be the most likely of the noncatastrophes."


An image generated using the prompt, "Illustration of AI as a doctor, teacher, poet, scientist,
warlord, actor, journalist, artist, and coder." (Illustration: Joanna Andreasson/DALL-E4)

Key AI Terms

By Claude 3 Opus

AI (Artificial Intelligence): The simulation of human intelligence processes by machines, especially computer systems, including learning, reasoning, and self-correction.

Gen AI (Generative AI): A subset of AI that creates new content, such as text, images, audio, and video, based on patterns learned from training data.

Prompt: In the context of AI, a prompt is a piece of text, an image, or other input data provided to an AI system to guide its output or response.

LLM (Large Language Model): A type of AI model trained on vast amounts of text data, capable of understanding and generating human-like text based on the input it receives.

Neural Net (Neural Network): A computing system inspired by the biological neural networks in the human brain, consisting of interconnected nodes that process and transmit information, enabling the system to learn and make decisions.

GPT (Generative Pre-trained Transformer): A type of large language model developed by OpenAI, trained on a diverse range of internet text to generate human-like text, answer questions, and perform various language tasks.

Hallucination: In AI, hallucination refers to an AI system generating output that is not grounded in reality or its training data, often resulting in nonsensical or factually incorrect statements.

Compute: Short for computational resources, such as processing power and memory, required to run AI models and perform complex calculations.

Turing Test: A test proposed by Alan Turing to determine whether a machine can exhibit intelligent behavior indistinguishable from that of a human, where a human evaluator engages in a conversation with both a human and a machine and tries to distinguish between them based on their responses.

Machine Learning: A subset of AI that focuses on the development of algorithms and statistical models that enable computer systems to improve their performance on a specific task through experience and data, without being explicitly programmed.

CLAUDE 3 OPUS is a subscription-supported large language model developed by Anthropic, an AI startup. 

The post We Can't Imagine the Future of AI appeared first on Reason.com.

The Best of Reason: In the AI Economy, There Will Be Zero Percent Unemployment


This week's featured article is "In the AI Economy, There Will Be Zero Percent Unemployment" by Andrew Mayne.

This audio was generated using AI trained on the voice of Katherine Mangu-Ward.

Music credits: "Deep in Thought" by CTRL and "Sunsettling" by Man with Roses

The post The Best of Reason: In the AI Economy, There Will Be Zero Percent Unemployment appeared first on Reason.com.


Palworld Creator Loves That Others Are Trying To Clone The Game

We’ve had several posts on the video game sensation that is Palworld in the past. Given that the game has been described by others as “Pokémon, but with guns,” we kicked things off by wondering whether Nintendo would try to take some kind of misguided legal action against the game, while also pointing out that the game is an excellent case study in copyright’s idea/expression dichotomy. After all, the game does not directly copy any Pokémon IP, but it does draw obvious inspiration from some of the base ideas behind that IP. In fact, highlighting the dichotomy further was a mod that injected actual Pokémon IP into Palworld, which Nintendo then managed to get taken down.

One of the things writers of this sort of content like me tend to fret about, however, is how often rank hypocrisy suddenly shows up among subjects such as the creators behind Palworld. It’s not uncommon to see a content creator attempt to go after folks doing to them exactly what the creator did in drawing inspiration from others. If you were worried the people behind Palworld would fall into this category, however, it appears very much that you were worried for nothing.

With the success of the game, it was only a matter of time before someone, or many someones, tried to cash in on its success by making similar games, or “clones.” PocketPair CEO Takuro Mizobe noticed this was happening with Palworld and reacted thusly.

“Tencent is already making a Palworld clone game!” PocketPair CEO Takuro Mizobe recently tweeted, according to a translation by Automaton. He seemed happy about it. “These are incredible times,” he wrote. Some initially interpreted Mizobe as being critical of these moves. An IGN story described him as accusing other companies of ripping off Palworld, a framing the CEO rejected.

“To ‘accuse’ someone of something, means to say they are doing something wrong,” Mizobe wrote in a follow-up tweet responding to the IGN story. “I don’t think what Tencent is doing is wrong. I’m proud that other companies want to make games like Palworld. The industry historically innovates when we borrow ideas from games we love. I’m surprised that many high-quality mobile games are already in development.”

No going legal. No threats. Not even a hint of a complaint. Instead, Mizobe acknowledged what we all already know to be true: video games, like other forms of culture, are and have always been built on what came before it. If the success of Palworld spawns similar games after the fact, that’s not only not a problem, it’s a good thing for gaming culture. Hell, Mizobe even went so far as to praise some of these games’ quality.

Imagine Nintendo doing anything like this. You simply can’t. In fact, when Palworld was released, Nintendo made some vague comments about looking into the game to see if it wanted to pursue any legal action. You know, the exact opposite of the route Mizobe took.

Who knows if these new Palworld clones that Tencent and others are apparently developing will ever see the light of day. We won’t know if they’re actually rip-offs until they’re out, but Mizobe doesn’t seem to mind either way.

And why should he? I imagine he’s far too busy counting all the money his company is making by focusing on making a successful game rather than wringing his hands over some clones that may or may not ever gain any traction.

Why Chiplets Are So Critical In Automotive

Od: John Koon

Chiplets are gaining renewed attention in the automotive market, where increasing electrification and intense competition are forcing companies to accelerate their design and production schedules.

Electrification has lit a fire under some of the biggest and best-known carmakers, which are struggling to remain competitive in the face of very short market windows and constantly changing requirements. Unlike in the past, when carmakers typically ran on five- to seven-year design cycles, the latest technology in vehicles today may well be considered dated within several years. And if they cannot keep up, there is a whole new crop of startups producing cheap vehicles with the ability to update or change out features as quickly as a software update.

But software has speed, security, and reliability limitations, and being able to customize the hardware is where many automakers are now putting their efforts. This is where chiplets fit in, and the focus now is on how to build enough interoperability across large ecosystems to make this a plug-and-play market. The key factors to enable automotive chiplet interoperability include standardization, interconnect technologies, communication protocols, power and thermal management, security, testing, and ecosystem collaboration.

Similar to non-automotive applications at the board level, many design efforts are focusing on a die-to-die approach, which is driving a number of novel design considerations and tradeoffs. At the chip level, the interconnects between various processors, chips, memory, and I/O are becoming more complex due to increased design performance requirements, spurring a flurry of standards activities. Different interconnect and interface types have been proposed to serve varying purposes, while emerging chiplet technologies for dedicated functions — processors, memories, and I/Os, to name a few — are changing the approach to chip design.

“There is a realization by automotive OEMs that to control their own destiny, they’re going to have to control their own SoCs,” said David Fritz, vice president of virtual and hybrid systems at Siemens EDA. “However, they don’t understand how far along EDA has come since they were in college in 1982. Also, they believe they need to go to the latest process node, where a mask set is going to cost $100 million. They can’t afford that. They also don’t have access to talent because the talent pool is fairly small. With all that together comes the realization by the OEMs that to control their destiny, they need a technology that’s developed by others, but which can be combined however needed to have a unique differentiated product they are confident is future-proof for at least a few model years. Then it becomes economically viable. The only thing that fits the bill is chiplets.”

Chiplets can be optimized for specific functions, which can help automakers meet reliability, safety, and security requirements with technology that has been proven across multiple vehicle designs. In addition, they can shorten time to market and ultimately reduce the cost of different features and functions.

Demand for chips has been on the rise for the past decade. According to Allied Market Research, global automotive chip demand will grow from $49.8 billion in 2021 to $121.3 billion by 2031. That growth will attract even more automotive chip innovation and investment, and chiplets are expected to be a big beneficiary.
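Taken at face value, the endpoints of that forecast imply a compound annual growth rate of just over 9%. A quick sketch, using only the figures quoted above:

```python
# Implied compound annual growth rate (CAGR) from the forecast above:
# $49.8B (2021) -> $121.3B (2031), i.e. a 10-year span.
start, end, years = 49.8, 121.3, 10

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 9.3% per year
```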

But the marketplace for chiplets will take time to mature, and it will likely roll out in phases.  Initially, a vendor will provide different flavors of proprietary dies. Then, partners will work together to supply chiplets to support each other, as has already happened with some vendors. The final stage will be universally interoperable chiplets, as supported by UCIe or some other interconnect scheme.

Getting to the final stage will be the hardest, and it will require significant changes. To ensure interoperability, large enough portions of the automotive ecosystem and supply chain must come together, including hardware and software developers, foundries, OSATs, and material and equipment suppliers.

Momentum is building
On the plus side, not all of this is starting from scratch. At the board level, modules and sub-systems always have used onboard chip-to-chip interfaces, and they will continue to do so. Various chip and IP providers, including Cadence, Diode, Microchip, NXP, Renesas, Rambus, Infineon, Arm, and Synopsys, provide off-the-shelf interface chips or IP to create the interface silicon.

The Universal Chiplet Interconnect Express (UCIe) Consortium is the driving force behind the die-to-die, open interconnect standard. The group released its latest UCIe 1.1 specification in August 2023. Board members include Alibaba, AMD, Arm, ASE, Google Cloud, Intel, Meta, Microsoft, NVIDIA, Qualcomm, Samsung, and others. Industry partners are showing widespread support. AIB and Bunch of Wires (BoW) also have been proposed. In addition, Arm just released its own Chiplet System Architecture, along with an updated AMBA spec to standardize protocols for chiplets.

“Chiplets are already here, driven by necessity,” said Arif Khan, senior product marketing group director for design IP at Cadence. “The growing processor and SoC sizes are hitting the reticle limit and the diseconomies of scale. Incremental gains from process technology advances are lower than rising cost per transistor and design. The advances in packaging technology (2.5D/3D) and interface standardization at a die-to-die level, such as UCIe, will facilitate chiplet development.”

Nearly all of the chiplets used today are developed in-house by big chipmakers such as Intel, AMD, and Marvell, because they can tightly control the characteristics and behavior of those chiplets. But there is work underway at every level to open this market to more players. When that happens, smaller companies can begin capitalizing on what the high-profile trailblazers have accomplished so far, and innovating around those developments.

“Many of us believe the dream of having an off-the-shelf, interoperable chiplet portfolio will likely take years before becoming a reality,” said Guillaume Boillet, senior director strategic marketing at Arteris, adding that interoperability will emerge from groups of partners who are addressing the risk of incomplete specifications.

This also is raising the attractiveness of FPGAs and eFPGAs, which can provide a level of customization and updates for hardware in the field. “Chiplets are a real thing,” said Geoff Tate, CEO of Flex Logix. “Right now, a company building two or more chiplets can operate much more economically than a company building near-reticle-size die with almost no yield. Chiplet standardization still appears to be far away. Even UCIe is not a fixed standard yet. Not all agree on UCIe, bare die testing, and who owns the problem when the integrated package doesn’t work, etc. We do have some customers who use or are evaluating eFPGA for interfaces where standards are in flux like UCIe. They can implement silicon now and use the eFPGA to conform to standards changes later.”

There are other efforts supporting chiplets, as well, although for somewhat different reasons — notably, the rising cost of device scaling and the need to incorporate more features into chips, which are reticle-constrained at the most advanced nodes. But those efforts also pave the way for chiplets in automotive, and there is strong industry backing to make this all work. For example, under the sponsorship of SEMI, ASME, and three IEEE Societies, the new Heterogeneous Integration Roadmap (HIR) looks at various microelectronics design, materials, and packaging issues to come up with a roadmap for the semiconductor industry. Their current focus includes 2.5D, 3D-ICs, wafer-level packaging, integrated photonics, MEMS and sensors, and system-in-package (SiP), aerospace, automotive, and more.

At the recent Heterogeneous Integration Global Summit 2023, representatives from AMD, Applied Materials, ASE, Lam Research, MediaTek, Micron, Onto Innovation, TSMC, and others demonstrated strong support for chiplets. Another group that supports chiplets is the Chiplet Design Exchange (CDX) working group, which is part of the Open Domain Specific Architecture (ODSA) and the Open Compute Project Foundation (OCP). The CDX charter focuses on the various characteristics of chiplets and chiplet integration, including electrical, mechanical, and thermal design exchange standards for 2.5D stacked and 3D integrated circuits (3D-ICs). Its representatives include Ansys, Applied Materials, Arm, Ayar Labs, Broadcom, Cadence, Intel, Macom, Marvell, Microsemi, NXP, Siemens EDA, Synopsys, and others.

“The things that automotive companies want in terms of what each chiplet does in terms of functionality is still in an upheaval mode,” Siemens’ Fritz noted. “One extreme has these problems, the other extreme has those problems. This is the sweet spot. This is what’s needed. And these are the types of companies that can go off and do that sort of work, and then you could put them together. Then this interoperability thing is not a big deal. The OEM can make it too complex by saying, ‘I have to handle that whole spectrum of possibilities.’ The alternative is that they could say, ‘It’s just like a high-speed PCIe. If I want to communicate from one to the other, I already know how to do that. I’ve got drivers that are running my operating system.’ That would solve an awful lot of problems, and that’s where I believe it’s going to end up.”

One path to universal chiplet development?

Moving forward, chiplets are a focal point for both the automotive and chip industries, and that will involve everything from chiplet IP to memory interconnects and customization options and limitations.

For example, Renesas Electronics announced plans in November 2023 for its next-generation SoCs and MCUs. The company is targeting all major applications across the automotive digital domain, previewing its fifth-generation R-Car SoC for high-performance applications with advanced in-package chiplet integration technology, which is meant to give automotive engineers greater flexibility to customize their designs.

Renesas noted that if more AI performance is required in Advanced Driver Assistance Systems (ADAS), engineers will have the capability to integrate AI accelerators into a single chip. The company said this roadmap comes after years of collaboration and discussions with Tier 1 and OEM customers, which have been clamoring for a way to accelerate development without compromising quality, including designing and verifying the software even before the hardware is available.

“Due to the ever increasing needs to increase compute on demand, and the increasing need for higher levels of autonomy in the cars of tomorrow, we see challenges in monolithic solutions scaling and providing the performance needs of the market in the upcoming years,” said Vasanth Waran, senior director for SoC Business & Strategies at Renesas. “Chiplets allows for the compute solutions to scale above and beyond the needs of the market.”

Renesas announced plans to create a chiplet-based product family specifically targeted at the automotive market starting in 2025.

Standard interfaces allow for SoC customization
It is not entirely clear how much overlap there will be between standard processors, which is where most chiplets are used today, and chiplets developed for automotive applications. But the underlying technologies and developments certainly will build off each other as this technology shifts into new markets.

“Whether it is an AI accelerator or ADAS automotive application, customers need standard interface IP blocks,” noted David Ridgeway, senior product manager, IP accelerated solutions group at Synopsys. “It is important to provide fully verified IP subsystems around their IP customization requirements to support the subsystem components used in the customers’ SoCs. When I say customization, you might not realize how customizable IP has become over the course of the last 10 to 20 years, on the PHY side as well as the controller side. For example, PCI Express has gone from PCIe Gen 3 to Gen 4 to Gen 5 and now Gen 6. The controller can be configured to support multiple bifurcation modes of smaller link widths, including one x16, two x8, or four x4. Our subsystem IP team works with customers to ensure all the customization requirements are met. For AI applications, signal and power integrity is extremely important to meet their performance requirements. Almost all our customers are seeking to push the envelope to achieve the highest memory bandwidth speeds possible so that their TPU can process many more transactions per second. Whenever the applications are cloud computing or artificial intelligence, customers want the fastest response rate possible.”
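As a concrete illustration of the bifurcation modes Ridgeway mentions, a x16 controller can be split into equal power-of-two link widths. A minimal sketch of the enumeration (the helper function is illustrative, not part of any Synopsys API):

```python
# Enumerate equal-width bifurcations of a PCIe x16 controller:
# each mode splits the 16 lanes into equal power-of-two links.
def bifurcation_modes(total_lanes: int = 16) -> list[tuple[int, int]]:
    """Return (link_count, link_width) pairs that use all lanes."""
    modes = []
    width = total_lanes
    while width >= 1:
        modes.append((total_lanes // width, width))
        width //= 2
    return modes

print(bifurcation_modes(16))
# includes the modes cited above: one x16, two x8, or four x4
```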

Fig 1: IP blocks including processor, digital, PHY, and verification help developers implement the entire SoC. Source: Synopsys

Optimizing PPA serves the ultimate goal of increasing efficiency, and this makes chiplets particularly attractive in automotive applications. When UCIe matures, it is expected to improve overall performance dramatically. For example, UCIe can deliver a shoreline bandwidth of 28 to 224 GB/s/mm in a standard package, and 165 to 1317 GB/s/mm in an advanced package. This represents a performance improvement of 20- to 100-fold. Bringing latency down from 20ns to 2ns represents a 10-fold improvement. Around 10 times greater power efficiency, at 0.5 pJ/b (standard package) and 0.25 pJ/b (advanced package), is another plus. The key is shortening the interface distance whenever possible.
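The ratios implied by those figures can be sanity-checked directly; a quick sketch using only the numbers quoted in the paragraph above:

```python
# Sanity-check the UCIe improvement ratios quoted above.
latency_before_ns, latency_after_ns = 20.0, 2.0
latency_gain = latency_before_ns / latency_after_ns  # the 10-fold improvement

# Shoreline bandwidth density ranges (GB/s per mm of die edge):
std_pkg = (28, 224)      # standard package
adv_pkg = (165, 1317)    # advanced package
# Advanced packaging buys roughly 6x more bandwidth per mm at both
# ends of the quoted ranges:
print(latency_gain, adv_pkg[0] / std_pkg[0], adv_pkg[1] / std_pkg[1])
```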

To optimize chiplet designs, the UCIe Consortium provides some suggestions:

  • Careful planning consideration of architectural cut-lines (i.e. chiplet boundaries), optimizing for power, latency, silicon area, and IP reuse. For example, customizing one chiplet that needs a leading-edge process node while re-using other chiplets on older nodes may impact cost and time.
  • Thermal and mechanical packaging constraints need to be planned out for package thermal envelopes, hot spots, chiplet placements and I/O routing and breakouts.
  • Process nodes need to be carefully selected, particularly in the context of the associated power delivery scheme.
  • Test strategy for chiplets and packaged/assembled parts need to be developed up front to ensure silicon issues are caught at the chiplet-level testing phase rather than after they are assembled into a package.
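The checklist above can be captured as a simple planning record per chiplet. A hypothetical sketch (the class and field names are mine, not part of any UCIe specification):

```python
from dataclasses import dataclass, field

# Hypothetical planning record for one chiplet in a partitioned SoC,
# mirroring the UCIe Consortium's checklist above. Field names are
# illustrative, not drawn from any standard.
@dataclass
class ChipletPlan:
    name: str
    process_node_nm: int               # node choice, tied to power delivery
    reused: bool                       # re-used die on an older node?
    max_power_w: float                 # input to the package thermal envelope
    die_to_die_links: list[str] = field(default_factory=list)
    tested_standalone: bool = False    # catch silicon issues pre-assembly

plan = [
    ChipletPlan("compute", 5, reused=False, max_power_w=15.0,
                die_to_die_links=["ucie0"]),
    ChipletPlan("io-hub", 22, reused=True, max_power_w=3.0,
                die_to_die_links=["ucie0"]),
]
# Mixing one leading-edge compute die with a re-used I/O die on an older
# node is exactly the cost/time tradeoff the first bullet describes.
total_power = sum(c.max_power_w for c in plan)
print(total_power)
```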

Conclusion
The idea of standardizing die-to-die interfaces is catching on quickly, but the path to get there will take time, effort, and a lot of collaboration among companies that rarely talk with each other. Building a vehicle takes one determined carmaker. Building a vehicle with chiplets requires an entire ecosystem of developers, foundries, OSATs, and material and equipment suppliers working together.

Automotive OEMs are experts at putting systems together and at finding innovative ways to cut costs. But it remains to be seen how quickly and effectively they can build and leverage an ecosystem of interoperable chiplets to shrink design cycles, improve customization, and adapt to a world in which leading-edge technology may be outdated by the time it is fully designed, tested, and available to consumers.

— Ann Mutschler contributed to this report.

Related Reading
Automotive Relationships Shifting With Chiplets
As the automotive ecosystem balances the best approaches for designing in increasingly advanced features, how companies interact is still evolving.

The post Why Chiplets Are So Critical In Automotive appeared first on Semiconductor Engineering.
