  • ✇Ars Technica - All content

This year’s summer COVID wave is big; FDA may green-light COVID shots early

By: Beth Mole
August 20, 2024 at 01:53

(Image credit: Getty | Thomas Trutschel)

With the country experiencing a relatively large summer wave of COVID-19, the Food and Drug Administration is considering signing off on this year's strain-matched COVID-19 vaccines as soon as this week, according to a report by CNN that cited unnamed officials familiar with the matter.

Last year, the FDA gave the green light for the 2023–2024 COVID shots on September 11, close to the peak of SARS-CoV-2 transmission in that year's summer wave. This year, the summer wave began earlier and, by some metrics, is peaking at much higher levels than in previous years.

Currently, wastewater detection of SARS-CoV-2 shows "very high" virus levels in 32 states and the District of Columbia. An additional 11 states are listed as having "high" levels. Looking at trends, the southern and western regions of the country are currently reporting SARS-CoV-2 levels in wastewater that rival the 2022–2023 and 2023–2024 winter waves, which both peaked at the very end of December.


  • ✇Latest

Code Games

August 20, 2024 at 17:30
(Photo: Seemitch/Dreamstime.com)

Happy Tuesday and welcome to another edition of Rent Free. This week's stories include:

  • A federal appeals court slaps down the federal government's odd argument that it doesn't have to compensate landlords for its eviction moratorium because the moratorium was illegal.
  • Vice President Kamala Harris sets a first-term goal of building 3 million middle-class homes.
  • A Michigan judge sides with property owners trying to build a "green cemetery."

But first, a look at an under-the-radar federal regulation change that might make it easier for builders to create more small multifamily "missing middle" homes.


Code Games

In his 1942 book Capitalism, Socialism, and Democracy, Joseph Schumpeter praised capitalist mass production for bringing almost every basic commodity, from food to clothing, within the affordable reach of the working man. The one exception he highlighted was housing, which he confidently predicted would soon see a similar collapse in prices due to mass-produced manufactured housing.

As it happens, production of manufactured housing—which is built in factories and then shipped and installed on-site—peaked in the mid-1970s and has been limping along as a small share of overall home construction ever since.

Nevertheless, the dream that cheap, factory-built homes can deliver lower-cost housing has never died.

It's certainly alive and well in the current White House.

This past week, the Biden-Harris administration released a "fact sheet" of actions it was taking to lower housing costs. It included an in-progress regulatory change that would allow two-, three-, and four-unit homes to be built under the federal manufactured housing code set by the U.S. Department of Housing and Urban Development (HUD).

"The HUD Code creates economies of scale for manufacturers, resulting in significantly lower costs for buyers," says the White House in that fact sheet. Letting small multifamily housing be built under the HUD code will extend "the cost-saving benefits of manufactured housing to denser urban and suburban infill contexts," it says.

IRC, IBC, IDK

The proposed change comes at an interesting time for small multifamily housing construction.

Across the country, more and more states and localities are allowing more two-, three-, and four-unit homes to be built in formerly single-family-only areas.

That liberalization of the zoning code (which regulates what types of buildings can be built where) has set off a follow-on debate about which building code (which regulates construction standards) newly legal multiplexes should be regulated under.

Currently, the options are either the International Building Code (IBC) or the International Residential Code (IRC).

The IBC and IRC are model codes created by the non-profit International Code Council, which are then adopted (often with tweaks and changes) by states and localities.

The IBC typically covers apartment buildings of three or more units, while the IRC covers single-family homes. Neither is particularly well-suited for the regulation of smaller multi-family buildings that cities are now legalizing.

The IBC, for instance, requires expensive sprinkler systems that don't do much to improve fire safety in smaller buildings but can make their construction cost-prohibitive.

Zoning reformers have responded by trying to shift the regulation of smaller apartments into the IRC. But that raises its own problems, says Stephen Smith of the Center for Building in North America.

"It's a complicated thing to do because the IRC is not written for small multi-family. It's written for detached single-family," he says. "For traditional apartment buildings with a single entrance and stairs and halls and stuff, it's not really clear how the IRC would work with that."

The White House's proposed changes open the possibility of sidestepping this IRC-IBC dilemma entirely by letting builders of manufactured, multifamily housing opt into a single, national set of regulations.

A Floor or a Ceiling?

The question then is whether this will actually make life easier for builders.

The effect of HUD regulation on the production of single-family manufactured housing is a topic of intense debate.

Prior to the 1970s, manufactured housing was governed by a patchwork of state and local building codes. In 1974, Congress passed legislation giving HUD the power to regulate manufactured housing.

Critics of HUD regulation argue that its initial implementation caused the steep decline in manufactured housing production in the 1970s.

In particular, they point to the HUD requirement that manufactured housing must sit on a steel chassis as a regulation that increases costs and decreases production.

Brian Potter, a senior fellow at the Institute for Progress and writer of the Construction Physics Substack, argues by contrast that HUD regulation has actually helped keep the cost of building manufactured housing down.

The production of all housing, not just manufactured housing, plummeted in the 1970s, he notes. Since the 1970s, the costs of non-manufactured, site-built housing have skyrocketed while the costs of building manufactured housing have risen much less, he points out. Potter argues that the effect of the steel chassis requirement is also overstated.

To this day, manufactured housing is the cheapest type of housing to produce when comparing smaller manufactured housing units to smaller site-built single-family housing units. The HUD code has less expensive requirements and allows builders more flexibility in the construction of units.

"The most interesting and attractive thing about the HUD code is that HUD code homes tend to be much, much less expensive than single-family homes," says Potter.

The hope is that allowing newly legal duplexes, triplexes, and fourplexes to be built under HUD standards would reduce costs compared to building them under IBC or IRC regulations.

Degrees of Change

While the HUD code has been in existence since the 1970s, its explicit exclusion of manufactured, multifamily housing is a relatively recent development. In 2014, HUD issued a memorandum saying that only single-family housing can be built under the department's manufactured housing standards.

In a 2022 public comment on the proposed updates, the Manufactured Housing Association for Regulatory Reform argued that the 2014 memorandum was in error and that HUD actually has no regulatory authority to cap the number of units that can be built under the code.

According to the White House fact sheet, the Biden administration's proposed updates would once again allow up to four units of housing to be built under the HUD code.

If the HUD code critics are correct, then this will make a minimal difference. Under this theory, builders would just have another cost-increasing building code to choose from. If folks like Potter are correct, however, this should allow builders to opt into less demanding regulations. We might therefore see an increase in the number of two-, three-, and four-unit homes built.

Building code liberalization will still only be effective in places where zoning code liberalization has already happened. Cities and states still have every power to zone out multifamily housing and ban the placement of manufactured housing.

Where cities have made those "missing middle" reforms, however, it's possible the White House's proposed regulatory changes will increase the production of manufactured, multifamily housing while policymakers figure out how to change the IBC or IRC to allow more site-built multiplexes.


If the CDC's Eviction Moratorium Was Illegal, Do the Feds Have To Pay for It?

When the Centers for Disease Control and Prevention (CDC) banned residential evictions for non-payment of rent in 2020, property owners responded with a flurry of lawsuits, arguing that the federal government owed them compensation for what amounted to a physical taking of their property.

While those lawsuits were ongoing, the U.S. Supreme Court ruled in August 2021 that the CDC moratorium was an illegal overstepping of the agency's authority.

This armed the federal government with an audacious response to all those property owners' claims for compensation: Because the CDC's eviction moratorium was illegal and lacked federal authorization, the federal government wasn't required to pay any compensation.

Incredibly, the Court of Federal Claims agreed with this argument—citing past cases that immunized the government from having to pay compensation for clearly illegal, unsanctioned acts of its agents—and dismissed the property owners' lawsuit in Darby Development Co. v. United States.

But this past week, the United States Court of Appeals for the Federal Circuit sided with property owners and reversed that dismissal.

The appeals court ruled that the CDC eviction moratorium, while illegal, clearly did have the endorsement of both Congress and the executive branch.

"Taken to its logical conclusion, [the government's] position is that government agents can physically occupy private property for public use, resist for months the owner's legal attempts to make them leave, and then, when finally made to leave, say they need not pay for their stay because they had no business being there in the first place," wrote Judge Armando O. Bonilla in an opinion issued earlier this month.

The case is now remanded back to the federal claims court.

"The government should not be able to hide behind its own illegality to avoid paying damages for that very illegality," Greg Dolin, a senior litigation counsel at the New Civil Liberties Alliance (which filed an amicus brief in the Darby case), told Reason.


Kamala Harris, Supply Sider?

In a speech this past Friday laying out her economic agenda, Vice President and Democratic presidential candidate Kamala Harris criticized state and local restrictions on homebuilding for driving up prices.

"There's a serious housing shortage in many places. It's too difficult to build, and it's driving prices up. As president, I will work in partnership with industry to build the housing we need, both to rent and to buy. We will take down barriers and cut red tape, including at the state and local levels," said Harris, promising to deliver 3 million units of housing that's affordable to middle-class families by the end of her first term.

It's always refreshing to hear a politician accurately diagnose the cause of America's high housing costs as a matter of restricted supply. It's even better when politicians promise to do something about those supply restrictions. Harris' remarks are rhetorically a lot better than the explicit NIMBYism coming from Republican presidential contender Donald Trump.

Nevertheless, Harris' actual housing policies, including downpayment subsidies and rent control, will only make the problem worse. Downpayment subsidies will drive up demand and prices while leaving supply restrictions in place. Rent control has a long, long record of reducing the quality and quantity of housing.

Harris' speech was also peppered with lines attacking institutional housing investors who are providing much-needed capital for housing production.


Town's Ban on 'Green Cemetery' Is Dead

If the government doesn't like your cemetery, can it just ban all cemeteries? The answer, at least in Michigan, is no, no it can't.

In the case of Quakenbush et al v. Brooks Township et al, a state circuit court judge sided with a married couple who'd sued their local government when it passed a ban on new cemeteries with an eye toward stopping their development of the state's first "conservation burial forest."

"We're excited and feel vindicated by this ruling. We are delighted that the judge understood that Brooks Township's ordinance violated our right to use our property," said Peter and Annica Quakenbush, the plaintiffs in the case. They were represented by the Institute for Justice.


Quick Links

  • Jim Burling, the Pacific Legal Foundation's vice president of legal affairs, has a new book, Nowhere to Live, covering the legal history of zoning in America, the courts' acquiescence to this restriction on property rights, and all the attendant consequences of high housing costs and homelessness that have flowed from it.
  • A new paper published on SSRN estimates that a 25 percent reduction in permitting times in Los Angeles leads to a 33 percent increase in housing production.
  • Calmatters covers the killing, or severe injuring, of various bills introduced in the California Legislature this year that aimed to pare back the California Coastal Commission's powers to shoot down new housing production. Read Reason's past coverage of the Coastal Commission here and here.
  • Hawaii has legalized accessory dwelling units statewide, but the state hasn't made building them easy.
  • If you build it, prices drop.

*UPDATED* (and still true)

When you build "luxury" new apartments in big numbers, the influx of supply puts downward pressure on rents at all price points -- even in the lowest-priced Class C rentals. Here's evidence of that happening right now:

There are 21 U.S. markets where… pic.twitter.com/BF9GY0YiFY

— Jay Parsons (@jayparsons) August 13, 2024

The post Code Games appeared first on Reason.com.

  • ✇Game Rant

DC: All Lantern Corps, Ranked By Strength

August 6, 2024 at 10:45

The DC Universe is a gargantuan collection of characters, factions, nemeses, and heroes. Among the factions in the DC Comics Universe rests a collection of powerful individuals known as lanterns. The concept started with the first Green Lantern, Alan Scott, who was created by Bill Finger and Martin Nodell. Soon after, Hal Jordan was brought into existence, and the Green Lantern Corps made their first appearance in Showcase Vol 1 #22 by John Broome and Gil Kane.

  • ✇Game Rant

Best Cameos in DC Movies

August 6, 2024 at 08:30

Whilst DC barely dipped its toes into the concept of the multiverse before heading into a bold new reboot with James Gunn and Peter Safran's DCU, some surprising faces appeared across various movies. Most DC cameos were not all that exciting, featuring characters too obscure, or actors, celebrities, and politicians who appear as mere easter eggs rather than treats for comic fans to enjoy.

  • ✇Ars Technica - All content

Troubling bird flu study suggests human cases are going undetected

By: Beth Mole
August 2, 2024 at 01:17

(Image credit: Tony C. French/Getty)

A small study in Texas suggests that human bird flu cases are being missed on dairy farms where the H5N1 virus has taken off in cows, sparking an unprecedented nationwide outbreak.

The finding adds some data to what many experts have suspected amid the outbreak. But the authors of the study, led by researchers at the University of Texas Medical Branch in Galveston, went further, stating bluntly why the US is failing to fully surveil, let alone contain, a virus with pandemic potential.

"Due to fears that research might damage dairy businesses, studies like this one have been few," the authors write in the topline summary of their study, which was posted online as a pre-print and has not been peer-reviewed.


  • ✇Liliputing

“Apple Intelligence” is bringing AI features to every Apple device with an A17 Pro or faster chip

June 10, 2024 at 21:34

Apple may have helped popularize the idea of a virtual assistant with its Siri software, but in recent years Siri has become the butt of jokes for its limited capabilities while rivals including Google, Microsoft, and Meta have been quicker to bring generative AI capabilities to their users. Now Apple is introducing its own version […]

The post “Apple Intelligence” is bringing AI features to every Apple device with an A17 Pro or faster chip appeared first on Liliputing.

  • ✇Ars Technica - All content

Bird flu virus from Texas human case kills 100% of ferrets in CDC study

By: Beth Mole
June 10, 2024 at 19:19

(Image credit: Getty | Yui Mok)

The strain of H5N1 bird flu isolated from a dairy worker in Texas was 100 percent fatal in ferrets used to model influenza illnesses in humans. However, the virus appeared inefficient at spreading via respiratory droplets, according to newly released study results from the Centers for Disease Control and Prevention.

The data confirms that H5N1 infections are significantly different from seasonal influenza viruses that circulate in humans. Those annual viruses make ferrets sick but are not deadly. They have also been shown to be highly efficient at spreading via respiratory droplets, with 100 percent transmission rates in laboratory settings. In contrast, the strain from the Texas man (A/Texas/37/2024) appeared to have only a 33 percent transmission rate via respiratory droplets among ferrets.

"This suggests that A/Texas/37/2024-like viruses would need to undergo changes to spread efficiently by droplets through the air, such as from coughs and sneezes," the CDC said in its data summary. The agency went on to note that "efficient respiratory droplet spread, like what is seen with seasonal influenza viruses, is needed for sustained person-to-person spread to happen."


  • ✇Semiconductor Engineering

Reset Domain Crossing Verification

May 13, 2024 at 09:01

By Reetika and Sulabh Kumar Khare, Siemens EDA DI SW

To meet low-power and high-performance requirements, system on chip (SoC) designs are equipped with several asynchronous and soft reset signals. These reset signals help safeguard software and hardware functional safety, as they can be asserted to quickly return the onboard system to an initial state and clear any pending errors or events.

By definition, a reset domain crossing (RDC) occurs when a path’s transmitting flop has an asynchronous reset, and the receiving flop either has a different asynchronous reset than the transmitting flop or has no reset. The multitude of asynchronous reset sources found in today’s complex automotive designs means there are a large number of RDC paths, which can lead to systematic faults and hence cause data corruption, glitches, metastability, or functional failures, along with other issues.

This issue is not covered by standard, static verification methods, such as clock domain crossing (CDC) analysis. Therefore, a proper reset domain crossing verification methodology is required to prevent errors in the reset design during the RTL verification stage.

A soft reset is an internally generated reset (register/latch/black-box output is used as a reset) that allows the design engineer to reset a specific portion of the design (specific module/subsystem) without affecting the entire system. Design engineers frequently use a soft reset mechanism to reset/restart the device without fully powering it off, as this helps to conserve power by selectively resetting specific electronic components while keeping others in an operational state. A soft reset typically involves manipulating specific registers or signals to trigger the reset process. Applying soft resets is a common technique used to quickly recover from a problem or test a specific area of the design. This can save time during simulation and verification by allowing the designer to isolate and debug specific issues without having to restart the entire simulation. Figure 1 shows a simple soft reset and its RTL to demonstrate that SoftReg is a soft reset for flop Reg.

Fig. 1: SoftReg is a soft reset for register Reg.
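The figure 1 structure can be sketched behaviorally as follows. This is a minimal, hypothetical Python model, not actual RTL; only the names SoftReg and Reg come from the figure, and the active-high reset polarity is an assumption for illustration.

```python
# Behavioral sketch: SoftReg is an ordinary register whose output is used
# as an active-high reset for register Reg (a soft reset).

def rising_edge_step(soft_reg, reg, soft_d, d):
    """Evaluate one clock edge; return the updated (soft_reg, reg)."""
    next_soft = soft_d               # SoftReg samples its data input
    next_reg = 0 if soft_reg else d  # Reg is reset by SoftReg's current output
    return next_soft, next_reg

# Assert the soft reset for one cycle, then release it:
soft_reg, reg = 0, 1
soft_reg, reg = rising_edge_step(soft_reg, reg, soft_d=1, d=1)  # request reset
soft_reg, reg = rising_edge_step(soft_reg, reg, soft_d=0, d=1)  # reset takes effect
```

Because SoftReg is itself a register, the reset it produces is synchronous to SoftReg's own clock domain; the hazards discussed in this article arise when that soft reset reaches flops in a different clock domain.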

This article presents a systematic methodology to identify RDCs with different soft resets that are unsafe even though the asynchronous reset domain is the same on the transmitter and receiver ends. With sufficient debug aids, it also identifies RDCs with different asynchronous reset domains that are nonetheless safe from metastability (provided they meet static timing analysis), which helps avoid silicon failures and minimizes false crossing results. As part of static analysis, this systematic methodology enables designers to intelligently identify critical reset domain bugs associated with soft resets.

A methodology to identify critical reset domain bugs

With highly complex reset architectures in automotive designs, there arises the need for a proper verification method to detect RDC issues. It is essential to detect unsafe RDCs systematically and apply appropriate synchronization techniques to tackle the issues that may arise due to delays in reset paths caused by soft resets. Thus designers can ensure proper operation of their designs and avoid the associated risks. By handling RDCs effectively, designers can mitigate potential issues and enhance the overall robustness and performance of a design. This systematic flow involves several steps to assist in RDC verification closure using standard RDC verification tools (see figure 2).

Fig. 2: Flowchart of methodology for RDC verification.

Specification of clock and reset signals

Signals that are intended to generate a clock and reset pulse should be specified by the user as clock or reset signals, respectively, during the set-up step in RDC verification. By specifying signals as clocks or resets (according to their expected behavior), designers can perform design rule checking and other verification checks to ensure compliance with clock and reset related guidelines and standards as well as best practices. This helps identify potential design issues and improve the overall quality of the design by reducing noise in the results.

Clock detection

Ideally, design engineers should define the clock signals and then the verification tool should trace these clocks down to the leaf clocks. Unfortunately, with complex designs this is not always possible, as the design might have black boxes that originate clocks, or combinational logic in the clock paths that produces clocks beyond those specified by the user. All the unspecified clocks need to be identified and mapped to the user-specified primary clocks. An exhaustive detection of clocks is required in RDC verification, as potential metastability may occur if resets are used in clock domains different from that of the sequential element itself, leading to critical bugs.
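The clock-tracing step described above can be sketched as a simple fan-out traversal. This is an illustrative Python sketch; the `fanout` structure and the net names are assumptions, not any real tool's data model.

```python
from collections import deque

# Propagate each user-specified primary clock through a netlist fan-out
# graph so every derived/leaf clock is mapped back to its primary clock.
def map_leaf_clocks(primary_clocks, fanout):
    origin = {}                        # reached net -> primary clock
    for clk in primary_clocks:
        seen, queue = {clk}, deque([clk])
        while queue:
            net = queue.popleft()
            origin[net] = clk          # every reached net traces back to clk
            for nxt in fanout.get(net, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return origin

# clk_main drives a divider, which in turn drives the core clock:
fanout = {"clk_main": ["clk_div2"], "clk_div2": ["clk_core"]}
mapping = map_leaf_clocks(["clk_main"], fanout)
# mapping["clk_core"] == "clk_main"
```

A real tool must additionally handle black-box clock outputs (which have no fan-in to trace) by asking the user to map them to primary clocks.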

Reset detection

Ideally, design engineers should define the reset signals, but again, due to the complexity of automotive and other modern designs, it is not possible to specify all the reset signals. Therefore a specialized verification tool is required for detection of resets. All the localized, black-box, gated, and primary resets need to be identified, and based on their usage in the RTL, they should be classified as synchronous, asynchronous, or dual type and then mapped to the user-specified primary resets.

Soft reset detection

The soft resets — i.e., the internally generated resets by flops and latches — need to be systematically detected as they can cause critical metastability issues when used in different clock domains, and they require static timing analysis when used in the same clock domain. Detecting soft resets helps identify potential metastability problems and allows designers to apply proper techniques for resolving these issues.

Reset tree analysis

Analysis of reset trees helps designers identify issues early in the design process, before RDC analysis. It helps to highlight some important errors in the reset design that are not commonly caught by lint tools. These include:

  • Dual-synchronicity reset signals, i.e., a reset signal that drives both a synchronously reset flop and an asynchronously reset flop
  • An asynchronous set/reset signal used as a data signal, which can result in incorrect data sampling because the reset state cannot be controlled

Reset domain crossing analysis

This step involves analyzing a design to determine the logic across various reset domains and identify potential RDCs. The analysis should also identify common reset sequences of asynchronous and soft reset sources at the transmitter and receiver registers of a crossing, to avoid flagging false crossings that might otherwise appear as potential issues due to complex combinations of reset sources. False crossings are those where the transmitter and receiver registers are reset simultaneously, thanks to dependencies among the reset assertion sequences, so any metastability that might occur at the receiver end is mitigated.
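The per-crossing verdicts described in this article can be sketched as a small classifier. This is a hypothetical Python sketch; the dict fields and verdict strings are illustrative assumptions, not a real tool's report format.

```python
# Classify one crossing: compare the transmitter and receiver flops'
# asynchronous reset domains, then check any transmitter soft reset
# against the transmitter's clock domain.
def classify_rdc(tx, rx):
    if tx["async_reset"] != rx.get("async_reset"):
        return "unsafe: different asynchronous reset domains"
    soft = tx.get("soft_reset")
    if soft is not None:
        if soft["clock"] != tx["clock"]:
            return "unsafe: tx soft reset in a different clock domain"
        return "caution: verify with static timing analysis"
    return "safe"

# Same asynchronous reset domain, but the soft reset driving the
# transmitter lives in a different clock domain:
tx = {"async_reset": "rst_n", "clock": "clk_a",
      "soft_reset": {"clock": "clk_b"}}
rx = {"async_reset": "rst_n", "clock": "clk_a"}
verdict = classify_rdc(tx, rx)
# verdict == "unsafe: tx soft reset in a different clock domain"
```

This mirrors the three outcomes used later in the article: violations for mismatched asynchronous reset domains or a transmitter soft reset in a different clock domain, and cautions for same-clock soft-reset paths that must still meet static timing.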

Analyze and fix RDC issues

The concluding step is to analyze the results of the verification to determine whether data paths crossing reset domains are safe from metastability. For the RDCs identified as unsafe — which may occur either due to different asynchronous reset domains at the transmitter and receiver ends or due to the soft reset being used in a different clock domain than the sequential element itself — design engineers can develop solutions to eliminate or mitigate metastability by restructuring the design, modifying reset synchronization logic, or adjusting the reset ordering. Traditionally safe RDCs — i.e., crossings where a soft reset is used in the same clock domain as the sequential element itself — need to be verified using static timing analysis.

Figure 3 presents our proposed flow for identifying and eliminating metastability issues due to soft resets. After implementing the RDC solutions, re-verify the design to ensure that the reset domain crossing issues have been effectively addressed.

Fig. 3: Flowchart for proposed methodology to tackle metastability issues due to soft resets.

This methodology was used on a design with 374,546 register bits, 8 latch bits, and 45 RAMs. The Questa RDC verification tool using this new methodology identified around 131 reset domains, which consisted of 19 asynchronous domains defined by the user, as well as 81 asynchronous reset domains inferred by the tool.

The first run analyzed data paths crossing asynchronous reset domains without any soft reset analysis. It reported nearly 40,000 RDC crossings (as shown in table 1).

Reset domain crossings without soft reset analysis | Severity  | Number of crossings
Reset domain crossing from a reset to a reset      | Violation | 28,408
Reset domain crossing from a reset to non-reset    | Violation | 11,235

Table 1: RDC analysis without soft resets.

In the second run, we enabled soft reset analysis and detected 34 soft resets, which resulted in additional violations for RDC paths with transmitter soft reset sources in different clock domains. These were critical violations that were missed in the initial run. Also, some RDC violations were converted to cautions (RDC paths with a transmitter soft reset in the same clock domain), as these paths would be safe from metastability as long as they meet the setup time window (as shown in table 2).

Reset domain crossings with soft reset analysis               | Severity  | Number of crossings
Reset domain crossing from a reset to a reset                 | Violation | 26,957
Reset domain crossing from a reset to non-reset               | Violation | 10,523
Reset domain crossing with tx reset source in different clock | Violation | 880
Reset domain crossing from a reset to Rx with same clock      | Caution   | 2,412

Table 2: RDC analysis with soft resets.

To gain a deeper understanding of RDC, metastability, and soft reset analysis in the context of this new methodology, please download the full paper Techniques to identify reset metastability issues due to soft resets.

The post Reset Domain Crossing Verification appeared first on Semiconductor Engineering.


Predicting And Preventing Process Drift

April 22, 2024, 09:05

Increasingly tight tolerances and rigorous demands for quality are forcing chipmakers and equipment manufacturers to ferret out minor process variances, which can create significant anomalies in device behavior and render a device non-functional.

In the past, many of these variances were ignored. But for a growing number of applications, that’s no longer possible. Even minor fluctuations in deposition rates during a chemical vapor deposition (CVD) process, for example, can lead to inconsistencies in layer uniformity, which can impact the electrical isolation properties essential for reliable circuit operation. Similarly, slight variations in a photolithography step can cause alignment issues between layers, leading to shorts or open circuits in the final device.

Some of these variances can be attributed to process error, but more frequently they stem from process drift — the gradual deviation of process parameters from their set points. Drift can occur in any of the hundreds of process steps involved in manufacturing a single wafer, subtly altering the electrical properties of chips and leading to functional and reliability issues. In highly complex and sensitive ICs, even the slightest deviations can cause defects in the end product.

“All fabs already know drift. They understand drift. They would just like a better way to deal with drift,” said David Park, vice president of marketing at Tignis. “It doesn’t matter whether it’s lithography, CMP (chemical mechanical polishing), CVD or PVD (chemical/physical vapor deposition), they’re all going to have drift. And it’s all going to happen at various rates because they are different process steps.”

At advanced nodes and in dense advanced packages, where a nanometer can be critical, controlling process drift is vital for maintaining high yield and ensuring profitability. By rigorously monitoring and correcting for drift, engineers can ensure that production consistently meets quality standards, thereby maximizing yield and minimizing waste.

“Monitoring and controlling hundreds of thousands of sensors in a typical fab requires the ability to handle petabytes of real-time data from a large variety of tools,” said Vivek Jain, principal product manager, smart manufacturing at Synopsys. “Fabs can only control parameters or behaviors they can measure and analyze. They use statistical analysis and error budget breakdowns to define upper control limits (UCLs) and lower control limits (LCLs) to monitor the stability of measured process parameters and behaviors.”
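As an illustration of how such control limits are derived from baseline statistics, the snippet below computes classic 3-sigma Shewhart limits and flags excursions in a later lot. The thickness numbers are invented for illustration.

```python
from statistics import mean, stdev

def control_limits(samples, k=3.0):
    """UCL/LCL as the baseline mean +/- k standard deviations
    (the classic 3-sigma Shewhart limits)."""
    m, s = mean(samples), stdev(samples)
    return m - k * s, m + k * s

def out_of_control(samples, lcl, ucl):
    """Indices of measurements that fall outside the control limits."""
    return [i for i, x in enumerate(samples) if not lcl <= x <= ucl]

# Baseline film-thickness readings (nm) from a stable deposition process.
baseline = [100.1, 99.8, 100.0, 100.2, 99.9, 100.1, 100.0, 99.7, 100.3, 99.9]
lcl, ucl = control_limits(baseline)

# A later lot with one drifted reading: only index 2 is flagged.
print(out_of_control([100.0, 100.1, 101.9, 99.8], lcl, ucl))  # [2]
```

In a fab this calculation runs per sensor per tool, which is where the data volumes Jain describes come from.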

Dialing in legacy fabs
In legacy fabs — primarily 200mm — most of the chips use 180nm or older process technology, so process drift does not need to be as precisely monitored as in the more advanced 300mm counterparts. Nonetheless, significant divergence can lead to disparities in device performance and reliability, creating a cascade of operational challenges.

Manufacturers operating at older technology nodes might lack the sophisticated, real-time monitoring and control methods that are standard in cutting-edge fabs. While the latter have embraced ML to predict and correct for drift, many legacy operations still rely heavily on periodic manual checks and adjustments. Thus, the management of process drift in these settings is reactive rather than proactive, making changes after problems are detected rather than preventing them.

“There is a separation between 300-millimeter and 200-millimeter fabs,” said Park. “The 300-millimeter guys are all doing some version of machine learning. Sometimes it’s called advanced process control, and sometimes it’s actually AI-powered process control. For some of the 200-millimeter fabs with more mature process nodes, they basically have a recipe they set and a bunch of technicians looking at machines and looking at the CDs. When the drift happens, they go through their process recipe and manually adjust for the out-of-control processes, and that’s just what they’ve always done. It works for them.”

For these older fabs, however, the repercussions of process drift can be substantial. Minor deviations in process parameters, such as temperature or pressure during the deposition or etching phases, gradually can lead to changes in the physical structure of the semiconductor devices. Over time, these minute alterations can compound, resulting in layers of materials that deviate from their intended characteristics. Such deviations affect critical dimensions and ultimately can compromise the electrical performance of the chip, leading to slower processing speeds, higher power consumption, or outright device failure.

The reliability equation is equally impacted by process drift. Chips are expected to operate consistently over extended periods, often under a range of environmental conditions. However, process-induced variability can weaken a device’s resilience, precipitating early wear-out mechanisms and reducing its lifetime. In situations where dependability is non-negotiable, such as in automotive or medical applications, those variations can have dire consequences.

But with hundreds of process steps for a typical IC, eliminating all variability in fabs is simply not feasible.

“Process drift is never going to not happen, because the processes are going to have some sort of side effect,” Park said. “The machines go out of spec and things like pumps and valves and all sorts of things need to be replaced. You’re still going to have preventive maintenance (PM). But if the critical dimensions are being managed correctly, which is typically what triggers the drift, you can go a longer period of time between cleanings or the scheduled PMs and get more capacity.”

Process drift pitfalls
Managing process drift in semiconductor manufacturing presents several complex challenges. Hysteresis, for example, is a phenomenon where the output of a process varies not solely because of current input conditions, but also based on the history of the states through which the process already has passed. This memory effect can significantly complicate precision control, as materials and equipment might not reset to a baseline state after each operational cycle. Consequently, adjustments that were effective in previous cycles may not yield the same outcomes due to accumulated discrepancies.

One common cause of hysteresis is thermal cycling, where repeated heating and cooling create mechanical stresses. Those stresses can be additive, releasing inconsistently based on temperature history.  That, in turn, can lead to permanent changes in the output of a circuit, such as a voltage reference, which affects its precision and stability.

In many field-effect transistors (FETs), hysteresis also can occur due to charge trapping. This happens when charges are captured in ‘trap states’ within the semiconductor material or at the interface with another material, such as an oxide layer. The trapped charges then can modulate the threshold voltage of the device over time and under different electrical biases, potentially leading to operational instability and variability in device performance.

Human factors also play a critical role in process drift, with errors stemming from incorrect settings adjustments, mishandling of materials, misinterpretation of operational data, or delayed responses to process anomalies. Such errors, though often minor, can lead to substantial variations in manufacturing processes, impacting the consistency and reliability of semiconductor devices.

“Once in production, the biggest source of variability is human error or inconsistency during maintenance,” said Russell Dover, general manager of service product line at Lam Research. “Wet clean optimization (WCO) and machine learning through equipment intelligence solutions can help address this.”

The integration of new equipment into existing production lines introduces additional complexities. New machinery often features increased speed, throughput, and tighter tolerances, but it must be integrated thoughtfully to maintain the stringent specifications required by existing product lines. This is primarily because the specifications and performance metrics of legacy chips have been long established and are deeply integrated into various applications with pre-existing datasheets.

“From an equipment supplier perspective, we focus on tool matching,” said Dover. “That includes manufacturing and installing tools to be identical within specification, ensuring they are set up and running identically — and then bringing to bear systems, tooling, software and domain knowledge to ensure they are maintained and remain as identical as possible.”

The inherent variability of new equipment, even those with advanced capabilities, requires careful calibration and standardization.

“Some equipment, like transmission electron microscopes, are incredibly powerful,” said Jian-Min Zuo, a materials science and engineering professor at the University of Illinois’ Grainger College of Engineering. “But they are also very finicky, depending on how you tune the machine. How you set it up under specific conditions may vary slightly every time. So there are a number of things that can be done when you try to standardize those procedures, and also standardize the equipment. One example is to generate a curated set of test cases, where you can collect data from different settings and make sure you’re taking into account the variability in the instruments.”

Process drift solutions
As semiconductor manufacturers grapple with the complexities of process drift, a diverse array of strategies and tools has emerged to address the problem. Advanced process control (APC) systems equipped with real-time monitoring capabilities can extract patterns and predictive insights from massive data sets gathered from various sensors throughout the manufacturing process.

By understanding the relationships between different process variables, APC can predict potential deviations before they result in defects. This predictive capability enables the system to make autonomous adjustments to process parameters in real-time, ensuring that each process step remains within the defined control limits. Essentially, APC acts as a dynamic feedback mechanism that continuously fine-tunes the production process.
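One widely used form of such a feedback mechanism is run-to-run EWMA control. The sketch below is a minimal simulation under simplified assumptions (a plant whose output is just setpoint plus drift), not any vendor's APC implementation.

```python
def run_apc(target, drifts, gain=0.3):
    """Simulate run-to-run EWMA control of a drifting process.

    Each run's output is modeled as setpoint + true drift; the
    controller updates its drift estimate from the observed output
    and subtracts it from the next recipe setpoint.
    Returns the per-run outputs."""
    offset_est = 0.0
    outputs = []
    for drift in drifts:
        setpoint = target - offset_est          # compensate estimated drift
        y = setpoint + drift                    # plant response this run
        outputs.append(y)
        observed = y - setpoint                 # drift revealed by this run
        offset_est = gain * observed + (1 - gain) * offset_est
    return outputs

# A constant +2.0 drift: uncontrolled output would sit at 102.0, but
# the controller walks the error back toward zero within a few runs.
outs = run_apc(100.0, [2.0] * 20)
print(round(outs[0], 2), round(outs[-1], 2))
```

The gain trades responsiveness against sensitivity to noise, which is exactly the tuning problem APC systems automate.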

Fig. 1: Reduced process drift with AI/ML advanced process control. Source: Tignis

While APC proactively manages and optimizes the process to prevent deviations, fault detection and classification (FDC) reacts to deviations by detecting and classifying any faults that still occur.

FDC data serves as an advanced early-warning system. This system monitors the myriad parameters and signals during the chip fabrication process, rapidly detecting any variances that could indicate a malfunction or defect in the production line. The classification component of FDC is particularly crucial, as it does more than just flag potential issues. It categorizes each detected fault based on its characteristics and probable causes, vastly simplifying the trouble-shooting process. This allows engineers to swiftly pinpoint the type of intervention needed, whether it’s recalibrating instruments, altering processing recipes, or conducting maintenance repairs.

Statistical process control (SPC) is primarily focused on monitoring and controlling process variations using statistical methods to ensure the process operates efficiently and produces output that meets quality standards. SPC involves plotting data in real-time against control limits on control charts, which are statistically determined to represent the expected normal process behavior. When process measurements stray outside these control limits, it signals that the process may be out of control due to special causes of variation, requiring investigation and correction. SPC is inherently proactive and preventive, aiming to detect potential problems before they result in product defects.
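Control charting can also accumulate small deviations so that gradual drift is signaled before any single sample looks alarming. A one-sided tabular CUSUM, sketched below with illustrative allowance (k) and threshold (h) values, catches a slow upward ramp after only a handful of samples.

```python
def cusum(samples, target, k=0.5, h=4.0):
    """One-sided tabular CUSUM for upward drift.

    k is the allowance (slack) and h the decision threshold, both in
    the same units as the samples. Returns the index at which drift
    is signaled, or None if the process stays in control."""
    s = 0.0
    for i, x in enumerate(samples):
        s = max(0.0, s + (x - target - k))  # accumulate excess deviation
        if s > h:
            return i
    return None

# A slow ramp of +0.3 per sample on a target of 10.0.
drifting = [10.0 + 0.3 * i for i in range(30)]
print(cusum(drifting, target=10.0))  # signals at index 7
```

A plain Shewhart chart with wide limits would let this ramp run much longer before flagging it.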

“Statistical process control (SPC) has been a fundamental methodology for the semiconductor industry almost from its very foundation, as there are two core factors supporting the need,” said Dover. “The first is the need for consistent quality, meaning every product needs to be as near identical as possible, and second, the very high manufacturing volume of chips produced creates an excellent workspace for statistical techniques.”

While SPC, FDC, and APC might seem to serve different purposes, they are deeply interconnected. SPC provides the baseline by monitoring process stability and quality over time, setting the stage for effective process control. FDC complements SPC by providing the tools to quickly detect and address anomalies and faults that occur despite the preventive measures put in place by SPC. APC takes insights from both SPC and FDC to adjust process parameters proactively, not just to correct deviations but also to optimize process performance continually.

Despite their benefits, integrating SPC, FDC and APC systems into existing semiconductor manufacturing environments can pose challenges. These systems require extensive configuration and tuning to adapt to specific manufacturing conditions and to interface effectively with other process control systems. Additionally, the success of these systems depends on the quality and granularity of the data they receive, necessitating high-fidelity sensors and a robust data management infrastructure.

“For SPC to be effective you need tight control limits,” adds Dover. “A common trap in the world of SPC is to keep adding control charts (by adding new signals or statistics) during a process ramp, or maybe inheriting old practices from prior nodes without validating their relevance. The result can be millions of control charts running in parallel. It is not a stretch to state that if you are managing a million control charts you are not really controlling much, as it is humanly impossible to synthesize and react to a million control charts on a daily basis.”

This is where AI/ML becomes invaluable, because it can monitor the performance and sustainability of the new equipment more efficiently than traditional methods. By analyzing data from the new machinery, AI/ML can confirm observations, such as reduced accumulation, allowing for adjustments to preventive maintenance schedules that differ from older equipment. This capability not only helps in maintaining the new equipment more effectively but also in optimizing the manufacturing process to take full advantage of the technological upgrades.

AI/ML also facilitate a smoother transition when integrating new equipment, particularly in scenarios involving ‘copy exact’ processes where the goal is to replicate production conditions across different equipment setups. AI and ML can analyze the specific outputs and performance variations of the new equipment compared to the established systems, reducing the time and effort required to achieve optimal settings while ensuring that the new machinery enhances production without compromising the quality and reliability of the legacy chips being produced.

AI/ML
Being more proactive about identifying drift and adjusting parameters in real time is a necessity. With an accurate model of the process, engineers can tune the recipe to minimize that variability and improve both quality and yield.

“The ability to quickly visualize a month’s worth of data in seconds, and be able to look at windows of time, is a huge cost savings because it’s a lot more involved to get data for the technicians or their process engineers to try and figure out what’s wrong,” said Park. “AI/ML has a twofold effect, where you have fewer false alarms, and just fewer alarms in general. So you’re not wasting time looking at things that you shouldn’t have to look at in the first place. But when you do find issues, AI/ML can help you get to the root cause in the diagnostics associated with that much more quickly.”

When there is a real alert, AI/ML offers the ability to correlate multiple parameters and inputs that are driving that alert.

“Traditional process control systems monitor each parameter separately or perform multivariate analysis for key parameters that require significant effort from fab engineers,” adds Jain. “With the amount of fab data scaling exponentially, it is becoming humanly impossible to extract all the actionable insights from the data. Machine learning and artificial intelligence can handle big data generated within a fab to provide effective process control with minimal oversight.”

AI/ML also can look for other ways of predicting when drift is going to take a process out of specification. Those correlations can be univariate, bivariate, or multivariate, and a machine learning engine that can sift through tremendous amounts of data, across far more variables than any human could track, can turn up some interesting correlations.
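A toy version of that correlation mining is shown below. The sensor traces are invented, and a production engine would use far more data and richer statistics, but the idea of ranking parameter pairs by correlation is the same.

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length traces."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Invented traces: chamber pressure tracks the thickness drift,
# while gas flow is just noise around its setpoint.
thickness = [100.0, 100.2, 100.5, 100.9, 101.4, 102.0]
pressure = [2.00, 2.02, 2.05, 2.09, 2.14, 2.20]
flow = [5.0, 4.9, 5.1, 5.0, 4.9, 5.1]

print(round(pearson(thickness, pressure), 3))  # ~1.0: pressure is the suspect
print(round(pearson(thickness, flow), 3))      # weak: flow is not
```

Ranking pairs this way points an engineer at the inputs most likely to be driving a drift alert.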

“Another benefit of AI/ML is troubleshooting when something does trigger an alarm or alert,” adds Park. “You’ve got SPC and FDC that people are using, and a lot of them have false positives, or false alerts. In some cases, it’s as high as 40% of the alerts that you get are not relevant for what you’re doing. This is where AI/ML becomes vital. It’s never going to take false alerts to zero, but it can significantly reduce the amount of false alerts that you have.”

Adopting these modern drift solutions, such as AI/ML-based systems, is not mere adherence to industry trends but an essential step toward sustainable semiconductor production. Beyond mitigating process drift, these technologies empower manufacturers to optimize operations and maintain the consistency of critical dimensions, enabled by the intelligent analysis of extensive data and automation of complex control processes.

Conclusion
Monitoring process drift is essential for maintaining quality of the device being manufactured, but it also can ensure that the entire fabrication lifecycle operates at peak efficiency. Detecting and managing process drift is a significant challenge in volume production because these variables can be subtle and may compound over time. This makes identifying the root cause of any drift difficult, particularly when measurements are only taken at the end of the production process.

Combating these challenges requires a vigilant approach to process control, regular equipment servicing, and the implementation of AI/ML algorithms that can assist in predicting and correcting for drift. In addition, fostering a culture of continuous improvement and technological adaptation is crucial. Manufacturers must embrace a mindset that prioritizes not only reactive measures, but also proactive strategies to anticipate and mitigate process drift before it affects the production line. This includes training personnel to handle new technologies effectively and to understand the dynamics of process control deeply. Such education enables staff to better recognize early signs of drift and respond swiftly and accurately.

Moreover, the integration of comprehensive data analytics platforms can revolutionize how fabs monitor and analyze the vast amounts of data they generate. These platforms can aggregate data from multiple sources, providing a holistic view of the manufacturing process that is not possible with isolated measurements. With these insights, engineers can refine their process models, enhance predictive maintenance schedules, and optimize the entire production flow to reduce waste and improve yields.

Related Reading
Tackling Variability With AI-Based Process Control
How AI in advanced process control reduces equipment variability and corrects for process drift.

The post Predicting And Preventing Process Drift appeared first on Semiconductor Engineering.

GDC 2024: Work graphs and draw calls – a match made in heaven!

March 18, 2024, 22:31

AMD GPUOpen - Graphics and game developer resources

Introducing "mesh nodes", which make draw calls an integral part of the work graph, providing a higher perf alternative to ExecuteIndirect dispatches.

The post GDC 2024: Work graphs and draw calls – a match made in heaven! appeared first on AMD GPUOpen.

GDC 2024: Work graphs, mesh shaders, FidelityFX™, dev tools, CPU optimization, and more.

By: GPUOpen
March 12, 2024, 15:00

AMD GPUOpen - Graphics and game developer resources

Our GDC 2024 presentations this year include work graphs, mesh shaders, AMD FSR 3, GI with AMD FidelityFX Brixelizer, AMD Ryzen optimization, RGD, RDTS, and GPU Reshape!

The post GDC 2024: Work graphs, mesh shaders, FidelityFX™, dev tools, CPU optimization, and more. appeared first on AMD GPUOpen.


Everything We Learned From The Latest Fallout Trailer

March 7, 2024, 20:45

It’s the end of the world as we know it in the latest trailer for Amazon’s hotly anticipated TV adaptation of the Fallout video game series. Developed by Lisa Joy and Jonathan Nolan (the duo responsible for HBO’s Westworld), the show will follow vault dweller Lucy (Ella Purnell) as she makes her way out of her…



Accellera Preps New Standard For Clock-Domain Crossing

February 29, 2024, 09:06

Part of the hierarchical development flow is about to get a lot simpler, thanks to a new standard being created by Accellera. What is less clear is how long it will take before users see any benefit.

At the register transfer level (RTL), when a data signal passes between two flip flops, it initially is assumed that clocks are perfect. After clock-tree synthesis and place-and-route are performed, there can be considerable timing skew between the clock edges arriving at those adjacent flops. That makes timing sign-off difficult, but at least the clocks are still synchronous.

But if the clocks come from different sources, are at different frequencies, or a design boundary exists between the flip flops — which would happen with the integration of IP blocks — it’s impossible to guarantee that no clock edges will arrive when the data is unstable. That can cause the output to become unknown for a period of time. This phenomenon, known as metastability, cannot be eliminated, and the verification of those boundaries is known as clock-domain crossing (CDC) analysis.

Special care is required on those boundaries. “You have to compensate for metastability by ensuring that the CDC crossings follow a specific set of logic design principles,” says Prakash Narain, president and CEO of Real Intent. “The general process in use today follows a hierarchical approach and requires that the clock-domain crossing internal to an IP is protected and safe. At the interface of the IP, where the system connects with the IP, two different teams share the problem. An IP provider may recommend an integration methodology, which often is captured in an abstraction model. That abstraction model enables the integration boundary to be verified while the internals of it will not be checked for CDC. That has already been verified.”
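The best known of those logic design principles is the multi-flop synchronizer. The Monte Carlo sketch below, with made-up probabilities, illustrates why a second flop helps: a metastable value escapes only if it also survives a full clock period without resolving.

```python
import random

def synchronizer_escapes(trials, p_meta=0.01, p_unresolved=0.01, seed=1):
    """Count how often a metastable value escapes a 1-flop vs. a
    2-flop synchronizer. p_meta is the chance an asynchronous edge
    violates the first flop's setup/hold window; p_unresolved is the
    chance metastability survives one full clock period. Both numbers
    are illustrative, not measured values."""
    rng = random.Random(seed)
    one_flop = two_flop = 0
    for _ in range(trials):
        if rng.random() < p_meta:          # stage 1 goes metastable
            one_flop += 1                  # a single flop exposes it
            if rng.random() < p_unresolved:
                two_flop += 1              # survived a cycle: escapes stage 2
    return one_flop, two_flop

one, two = synchronizer_escapes(100_000)
print(one, two)  # the second flop cuts escapes by roughly 100x here
```

This is why metastability can only be made improbable, not eliminated: each extra stage multiplies in another resolution period, at the cost of latency.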

In the past, those abstract models differentiated the CDC solutions from various vendors. That’s no longer the case. Every IP and tool vendor has a different format, making it costly for everyone. “I don’t know that there’s really anything new or differentiating coming down the pipe for hierarchical modeling,” says Kevin Campbell, technical product manager at Siemens Digital Industries Software. “The creation of the standard will basically deliver much faster results with no loss of quality. I don’t know how much more you can differentiate in that space other than just with performance increases.”

While this has been a problem for the whole industry for quite some time, Intel decided it was time for a solution. The company pushed Accellera to take up the issue, and helped facilitate the creation of the standard by chairing the committee. “I’m going to describe three methods of building a product,” says Iredamola “Dammy” Olopade, chair of the Accellera working group, and a principal engineer at Intel. “Method number one is where you build everything in a monolithic fashion. You own every line of code, you know the architecture, you use the tool of your choice. That is a thing of the past. The second method uses some IP. It leverages reuse and enables the quick turnaround of new SoCs. There used to be a time when all IPs came from the same source, and those were integrating into a product. You could agree upon the tools. We are quickly moving to a world where I need to source IPs wherever I can get them. They don’t use the same tools as I do. In that world, common standards are critical to integrating quickly.”

In some cases, there is a hierarchy of IP. “Clock-domain crossings are a central part of our business,” says Frank Schirrmeister, vice president of solutions and business development at Arteris. “A network-on-chip (NoC) can be considered as ‘CDC central’ because most blocks connected to the NoC have different clocks. Also, our SoC integration tools see all of the blocks to be integrated, and those touch various clock domains and therefore need to deal with the CDC code that is inserted.”

This whole thing can become very messy. “While every solution supports hierarchical modeling, every tool has its own model solution and its own model representation,” says Siemens’ Campbell. “Vendors, or users, are stuck with a CDC solution, because the models were created within a certain solution. There’s no real transportability between any of the hierarchical modeling solutions unless they want to go regenerate models for another solution.”

That creates a lot of extra work. “Today, when dealing with customer CDC issues, we have to consider the customer’s specific environment, and for CDC, a potential mix of in-house flows and commercial tools from various vendors,” says Arteris’ Schirrmeister. “The compatibility matrix becomes very complex, very fast. If adopted, the new Accellera CDC standard bears the potential to make it easier for IP vendors, like us, to ensure compatibility and reduce the effort required to validate IP across multiple customer toolsets. The intent, as specified in the requirements is that ‘every IP provider can run its tool of choice to verify and produce collateral and generate the standard format for SoCs that use a different tool.'”

Everyone benefits. “IP providers will not need to provide extra documentation of clock domains for the SoC integrator to use in their CDC analysis,” says Ahmed Nasr, digital design manager at Mixel. “The standard CDC attributes generated by the EDA tool will be self-contained.”

The use model is relatively simple. “An IP developer signs off on CDC and then exports the abstract model,” says Real Intent’s Narain. “It is likely they will write this out in both the Accellera format and the native format to provide backward compatibility. At the next level of hierarchy, you read in the abstract model instead of reading in the full view of the design. They have various views of the IP, including the CDC view of the IP, which today is on the basis of whatever tool they use for CDC sign-off.”
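The format itself was still being defined at the time of writing, so purely to illustrate the kind of collateral an abstract model carries, a hypothetical (non-Accellera) model and a consumer for it might look like this:

```python
import json

# Hypothetical CDC abstract model for an IP block. The field names are
# illustrative only; they are NOT the Accellera standard's format.
abstract_model = {
    "ip": "uart_ctrl",
    "clock_ports": [
        {"name": "clk_sys", "domain": "SYS", "clock_period": "10ns"},
        {"name": "clk_apb", "domain": "APB", "clock_period": "20ns"},
    ],
    "ports": [
        {"name": "irq_out", "clock_domain": "SYS", "synchronized": False},
        {"name": "apb_ready", "clock_domain": "APB", "synchronized": True},
    ],
}

def unsynchronized_outputs(model):
    """Ports the SoC integrator must still synchronize at the top level."""
    return [p["name"] for p in model["ports"] if not p["synchronized"]]

# Round-trip through JSON, as any interchange format would require.
restored = json.loads(json.dumps(abstract_model))
print(unsynchronized_outputs(restored))  # ['irq_out']
```

The point of the standard is that both sides of this exchange, the IP provider's sign-off tool and the integrator's tool, agree on one such representation.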

The potential is significant. “If done right and adopted, the industry may arrive at a common language to describe CDC aspects that can streamline the validation process across various tools and environments used by different users,” says Schirrmeister. “As a result, companies will be able to integrate and validate IP more efficiently than before, accelerating development cycles and reducing the complexity associated with SoC integration.”

The standard
Intel’s Olopade describes the approach that was taken during the creation of the standard. “You take the most complex situations you are likely to find, you box them, and you co-design them in order to reduce the risk of bugs,” he said. “The boundaries you create are supposed to be simple boundaries. We took that concept, and we brought it into our definition to say the following: ‘We will look at all kinds of crossings, we will figure out the simple common uses, and we will cover that first.’ That is expected to cover 95% to 98% of the community. We are not trying to handle 700 different exceptions. It is common. It is simple. It is what guarantees production quality, not just from a CDC standpoint, but just from a divide-and-conquer standpoint.”

That was the starting point. “Then we added elements to our design document that says, ‘This is how we will evaluate complexity, and this is how we’ll determine what we cover first,'” he says. “We broke things down into three steps. Step one is clock-domain crossing. Everyone suffers from this problem. Step two is reset-domain crossing (RDC). As low power is getting into more designs, there are a lot more reset domains, and there is risk between these reset domains. Some companies care, but many companies don’t because they are not in a power-aware environment. It became a secondary consideration. Beyond the basic CDC in phase one, and RDC in phase two, all other interesting, small usage complexities will be handled in phase three as extensions to the standard. We are not going to get bogged down supporting everything under the sun.”

Within the standards group there are two sub-groups — a mapping team and a format team. Common standards, such as AMBA, UCIe, and PCIe, have been examined to make sure they are fully covered by the standard. That means the concepts should remain useful for future markets.

“The concepts contained in the standard are extensible to hardened chiplets,” says Mixel’s Nasr. “By providing an accurate standard CDC view for the chiplet, it will enable integration with other chiplets.”

Some of those issues have yet to be fully explored. “The standard’s current documentation primarily focuses on clock-domain crossing within an SoC itself,” says Schirrmeister. “Its direct applicability to the area of chiplets would depend on further developments. The interfaces between fully hardened IP blocks on chiplets would communicate through standard interfaces like UCIe, BoW, or XSR, so the synchronization issues between chiplets on substrates would appear to be elevated to the protocol levels.”

Reset-domain crossings have yet to appear in the standard. “The genesis of CDC is asynchronous clocks,” says Narain. “But the genesis for reset-domain crossing is asynchronous resets. While the destination is due to the clock, the source of the problem is somewhere else. And as a result, the nature of the problem, the methodology that people use to manage that problem, are very different. The kind of information that you need to retain, and the kind of information that you can throw away, is different for every problem. Hence, abstractions are actually very customized for the application.”
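The distinction Narain draws can be made concrete with a toy model. The sketch below is not from the standard; it is a minimal Python illustration, with hypothetical names, of why a CDC-only analysis misses RDC hazards: a crossing is a CDC when the two flops are clocked by asynchronous clocks, but it is an RDC when the source’s asynchronous reset can assert independently of the destination’s, even if both flops share one clock.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flop:
    name: str
    clock: str   # clock domain driving the flop
    reset: str   # asynchronous reset domain of the flop

def classify_crossing(src: Flop, dst: Flop) -> set:
    """Classify a src -> dst path as CDC, RDC, both, or neither.

    CDC: source and destination are in different clock domains.
    RDC: source and destination are in different asynchronous reset
    domains, so the source can be reset mid-cycle relative to the
    destination's clock even when the clocks match.
    """
    kinds = set()
    if src.clock != dst.clock:
        kinds.add("CDC")
    if src.reset != dst.reset:
        kinds.add("RDC")
    return kinds

# Same clock, different resets: a pure RDC, invisible to a CDC-only flow.
tx = Flop("tx_data", clock="clk_core", reset="rst_a")
rx = Flop("rx_data", clock="clk_core", reset="rst_b")
print(classify_crossing(tx, rx))  # {'RDC'}
```

Because the two classifications key off different attributes (clocks versus resets), the information a tool must retain for each analysis differs — which is exactly why Narain argues the abstractions end up customized per application.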

Does the standard cover enough ground? That is part of the purpose of the review period that was used to collect information. “I can see some room for future improvement — for example, making some attributes mandatory like logic, associated_clocks, clock_period for clock ports,” says Nasr. “Another proposed improvement is adding reconvergence information, to be able to detect reconverging outputs of parallel synchronizers.”
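The reconvergence hazard Nasr mentions arises when two signals from the same source domain are each synchronized separately and then recombine: each two-flop synchronizer adds a nondeterministic zero- or one-cycle delay, so the recombined values can be momentarily inconsistent. The following is a minimal, hypothetical Python sketch (not part of the proposed format) of how reconverging outputs could be detected in a netlist graph once synchronizer locations are known.

```python
# Toy netlist: node -> list of fan-in nodes; 'sync_*' are 2-flop synchronizers.
netlist = {
    "sync_a": [],
    "sync_b": [],
    "and_gate": ["sync_a", "sync_b"],   # two synchronizers reconverge here
    "mux_out": ["sync_a"],              # single synchronizer, no hazard
}
synchronizers = {"sync_a", "sync_b"}

def fanin_syncs(node, netlist, synchronizers, seen=None):
    """Collect every synchronizer reachable through the node's fan-in cone."""
    seen = set() if seen is None else seen
    found = set()
    for src in netlist.get(node, []):
        if src in seen:
            continue
        seen.add(src)
        if src in synchronizers:
            found.add(src)
        found |= fanin_syncs(src, netlist, synchronizers, seen)
    return found

def reconverging(netlist, synchronizers):
    """Flag nodes fed by two or more independent synchronizers, whose
    relative latencies are nondeterministic."""
    return [n for n in netlist
            if len(fanin_syncs(n, netlist, synchronizers)) >= 2]

print(reconverging(netlist, synchronizers))  # ['and_gate']
```

An attribute carrying this fan-in information in the abstract model, as Nasr proposes, would let an integrator’s tool flag `and_gate`-style hazards without seeing the IP’s internals.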

The impact of all of this, if realized, is enormous. “If you truly run a collaborative, inclusive, development cycle, two things will happen,” says Olopade. “One, you are going to be able to find multiple ways to solve each problem. You need to understand the pros and cons against the real problems you are trying to solve and agree on the best way we should do it together. For each of those, we record the options, the pros and cons, and the reason one was selected. In a public review, those that couldn’t be part of that discussion get to weigh in. We weigh it against what they are suggesting versus why did we choose it. In the cases where it is part of what we addressed, and we justified it, we just respond, and we do not make a change. If you’re truly inclusive, you do allow that feedback to cause you to change your mind. We received feedback on about three items that we had debated, where the feedback challenged the decisions and got us to rehash things.”

The big challenge
Still, the creation of a standard is just the first step. Unless a standard is fully adopted, its value becomes diminished. “It’s a commendable objective and a worthy endeavor,” says Schirrmeister. “It will make interoperability easier and eventually allow us, and the whole industry, to reduce the compatibility matrix we maintain to deal with vendor tools individually. It all will depend on adoption by the vendors, though.”

It is off to a good start. “As with any standard, good intentions sometimes get severed by reality,” says Campbell. “There has been significant collaboration and agreements on how the standard is being pushed forward. We did not see self-submarining, or some parties playing nice just to see what’s going on but not really supporting it. This does seem like good collaboration and good decision making across the board.”

Implementation is another hurdle. “Will it actually provide the benefit that it is supposed to provide?” asks Narain. “That will depend upon how completely and how quickly EDA tool vendors provide support for the standard. From our perception, the engineering challenge for implementing this is not that large. When this is standardized, we will provide support for it as soon as we can.”

Even then, adoption isn’t a slam dunk. “There are short- and long-term problems,” warns Campbell. “IP vendors already have to support multiple formats, but now you have to add Accellera on top of that. There’s going to be some pain both for the IP vendors and for EDA vendors. We are going to have to be backward-compatible and some programs go on for decades. There’s a chance that some of these models will be around for a very long time. That’s the short-term pain. But the biggest hurdle to overcome for a third-party IP vendor, and EDA vendor, is quality assurance. The whole point of a hierarchical development methodology is faster CDC closure with no loss in quality. The QA load here is going to be big, because no customer is going to want to take the risk if they’ve got a solution that is already working well.”

Some of those issues and fears are expected to be addressed at the upcoming DVCon conference. “We will be providing a tutorial on CDC,” says Olopade. “The first 30 minutes covers the basics of CDC for those who haven’t been doing this for the last 10 years. The next hour will talk about the Accellera solution. It will concentrate on those topics which were hotly debated, and we need to help people understand, or carry people along with what we recommend. Then it may become more acceptable and more adoptive.”

Related Reading
Design And Verification Methodologies Breaking Down
As chips become more complex, existing tools and methodologies are stretched to the breaking point.

The post Accellera Preps New Standard For Clock-Domain Crossing appeared first on Semiconductor Engineering.
