Guayota launch trailer

By: Brian

Courtesy of Dear Villagers and Team Delusion, we have the launch trailer for Guayota. The puzzle adventure game made it to Switch just recently. Find a bunch of information about it in the following eShop description: Inspired by legends related to the Canary Islands and the Guanches mythology, Guayota depicts the story of a group of explorers, sent by the Spanish...

The post Guayota launch trailer appeared first on Nintendo Everything.

Disney Dreamlight Valley details Dapper Delights update

By: Brian

August 14: Disney Dreamlight Valley is gearing up for its next update, which is known as Dapper Delights. It’ll come to Switch on August 21, 2024. On the same day, Treasures of Time – Act III for the paid Expansion Pass, Disney Dreamlight Valley: A Rift in Time, arrives as well. Here’s some additional information about Dapper Delights:...

The post Disney Dreamlight Valley details Dapper Delights update appeared first on Nintendo Everything.

Farewell North launch trailer

By: Brian

Mooneye Studios distributed a final trailer for Farewell North, an open world adventure experience. This one just recently arrived on Switch. Here’s an overview of the game, courtesy of the eShop listing: In Farewell North you take on the unique role of a border collie restoring color to your human’s world. Explore the hand-crafted, atmospheric islands inspired by the Scottish...

The post Farewell North launch trailer appeared first on Nintendo Everything.

Floatopia announced for Switch

By: Brian

During today’s Gamescom: Opening Night Live presentation, Floatopia was revealed for multiple platforms, including Switch. It’s set to arrive in 2025. Floatopia, coming from NetEase Games, is a life sim. Players can travel among super-powered worlds to craft their island and engage with friends. Lots of details can be found in the following overview: It’s time to step into a...

The post Floatopia announced for Switch appeared first on Nintendo Everything.

Little Nightmares III delayed to 2025

By: Brian

May 31: Little Nightmares III has been delayed, and will not be making it out this year as originally planned. Bandai Namco now intends to publish the title in 2025. Little Nightmares III was first unveiled at Gamescom last August. It’s a notable shift for the series as the first two titles were made by Tarsier Studios. This one is...

The post Little Nightmares III delayed to 2025 appeared first on Nintendo Everything.

Civilization VII release date set for February, first gameplay trailer

By: Brian

Following its announcement in June, we now have a release date and proper look at Civilization VII with a fresh gameplay trailer. The video just debuted at Gamescom: Opening Night Live 2024. Civilization VII now has a release date of February 11, 2025. The game will come in the form of a standard version and Deluxe Edition – if you...

The post Civilization VII release date set for February, first gameplay trailer appeared first on Nintendo Everything.

When Attacks on Anarchists Accidentally Improved Free Speech Law

A portion of the book cover of 'American Anarchy' | Basic Books

American Anarchy: The Epic Struggle between Immigrant Radicals and the US Government at the Dawn of the Twentieth Century, by Michael Willrich, Basic Books, 480 pages, $35

The lawmaking and policing powers of late 19th and early 20th century America did not think anarchist agitators deserved the protective penumbra of our Constitution. After Emma Goldman immigrated to the United States in 1885 from czarist Russia, she became a dynamic and hugely popular traveling lecturer on anarchism and other rebellious causes, such as draft resistance and contraception. Consequently, she was arrested a lot—and in 1919, along with hundreds of other accused anarchists, she was deported to what was now Bolshevik Russia. (Goldman's version of anarchism was not the free market kind; she wanted to eliminate private property as well as the state.)

Many anarchists saw a bright side to these legal fights: an opportunity to preach their beliefs in a courtroom setting, where the press often amplified their message. The anarchists sentenced to death in the notorious 1886 Chicago Haymarket bombing case spent three days in court laying out their beliefs; in one of their own trials, Goldman and her sometime consort and lifelong comrade, Alexander Berkman, settled for five hours of speaking their anarchist minds.

Berkman did more than lecture against the state and capitalism; in 1892 he decided to try to kill a murderously strikebreaking Carnegie Steel factory manager, Henry Frick. (While he shot and stabbed Frick, he failed to kill him.) This did not help public opinion of their cause. Neither did the fact that Leon Czolgosz, the 1901 assassin of President William McKinley, was a self-proclaimed anarchist who claimed that Goldman's rhetoric had "set me on fire."

In American Anarchy, the Brandeis historian Michael Willrich argues that those legal battles surrounding anarchism in America forged two distinct and opposing elements of modern American policing and law.

On one hand, the anarchists' enemies, from New York City cops to military intelligence to the departments of Labor and Justice, built a wider and more intrusive system of political surveillance and repression to quell and expel the anarchists. These systems' techniques—often relying on frequently unreliable, nativist, and paranoid citizen snoops and snitches—might seem quaint in the post–Edward Snowden age. They also seem especially brutal, given the habit that era's cops had of giving "the third degree" (that is, terrible beatings) to seditious radicals, and to people the officers merely assumed were seditious radicals. Many prosecutions hinged on the accuracy, or not, of some cop's written notes on what a suspect had allegedly said in public.

This repressive apparatus, Willrich writes, was "cobbled…together by putting public power in the hands of private civilian operatives, harnessing local police to national purposes, and drawing upon surveillance technologies developed both in the U.S.-ruled Philippines and in the internal immigrant 'colonies' of New York." The result was "an inefficient and stunningly violent operation that foiled few actual plots, put thousands of people on trial for speaking out against capitalism or the war….and showed an almost total disregard for…constitutional liberties."

And that planted the seeds of these battles' second great effect: Ironically, they ultimately made First Amendment doctrine more respectful of free expression. After the crackdown on the anarchists died down, and past the Cold War repressions under the Smith Act, it became more difficult to imagine anyone could go to jail in America solely for saying or writing a political heresy. Even when people are targeted for their speech, propriety requires that a more substantial charge be added. (The modern inheritor of the mantle of "enemy for whom constitutional protections can be ignored" is the drug seller and user, though different amendments are implicated.)

Three prosecutions during the World War I–era crackdown on political dissidents under the Espionage Act ended up before the Supreme Court. Free expression lost every time. But in Abrams v. United States, based on a 1918 expansion of the Espionage Act known as the Sedition Act, a dissent signed by two justices established an attitude toward the First Amendment's reach that became standard over the course of the 20th century.

In August 1918, the Army Corps of Intelligence Police had arrested a group of Russian immigrants in New York for distributing allegedly seditious pamphlets. The defendants insisted that the literature—many copies of which were tossed out windows for passersby on the street—was not meant to impede the ongoing U.S. war efforts against Germany, that being the basis for many of the charges. The literature instead opposed U.S. interference in revolutionary Russia, with which we were not in a constitutionally declared war.

The Abrams defendants were represented by Goldman's lawyer, Harry Weinberger. His role in Willrich's narrative is as central as hers and Berkman's. (Willrich argues that the war on anarchists essentially created the modern figure of the civil liberties lawyer.) The Supreme Court upheld the convictions, 7–2. But a dissent authored by Justice Oliver Wendell Holmes (who had written the earlier, bad decisions in the Espionage Act cases) laid out a First Amendment vision that more strictly limited when government could constitutionally punish expression: only if said expression represents a "present danger of immediate evil or an intent to bring it about."

After reading the dissent, a future founder of the American Civil Liberties Union wrote to Weinberger that "we are going to put it to some use all right." Civil libertarians in and out of the judiciary have been doing so ever since, in ways that have expanded Americans' expressive rights.

***

Things got predictably worse for civil liberties and for anarchists as the war went on. The 1918 Immigration Act, as Willrich sums it up, "authorized the secretary of labor to deport any person identified as a noncitizen and an anarchist." Even your individual beliefs could be elided, since "being a member of an organization that advocated 'anarchistic' ideas was now sufficient cause for deportation." Having built your life here productively for decades and having a family was not enough to save you from being grabbed and shipped out, if a government official thought you didn't believe the state should exist. (In 1903, during the post-Czolgosz wave of anti-anarchist action, Congress passed an immigration law that barred entry to anarchists, though it was difficult to enforce and in its first seven years caught a mere 10 anarchists among millions of immigrants entering.)

The story of the anarchist crackdown is, for good reasons, often used as a crackerjack historical example of the anti-liberty madness that even the supposed land of the free can descend to. This wave of anarchist repression was indeed destructive to many people and organizations—the Industrial Workers of the World, for example, were nearly annihilated by mass raids and arrests.

But the aftermath of these authoritarian spasms suggests we should give at least half a cheer for the Constitution. The rights it lays out were sorely dishonored, but at least they could be called upon eventually.

After World War I ended, President Woodrow Wilson commuted sentences for more than 125 Espionage Act prisoners. One assistant secretary of labor—Louis Post, who actually respected the Constitution—canceled 1,140 deportation orders, nearly three-quarters of the cases he was able to review when briefly in command of the process. The notorious 1919 and 1920 Palmer Raids sent 500 accused radicals to Ellis Island for deportation, but as public opinion and the grinding of the courts turned against the mania, only 23 of them were actually deported. And in 1933, President Franklin Roosevelt gave a general amnesty to the remaining World War I–era political prisoners.

Contrast that with Russia, where many of the anarchists were deported. The Bolshevik state murdered many of them, including two of the Abrams defendants.

Willrich's richly detailed study is especially relevant today, as that expansive sense of First Amendment rights that Willrich traces back to Holmes' Abrams dissent is under fresh fire from legal academics who see the amendment as a barrier to progressive change, from young Americans who think certain possibly hurtful things ought not be legally spoken, and from a culture that in general seems increasingly and angrily eager to shut opponents up. This valuable book shows one big reason why an expansive reading of the First Amendment is important: Without it, human beings have been beaten by cops and exiled from their home, just for saying or writing things the authorities don't like.

Goldman, for one, thought America was better than that. She once told a huge crowd in New York City that when people like her denounced war and conscription, they did this not because "we are foreigners and don't care." They had come here "looking to America as the promised land," and they grappled with the country's errors "precisely because we love America."

The post When Attacks on Anarchists Accidentally Improved Free Speech Law appeared first on Reason.com.

The Best Computer Vacuums and Dust Blowers for Your Desk

Whether you work from home or in an office, your desk will quickly become messy and cluttered with all sorts of things. Bits of paper, crumbs, pet hair—you name it. Thankfully, there are tiny vacuum cleaners, dust blowers and dusters that are made for these small surfaces, and maybe even your laptop and accessories. You can use brushes and wipes, but you’ll likely get a more powerful clean with a compact cleaner or tool.

If you aren't sure an actual handheld or tiny vacuum is what you need, another practical cleaning device to consider is a duster or electric dust blower. These are stronger and less wasteful than compressed air cans and can double up as a keyboard or PC cleaning option. The below picks are small enough to fit at your desk and strong enough to fight the mess!

Best Computer Vacuums and Dust Blowers

🔥 Editor's Pick: DataVac Dust Blower

IGN's Commerce Manager, Eric Song, recommends this environmentally friendly dust blower, which draws over 450W of power. Plus, he said it's been working well since he bought it (back in 2012). Eric also mentioned a bonus use for this one: using it to blow dry his dog after a bath! Who knew? You can check out his favorite model below, which is on sale right now at 30% off:

More Small Vacuums for Cleaning Your Computer and Desk Space

If you want something lightweight, powerful and easy to use, the below choices provide options for whatever your needs are right now. Most of these vacs are cordless and rechargeable, so there's no extra hassle. Whether you're looking for an all-in-one duster and vacuum or one that comes with different nozzles, you'll find something to help clean up your desk space and beyond.

Tiny, Cute Contenders:

If the above options don't do the trick, these three small desk vacuums are perfect for surface area messes on your desk, coffee table and beyond! Plus, they're super affordable.

Lastly, if you do want to go the compressed air route, but make it fancier, there's a deal on an electric compressed air duster right now:

SGJ Podcast #465 – Favorite Gaming Vacation Spots

Hey friends, welcome to the show! This week, Spaz, Julie, Thorston, Jacob, David and I talk about video game locations at which we’d like to take a vacation or holiday. Independently, three of us came up with The Citadel from Mass Effect, but there were a lot of other great entries too, as you can...

The post SGJ Podcast #465 – Favorite Gaming Vacation Spots appeared first on Space Game Junkie.

SGJ Podcast #464 – Favorite Gaming Soundtracks

Hey friends, welcome to this week’s show! This week, Spaz, Julie, Thorston, Jacob, David and I discuss our favorite video game soundtracks, the list of which you can see below! It’s a pretty wide-ranging list, and we had a lot of fun talking about all this wonderful music. Next week on the show, we’ll talk...

The post SGJ Podcast #464 – Favorite Gaming Soundtracks appeared first on Space Game Junkie.

Miscellany: Changes to the Streaming Schedule

Hello my friends, long time no post! I’m writing this because, honestly, I feel like my streaming schedule has gotten, for me, a little stale. I love space game Mondays and flight sim Fridays, but the other three days? I find myself getting less and less interested in doing random games, and there are some...

The post Miscellany: Changes to the Streaming Schedule appeared first on Space Game Junkie.

SGJ Podcast #461 – One-Hit Wonders, Part Two

Hey friends, welcome to this week’s show! This week, Spaz, Julie, Thorston, Jacob, David and I revisit the topic of one-hit wonders in gaming, and we have even more fun games to talk about, the list of which you can see below. I’d not heard about many of these, some I’d forgotten about (like SimEarth),...

The post SGJ Podcast #461 – One-Hit Wonders, Part Two appeared first on Space Game Junkie.

SGJ Podcast #458 – Gaming Cinderella Stories

Hey friends, welcome to this week’s show! This week, Spaz, Julie, Thorston, Jacob, David, and I talked about games that had a Cinderella story. You know, they had a rough start and have since flourished. We came up with a pretty fun list, which you can see below! Next week, we’ll do our regular check-in!...

The post SGJ Podcast #458 – Gaming Cinderella Stories appeared first on Space Game Junkie.

Another Crab’s Treasure has sold over 500,000 copies

By: Brian

Another Crab’s Treasure reached another milestone and developer Aggro Crab shared the news on social media. We now know that total sales have surpassed 500,000 copies since launch. That is based on data across all platforms. Another Crab’s Treasure hit Switch towards the end of April. About a week later, it was confirmed that over 500,000 copies had been sold....

The post Another Crab’s Treasure has sold over 500,000 copies appeared first on Nintendo Everything.

Donkey Kong Country icons added to Nintendo Switch Online

By: Brian

A new retro game is now being featured on Nintendo Switch Online, and subscribers can pick up icons based on Donkey Kong Country. Seven icons are being distributed, and each one is a character. You can claim designs for Donkey Kong, Diddy Kong, King K. Rool, and more. Aside from an active Nintendo Switch Online membership, there are a couple...

The post Donkey Kong Country icons added to Nintendo Switch Online appeared first on Nintendo Everything.

Lollipop Chainsaw Repop announced for Switch

By: Brian

June 13: Lollipop Chainsaw Repop is getting a release on Switch, Dragami Games just revealed. A worldwide release is planned for September 26, 2024. Lollipop Chainsaw was originally developed by Grasshopper Manufacture, the studio behind No More Heroes and more. It was never previously released on a Nintendo console. Those that played the original can look forward to various improvements and...

The post Lollipop Chainsaw Repop announced for Switch appeared first on Nintendo Everything.

Bare Butt Boxing gameplay

By: Brian

Bare Butt Boxing recently released on Switch, and new gameplay shows off the final build. You can watch 13 minutes of footage from the multiplayer brawler. For more information about what to expect from the title, read the following overview: Fight in completely unregulated alien boxing bouts and punch out players online in Bare Butt Boxing, a chaotic multiplayer brawler...

The post Bare Butt Boxing gameplay appeared first on Nintendo Everything.

EvoMon launch trailer

By: Brian

Courtesy of RedDeerGames and Beowulfus Universum, we’ve got a launch trailer for EvoMon. The creature collecting game just came to Switch recently. Learn more about it in the following overview: Here they are – EvoMons – extraordinary creatures. Prepare them to become true EvoMon Champions! Take care of their needs, feed and train them while watching how a tiny, vulnerable...

The post EvoMon launch trailer appeared first on Nintendo Everything.

Puzzle platformer Escape from the Pharaoh’s Tomb hitting Switch this week

By: Brian

Escape from the Pharaoh’s Tomb, the latest release from Ratalaika Games, has been announced for Switch. The title is due out this week – specifically August 9, 2024. Escape from the Pharaoh’s Tomb is a puzzle platformer in which players aim beams of light to open a locked door and proceed to the next level. More information can be found...

The post Puzzle platformer Escape from the Pharaoh’s Tomb hitting Switch this week appeared first on Nintendo Everything.

Europe’s top 15 downloads on the Switch eShop for July 2024

By: Brian

In a recent news post sent out to Switch owners, Nintendo provided a listing of the top 15 European eShop downloads for July 2024. Nintendo World Championships: NES Edition and Teenage Mutant Ninja Turtles: Splintered Fate make the list in their debuts at #6 and #10 respectively. Luigi’s Mansion 2 HD is still doing well, and only slips to second...

The post Europe’s top 15 downloads on the Switch eShop for July 2024 appeared first on Nintendo Everything.

Cult of the Lamb reveals Pilgrim Pack

By: Brian

Cult of the Lamb is gearing up for the release of new content, and the Pilgrim Pack was just revealed. The DLC will be live on all platforms, including Switch, on August 12, 2024. On the same day, the previously-announced Unholy Alliance update will be available featuring local co-op and more – read about it here. According to the description,...

The post Cult of the Lamb reveals Pilgrim Pack appeared first on Nintendo Everything.

Verne: The Shape of Fantasy out on Switch this month, new trailer

By: Brian

Years after it was originally announced for the system, publisher Assemble Entertainment and developer Gametopia shared the release date for Verne: The Shape of Fantasy on Switch. It’ll be ready to go on August 22, 2024. Verne: The Shape of Fantasy was first confirmed for Switch in 2020. The original launch target was 2021, so needless to say things took ...

The post Verne: The Shape of Fantasy out on Switch this month, new trailer appeared first on Nintendo Everything.

‘Accelerate Everything,’ NVIDIA CEO Says Ahead of COMPUTEX

“Generative AI is reshaping industries and opening new opportunities for innovation and growth,” NVIDIA founder and CEO Jensen Huang said in an address ahead of this week’s COMPUTEX technology conference in Taipei.

“Today, we’re at the cusp of a major shift in computing,” Huang told the audience, clad in his trademark black leather jacket. “The intersection of AI and accelerated computing is set to redefine the future.”

Huang spoke ahead of one of the world’s premier technology conferences to an audience of more than 6,500 industry leaders, press, entrepreneurs, gamers, creators and AI enthusiasts gathered at the glass-domed National Taiwan University Sports Center set in the verdant heart of Taipei.

The theme: NVIDIA accelerated platforms are in full production, whether through AI PCs and consumer devices featuring a host of NVIDIA RTX-powered capabilities or enterprises building and deploying AI factories with NVIDIA’s full-stack computing platform.

“The future of computing is accelerated,” Huang said. “With our innovations in AI and accelerated computing, we’re pushing the boundaries of what’s possible and driving the next wave of technological advancement.”

‘One-Year Rhythm’

More’s coming, with Huang revealing a roadmap for new semiconductors that will arrive on a one-year rhythm. Revealed for the first time, the Rubin platform will succeed the upcoming Blackwell platform, featuring new GPUs, a new Arm-based CPU — Vera — and advanced networking with NVLink 6, CX9 SuperNIC and the X1600 converged InfiniBand/Ethernet switch.

“Our company has a one-year rhythm. Our basic philosophy is very simple: build the entire data center scale, disaggregate and sell to you parts on a one-year rhythm, and push everything to technology limits,” Huang explained.

NVIDIA’s creative team used AI tools from members of the NVIDIA Inception startup program, built on NVIDIA NIM and NVIDIA’s accelerated computing, to create the COMPUTEX keynote. Packed with demos, this showcase highlighted these innovative tools and the transformative impact of NVIDIA’s technology.

‘Accelerated Computing Is Sustainable Computing’

NVIDIA is driving down the cost of turning data into intelligence, Huang explained as he began his talk.

“Accelerated computing is sustainable computing,” he emphasized, outlining how the combination of GPUs and CPUs can deliver up to a 100x speedup while only increasing power consumption by a factor of three, achieving 25x more performance per Watt over CPUs alone.

“The more you buy, the more you save,” Huang noted, highlighting this approach’s significant cost and energy savings.

Industry Joins NVIDIA to Build AI Factories to Power New Industrial Revolution

Leading computer manufacturers, particularly from Taiwan, the global IT hub, have embraced NVIDIA GPUs and networking solutions. Top companies include ASRock Rack, ASUS, GIGABYTE, Ingrasys, Inventec, Pegatron, QCT, Supermicro, Wistron and Wiwynn, which are creating cloud, on-premises and edge AI systems.

The NVIDIA MGX modular reference design platform now supports Blackwell, including the GB200 NVL2 platform, designed for optimal performance in large language model inference, retrieval-augmented generation and data processing.

AMD and Intel are supporting the MGX architecture with plans to deliver, for the first time, their own CPU host processor module designs. Any server system builder can use these reference designs to save development time while ensuring consistency in design and performance.

Next-Generation Networking with Spectrum-X

In networking, Huang unveiled plans for the annual release of Spectrum-X products to cater to the growing demand for high-performance Ethernet networking for AI.

NVIDIA Spectrum-X, the first Ethernet fabric built for AI, delivers 1.6x the network performance of traditional Ethernet fabrics. It accelerates the processing, analysis and execution of AI workloads and, in turn, the development and deployment of AI solutions.

CoreWeave, GMO Internet Group, Lambda, Scaleway, STPX Global and Yotta are among the first AI cloud service providers embracing Spectrum-X to bring extreme networking performance to their AI infrastructures.

NVIDIA NIM to Transform Millions Into Gen AI Developers

With NVIDIA NIM, the world’s 28 million developers can now easily create generative AI applications. NIM — inference microservices that provide models as optimized containers — can be deployed on clouds, data centers or workstations.

NIM also enables enterprises to maximize their infrastructure investments. For example, running Meta Llama 3-8B in a NIM produces up to 3x more generative AI tokens on accelerated infrastructure than without NIM.

Nearly 200 technology partners — including Cadence, Cloudera, Cohesity, DataStax, NetApp, Scale AI, and Synopsys — are integrating NIM into their platforms to speed generative AI deployments for domain-specific applications, such as copilots, code assistants, digital human avatars and more. Hugging Face is now offering NIM — starting with Meta Llama 3.

“Today we just posted up in Hugging Face the Llama 3 fully optimized, it’s available there for you to try. You can even take it with you,” Huang said. “So you could run it in the cloud, run it in any cloud, download this container, put it into your own data center, and you can host it to make it available for your customers.”
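
To make Huang's "run it anywhere" point concrete, here is a minimal sketch of what querying a self-hosted NIM container can look like. It assumes the container is already running locally and serves the OpenAI-compatible chat API that NIM microservices are documented to provide; the image name, port, and model identifier below are illustrative assumptions rather than details from the keynote.

```python
# Minimal sketch: query a locally hosted NIM microservice.
# Assumes a container was started along these lines (image name illustrative):
#   docker run --gpus all -p 8000:8000 nvcr.io/nim/meta/llama3-8b-instruct
# and that it exposes an OpenAI-compatible /v1/chat/completions endpoint.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",  # assumed local endpoint
    json={
        "model": "meta/llama3-8b-instruct",  # assumed model identifier
        "messages": [
            {"role": "user", "content": "Summarize NVIDIA NIM in one sentence."}
        ],
        "max_tokens": 64,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

The same request would work against the container wherever it is hosted, whether a cloud instance or an on-premises server, which is the portability Huang is describing.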

NVIDIA Brings AI Assistants to Life With GeForce RTX AI PCs

NVIDIA’s RTX AI PCs, powered by RTX technologies, are set to revolutionize consumer experiences with over 200 RTX AI laptops and more than 500 AI-powered apps and games.

The RTX AI Toolkit and newly available PC-based NIM inference microservices for the NVIDIA ACE digital human platform underscore NVIDIA’s commitment to AI accessibility.

Project G-Assist, an RTX-powered AI assistant technology demo, was also announced, showcasing context-aware assistance for PC games and apps.

And Microsoft and NVIDIA are collaborating to help developers bring new generative AI capabilities to their Windows native and web apps, with easy API access to RTX-accelerated SLMs that enable RAG capabilities running on-device as part of Windows Copilot Runtime.

NVIDIA Robotics Adopted by Industry Leaders

NVIDIA is spearheading the $50 trillion industrial digitization shift, with sectors embracing autonomous operations and digital twins — virtual models that enhance efficiency and cut costs. Through its Developer Program, NVIDIA offers access to NIM, fostering AI innovation.

Taiwanese manufacturers are transforming their factories using NVIDIA’s technology, with Huang showcasing Foxconn’s use of NVIDIA Omniverse, Isaac and Metropolis to create digital twins, combining vision AI and robot development tools for enhanced robotic facilities.

“The next wave of AI is physical AI. AI that understands the laws of physics, AI that can work among us,” Huang said, emphasizing the importance of robotics and AI in future developments.

The NVIDIA Isaac platform provides a robust toolkit for developers to build AI robots, including AMRs, industrial arms and humanoids, powered by AI models and supercomputers like Jetson Orin and Thor.

“Robotics is here. Physical AI is here. This is not science fiction, and it’s being used all over Taiwan. It’s just really, really exciting,” Huang added.

Global electronics giants are integrating NVIDIA’s autonomous robotics into their factories, leveraging simulation in Omniverse to test and validate this new wave of AI for the physical world. This includes over 5 million preprogrammed robots worldwide.

“All the factories will be robotic. The factories will orchestrate robots, and those robots will be building products that are robotic,” Huang explained.

Huang emphasized NVIDIA Isaac’s role in boosting factory and warehouse efficiency, with global leaders like BYD Electronics, Siemens, Teradyne Robotics and Intrinsic adopting its advanced libraries and AI models.

NVIDIA AI Enterprise on the IGX platform, with partners like ADLINK, Advantech and ONYX, delivers edge AI solutions meeting strict regulatory standards, essential for medical technology and other industries.

Huang ended his keynote on the same note he began it on, paying tribute to Taiwan and NVIDIA’s many partners there. “Thank you,” Huang said. “I love you guys.”

The Black Panther Who Was Banned From the Ballot

Photo: Contraband Collection/Alamy

Donald Trump was not the first celebrity presidential candidate who could reasonably be accused of insurrection against the United States. Many decades before Trump, another best-selling author and charismatic leader in a rowdy movement to upend dominant American political mores aimed for the U.S. presidency—Eldridge Cleaver, the Black Panthers' minister of information and the author of Soul on Ice.

Unlike Trump, who this year overcame challenges from Colorado, Maine, and Illinois about his eligibility due to the Constitution's Insurrection Clause, Cleaver couldn't be caught up by the 14th Amendment, Section 3, since that explicitly only bars insurrectionists who had already been government officials. But Cleaver faced his own eligibility hurdles.

In 1968, as the first presidential nominee of the Peace and Freedom Party (PFP), formed mostly by antiwar radicals disenchanted with Lyndon Johnson's Democratic Party, Cleaver was below the constitutionally mandated age of 35 and would still have been on Inauguration Day in 1969. At least three states did eliminate his name, if not his party, from the ballot for this reason.

Many states, however, allowed someone absolutely constitutionally disqualified to remain on their ballot; in Iowa, as reported in the Davenport Times-Democrat, the secretary of state "ruled that he must accept the certification in the absence of positive proof that Cleaver is not of eligible age."

While the various charges haunting Trump during his current campaign involve less violent crimes, Cleaver, four months before receiving the PFP nomination with 74 percent of the delegates' votes, engaged in a firefight with Oakland police that resulted in another Panther's death. He was thus campaigning while out on bail, pending trial for three counts of assault and attempted murder.

As the PFP's candidate, Cleaver certainly sounded like an insurrectionist, not that there was anything (constitutionally) wrong with that. In a campaign speech, as printed in a 1968 issue of the North American Review, Cleaver said: "What we need is a revolution in the white mother country and national liberation for the black colony. To achieve these ends we believe that political and military machinery that does not exist now and has never existed must be created."

The PFP, aligning with the Panthers, pushed Cleaver as its presidential hopeful with a dual agenda, as expressed by member Richard Yanowitz in an online memoir of PFP history: "immediate withdrawal from Vietnam and support for black liberation and self-determination."

During the PFP's inaugural California convention, Cleaver said that he regarded black members of the PFP as "misguided political freaks," but he eventually embraced the alliance and accepted the PFP's national nomination, saying on the campaign trail that "we believe that all black colonial subjects should be members of the Black Panther Party, and that all American citizens should be members of the Peace and Freedom Party." The Panthers' intention, he said, was to "use our papier-mâché right to vote to help strengthen the Peace and Freedom Party and to help it attain its objectives within the framework of political realities in the mother country."

The leftist political tumult out of which the PFP arose in 1968 had many elements that echo modern-day political dynamics. Debates raged about whether black activists should have influence above their numbers and whether the movement should explicitly oppose Zionism. The same sorts of petition barricades to getting a new party on the ballot existed then, though the PFP's campaign in California in particular was a huge success, with 105,000 signatures gathered when only 66,000 were needed.

But rumors persisted about how clearly petitioners informed signers that they were officially registering with the party. PFPers insisted they let signers know they could change their registration back after the PFP got ballot access and before the election. And indeed, the PFP got over 70 percent fewer votes for the presidential race in California than it did petition signatures.

Despite his patent ineligibility and being knocked off the ballot in a few states, Cleaver garnered over 36,000 votes nationwide in his PFP campaign. In late September, he polled at 2 percent in California but received far fewer votes on Election Day—a common fate for third-party candidates. Shortly after his electoral defeat, Cleaver fled the U.S. rather than face trial for the Oakland incident, not returning until 1975, after which he served less than a year in jail along with lots of probation and community service.

The cases of Trump and Cleaver illustrate a persistent American theme. Whether because they are mad at the perverted communists dominating the Democratic Party (as per MAGA) or the colonialist and imperialist white power structure (as per the PFP), a segment of American voters want insurrectionist candidates. Who are election officials to deny them?

The post The Black Panther Who Was Banned From the Ballot appeared first on Reason.com.

David Boaz, RIP

David Boaz | Illustration: Lex Villena

David Boaz, longtime executive vice president at the Cato Institute, died this week at age 70 in hospice after a battle with cancer.

Boaz was born in Kentucky in 1953 to a political family, with members holding the offices of prosecutor, congressman, and judge. He was thus the type who was "staying up to watch the New Hampshire primary when I was 10 years old," as he said in a 1998 interview for my book Radicals for Capitalism: A Freewheeling History of the Modern American Libertarian Movement.

In the early to mid-1970s, Boaz was a young conservative activist, working on conservative papers at Vanderbilt University, where he was a student from 1971 to 1975. After graduation, he worked with Young Americans for Freedom (YAF), in whose national office he served in various capacities from 1975 to 1978, including editing its magazine, New Guard.

In the 1970s, he recalls, YAF members saw themselves not merely as College Republicans but as "organized around a set of ideas." When he started with YAF he already thought of himself as a libertarian but saw libertarianism "as a brand of conservatism. But during my tenure at YAF, as I got to know people in the libertarian movement, I came to believe that conservatives and libertarians were not the same thing and it became uncomfortable for me to work in the YAF office."

Now fully understanding libertarianism as something distinct from right-wing conservatism, "I badgered Ed Crane to find me a job and take me away from all this." Boaz had met him when Crane was representing the Libertarian Party (L.P.) at the Conservative Political Action Conference in the mid-'70s and kept in touch with him when Crane was running Cato from San Francisco from 1977 to 1981. Via his relationship with Crane, Boaz became one of two staffers on Ed Clark's campaign for governor of California in 1978, which earned over 5 percent of the popular vote. (Clark was officially an independent because of ballot access requirements but was a member of the L.P. and ran with L.P. branding.)

Boaz then worked with the now-defunct Council for a Competitive Economy (CCE) from 1978 to 1980, which he described as "a free market group of businessmen opposed not only to regulations and taxes but to subsidies and tariffs…in effect it was to be a business front group for the libertarian movement." He left CCE to work on Ed Clark's 1980 L.P. presidential campaign, where Boaz wrote, commissioned, and edited campaign issue papers as well as the chapters written by the various ghosts for Clark's official campaign book. Boaz also did speech writing and road work with Clark.

The campaign Boaz worked on earned slightly over 1 percent, 920,000 total votes—records for the L.P. that were not beaten until Gary Johnson's 2012 run (in raw votes) and 2016 run (in percentages). "The Clark campaign was organized around getting ideas across in a way that is not outside the bounds of what was politically plausible," Boaz reminisced in a 2022 interview. "When John Anderson got in [the 1980 presidential race as an independent], we recognized he was going to provide a more prominent third-party choice, maybe taking away our socially liberal, fiscally conservative, well-educated vote, and he ended up getting 6 percent. We just barely got 1 percent. And although we said, 'This is unprecedented, blah blah,' in fact we were very disappointed."

Boaz began working at the Cato Institute when it moved to D.C. in 1981, where he became executive vice president and stayed until his retirement in 2023. He was Cato's leading editorial voice for decades, setting the tone for what was among the most well-financed and widely distributed institutional voices for libertarian advocacy. Cato, with Boaz's guidance, provided a stream of measured, bourgeois outreach policy radicalism intended to appeal to a wide-ranging audience of normal Americans, not just those marinated in specifically libertarian movement heroes, styles, and concerns.

Boaz was, for example, an early voice getting drug legalization taken seriously in citadels of American cultural power with a forward-thinking 1988 New York Times op-ed that concluded presciently: "We can either escalate the war on drugs, which would have dire implications for civil liberties and the right to privacy, or find a way to gracefully withdraw. Withdrawal should not be viewed as an endorsement of drug use; it would simply be an acknowledgment that the cost of this war—billions of dollars, runaway crime rates and restrictions on our personal freedom—is too high."

Boaz wrote what remains the best one-volume discussion of libertarian philosophy and practice for an outward-facing audience, one that while not losing track of practical policy issues also provided a tight, welcoming sense of the philosophical reasons behind libertarian beliefs in avoiding violence as much as possible to settle social or political disputes, published as Libertarianism: A Primer in 1997.

Boaz's book rooted its explanatory style in the American founding, cooperation, personal responsibility, charity, and uncoerced civil society in all its glories. He explained the necessity and purpose of property, profits, and entrepreneurship; how liberty is conducive to an economically healthy and wealthy society; and how government interferes with the growth-producing properties of the system of natural liberty. He discussed the nature and excesses of government in practice and applied libertarian perspectives to many specific policy issues: health care, poverty, the budget, crime, education, even "family values." Boaz's book is thorough, even-toned, erudite, and thoughtful, and intended for mass persuasion, not the sour delights of freaking out the normies with your radicalism.

Meeting Boaz in 1991 when I was an intern at Cato (and later an employee until 1994) was bracing to this wet-behind-the-ears young libertarian, who arose from a more raffish, perhaps less civilized branch of activism. As a supervisor and colleague, Boaz was a civilized adult, stylish, nearly suave, but was patient nonetheless with wilder young libertarians, having dealt with many of them.

His very institutional continuity—though it was barely two decades long at that point—was influential in a quiet way to the younger crew. It imbued a sense that one needn't frantically demand instant victory, no matter how morally imperative the cause of freedom was. Boaz's calm sense of historical sweep both as a living person and in his capacious knowledge of the history of classical liberal ideas was an antidote to both despair and opportunism for the young libertarians he worked with.

His edited anthology The Libertarian Reader: Classic & Contemporary Writings from Lao-Tzu to Milton Friedman—which came out accompanying his primer in 1997—was a compact proof of libertarianism's rich, long tradition, showing how it was in many ways the core animating principle of the American Founding and to a large extent the entire Enlightenment and everything good, just, and rich about the whole Western tradition. The anthology featured the best of libertarian heroes both old and modern, such as Thomas Paine, Adam Smith, Thomas Jefferson, John Stuart Mill, Herbert Spencer, Lysander Spooner, and Benjamin Constant from previous centuries and Milton Friedman, Friedrich Hayek, Ayn Rand, Murray Rothbard, and Ludwig von Mises from the 20th, as well as providing even wider context with more ancient sources ranging from the Bible to Lao Tzu. He also placed the libertarian tradition rightly as core to the fights for liberation for women and blacks, with entries from Frederick Douglass, William Lloyd Garrison, and Angelina and Sarah Grimké.

Asked in 1998 why he chose a career pushing often unpopular and derided ideas up a huge cultural and political hill, Boaz told me: "I think it's satisfying and fun. I believe strongly in these values and at some level I believe it's right to devote your life to fighting for these values, though particularly if you're a libertarian you can't say it's morally obligatory to be fighting for these values—but it does feel right, and at some other level more than just being right, it is fun, it's what I want to do.

"I like intellectual combat, polishing arguments, and I also hate people who want to use force against other people, so a part of it is I am motivated to try to fight these people. I wake up listening to NPR every morning and my partner says, 'Why do you want to wake up angry every morning?' In the first place, I need to know what's going on in the world, and in the second place, dammit, I want to know what these people are up to! It's an outrage what they're up to and I don't want them to get away with it. I want to fight." For decades, at the forefront of the mainstream spread of libertarian attitudes, ideas, and notions, David Boaz did.

The post David Boaz, RIP appeared first on Reason.com.

The best Father's Day gift ideas under $50

Buying a good Father’s Day gift can be tough if you’re on a budget, especially if your dad is already on the tech-savvy side. Sometimes they may claim they don’t want anything, other times they might buy the thing you’re looking to gift without telling anyone. If you need help jogging your brain, we’ve rounded up a few of the better gadgets we’ve tested that cost less than $50. From mechanical keyboards and security cameras to luggage trackers and power banks, each has the potential to make your dad’s day-to-day life a little more convenient.

This article originally appeared on Engadget at https://www.engadget.com/best-gifts-for-dad-under-50-113033738.html?src=rss

DAC Panel Could Spark Fireworks

Panels can often become love fests. While a title may sound controversial, it turns out that everyone quickly finds that all the panelists agree on the major points. This is sometimes the result of how the panel was put together – the proposal came from one company, and they wanted to get their customers or clients onto the panel. They are unlikely to ask a major competitor to be part of the event.

These panels can become livelier if they have a moderator who opens up the panel to audience questions, and an audience member decides to throw a spanner in the works. This tends to happen a lot more in the technical panels, because each researcher, who may have taken a different approach to a problem, wants to introduce the audience to their alternative solution. But the pavilion panels tend to be a little more sedate – in part because nobody wants to burn bridges within such a tight industry.

It is quite common for me to moderate a panel each DAC, and this year is no exception. I will be moderating a technical panel whose title is directly confrontational: “Why Is EDA Playing Catchup to Disruptive Technologies Like AI? What Can We Do to Change This?”

The abstract for the panel talks about EDA having a closed mindset, consistently missing disruptive changes by choosing incremental approaches. I know that when I first read it – when I was invited to be the chair for it – I was immediately up in arms.

Twenty years ago, while working at an EDA company, I attempted to drive such disruptive changes in the verification industry. Several times a year, I would go out and talk to our customers and exchange ideas with them about the problems they were facing. We would present ideas about both incremental and disruptive developments we had underway. The message was always the same. “Can we have the incremental changes yesterday? And we don’t have time to think about the longer-term ideas.” It reminded me of the cartoon where a stone-age person is pulling a cart with square wheels and doesn’t have time to listen to the person offering him round ones.

Even so, we did go ahead and develop some of them, and a few of them did achieve an element of success. But to go from first adopters to more mainstream interest often took 10 years. Even today, many of those are still niche tools, and probably money sinks for the companies that developed them. Examples are high-level synthesis and virtual prototypes, the only two pieces of the whole ESL movement that survived. Still, the companies behind them believe that long term, the industry will need them. Many other pieces completely fell by the wayside, such as hardware/software co-design. That, however, may start to resurface thanks to RISC-V.

Many of the tools associated with ESL were direct collaborations between EDA companies and researchers. I established a research collaboration program with the University of Washington that looked at multi-abstraction simulation and protocol checking, and had elements of system synthesis. The only thing that came out of that was hardware/software co-verification. Protocol checking, in the form of VIP, also has become popular, although not directly because of this program. Co-verification had a useful life of about five years before SystemC made the solution obsolete.

Many disruptive innovations actually have come from industry, then were commercialized by EDA companies. SystemC is one example of that. Constrained random verification is another. Portable Stimulus, while still nascent, also was developed within industry. These solutions have an advantage in that they were developed to solve a significant enough problem within the industry that they have broader appeal. There is little that has actually come from academia in recent decades.

The panel title also talks specifically about AI and accuses EDA of being behind already. It is not clear that they are. Thirty years ago, you could go to DAC and see all the new tools and flows that EDA companies were working on. Many of them might be ready within a year or two. But today, EDA companies will make no announcements until at least a few of their customers, that they chose as development partners, have had silicon success.

A typical chip cycle is 18 months. Given that we are beginning to hear about some of these tools today, they may have been in use for a good part of those 18 months. Plus, development of those tools must have started about a year before that. Let’s remember that ChatGPT only came to the fore 18 months ago, and it should be quite obvious why few generative AI products have yet been announced. The fact that there are so many EDA AI announcements would make me think that EDA companies were very quick off the starting blocks.

The panelists are Prith Banerjee – Ansys, who has written a book about disruption; Jan Rabaey – professor in the Graduate School of Electrical Engineering and Computer Sciences at the University of California, Berkeley, who also serves as the CTO of the Systems Technology Co-Optimization division at imec; Samir Mittal, corporate VP for Silicon Systems AI at Micron Technology; James Scapa, founder and CEO of Altair; and Charles Alpert, fellow at Cadence Design Systems.

If you are going to be at DAC and have access to the technical program, this 90-minute panel may be worth your time. Wednesday June 26th at 10:30am. Come ready with your questions because I will certainly be opening this panel up to the audience very quickly. While sparks may fly, please try and keep your cool and be respectful.

The post DAC Panel Could Spark Fireworks appeared first on Semiconductor Engineering.

RISC-V Heralds New Era Of Cooperation

RISC-V is paving the way for open source to become accepted within the hardware community, creating a level of industry collaboration never seen in the past, while revitalizing the connection between academia and industry.

The big question is whether this arrangement is just a placeholder while the industry re-learns how to develop processors, or whether this processor architecture is something very different. In either case, there is a clear and pressing need for more flexible processor architectures, and at least for now, RISC-V has filled a void.

“RISC-V was born out of academia and has had strong collaboration within universities from day one,” says Loren Hobbs, vice president of product and business development at Bluespec. “This collaboration continues today, with many of the most popular open-source RISC-V processors having come from universities. Organizations such as OpenHW Group and CHIPS Alliance serve a central and critical role in driving the collaboration, which is bi-directional between the academic community and industry.”

Collaboration of this type has not existed with the industrial community in the past. “We are learning from each other,” says Florian Wohlrab, CEO at OpenHW. “We are learning best practices for verification. At the same time, we are learning what things to avoid. It is growing where people say, ‘Yes, I really get benefit from sharing ideas.'”

The need for processor flexibility exists within industry as well as academia. “There is a need within the industry for diversification on the processor front,” says Neil Hand, director of marketing at Siemens EDA. “In the past, this led to a fragmented set of companies that couldn’t work together. They didn’t see the value of working together. But RISC-V has a cohesive central organization where anyone who wants to get into the processor space can collaborate. They don’t have to expose their secret sauce, but they get to benefit from each other. A rising tide lifts all boats, and that’s really the situation we’re at with RISC-V.”

Longevity

Whether the industry can build upon this success, or whether it fizzles out over time, remains to be seen. But at least for now, RISC-V’s momentum is growing. “We are at the beginning of a revolution in hardware design,” says OpenHW’s Wohlrab. “We saw the same thing for software when Linux came out 20 or so years ago. No one was really thinking about sharing software or collaboratively developing software. There were some small open-source ventures, but working together on a big project took a long time to develop. Now we are all sharing software, all co-working. But for hardware, we’re just at the beginning of this new concept, and a lot of people need to understand that we can do the same for hardware as we did for software.”

Underlying RISC-V’s success is widespread collaboration. “One of the pillars sustaining the success of RISC-V is customization that works with the ecosystem and leverages a well-defined process,” says Sergio Marchese, vice president of application engineering at SmartDV. “RISC-V vendors face the challenge of showing how their processor customization capabilities serve an application and demonstrating the complete process on real hardware. Without strategic partnerships, RISC-V vendors must walk a much more challenging, time-consuming, and resource-intensive road.”

That framework is what makes it unique. “RISC-V has formed this framework for collaboration, and it fixes everything,” says Siemens’ Hand. “Now, when a university has a really cool idea for memory tagging in a processor design, they don’t have to build the compilers, they don’t have to build the reference platform. They already exist. Maybe a compiler optimization startup has this great idea for handling code optimization. They don’t have to build the rest of the ecosystem. When a processor IP company has this great idea, they can become focused within this bigger picture. That’s the unique nature of it. It’s not just a processor specification.”

Historically, one of the problems associated with open-source hardware was quality, because finding bugs in silicon is expensive. OpenHW is an important piece of the puzzle. “Why should everyone reinvent the wheel by themselves?” asks Wohlrab. “Why can’t we get the basic building blocks, some basic chips, take some design from academia, which has reasonably good quality, and build on them, verify them together. We are verifying with different tools, making sure we get a high coverage, and then everyone can go off and use them in their own chips for mass production, for volume shipment.”

This benefits companies both large and small. “There are several processor vendors that have switched to RISC-V,” says Hand. “Synopsys has moved to RISC-V. Andes has moved to RISC-V. MIPS has moved to RISC-V. Why? Because they can leverage the whole ecosystem. The downside of it is commoditization, which as a customer is really beneficial because you can delay choosing a processor till later in the design flow. Your early decision is to use the Arm ecosystem or RISC-V, and then you can work through it. That creates an interesting set of dynamics. You can start to create new opportunities for companies that develop and deliver IP, because you can benchmark them, swap them in and out, and see which one works. On the flip side, it makes it awful from a lock-in perspective once you’re in that socket.”

Fragmentation

Of course, there will be some friction in the system. “In the early days of RISC-V there was nearly a 1:1 balance between contributors and consumers of the technology,” says Geir Eide, director, product management for Siemens EDA. “Today there are thousands of RISC-V consumers, but only a small percentage of those will be contributors. There is a risk that there will be a disconnect between them. If, for instance, a particular market or regional segment is growing at a higher pace than others, or other market segments and regions are more conservative, they tend to stick to established solutions longer. That increases the risk that it could lead to fragmentation.”

Is that likely to impact development long term? “We do not believe that RISC-V will become regionally concentrated, although there may be regional concentrations of focus within the broad set of implementation choices provided by RISC-V,” says Bluespec’s Hobbs. “A prime example of this is the Barcelona Supercomputer Center, creating a regional focus area for high-performance computing using RISC-V. However, while there may be regional focus areas, this does not mean that the RISC-V standard is, or will become, fragmented. In fact, one of the key tenets of the creation and foundation of RISC-V was preventing fragmentation of the ISA, and it continues to be a key function of RISC-V International.”

China may be a different story. “A lot of companies in China are creating RISC-V cores for internal consumption — for political reasons mostly,” says John Min, vice president of customer service at Arteris. “I think China will go 100% RISC-V for embedded, but it’s a one-way street. They will keep leveraging what the Western companies do and enhance it. China will continue sucking all advancements, such as vectorization, or the special domain-specific acceleration enhancements. They will create their own and make it their own internally, but they will give nothing back.”

Such splits have occurred in the past. “Design languages are the most recent example of that,” says Hand. “There was a regional split, and you had Europe focus on VHDL while America went with Verilog. With RISC-V, there will be that regional split where people will go off and do their things regionally. Europe has focused projects, India has theirs, but they’re still doing it within this framework. It’s this realization that everyone benefits. They’re not doing it to benefit the other people. They’re doing it ultimately to save themselves effort, to save themselves cost, but they realize that by doing it in that framework it is a net benefit to everyone.”

Bi-directionality
An important element is that everyone benefits, and that has to stretch across the academic/commercial boundary. “RISC-V has propelled a new degree of collaboration between academia and commercial organizations,” says Dave Kelf, CEO at Breker. “It’s noticeable that institutions such as Harvey Mudd College in Claremont, California, and ETH in Zurich, Switzerland, to name two, have produced advanced processor designs as a teaching aid, and have collaborated with multiple companies on their verification and design. This has been further advanced by OpenHW Group, which has made these designs accessible to the industry. This bi-directional collaboration benefits the tool providers to further enhance their offerings working on advanced, open devices, while also enabling academia to improve their designs to a commercial quality level. The virtuous circle created is essential if we are to see RISC-V established as a mainstream, industry-wide capability.”

Academia has a lot to offer in hardware advancement. “Researchers in universities are developing innovative new software and hardware to push the limits of RISC-V innovation,” says Dave Miller, head of corporate communications at SiFive. “Many of the RISC-V projects in academia are focused on optimizing performance and energy efficiency for AI workloads, and are open source so the entire ecosystem can benefit. Researchers are also actively contributing to RISC-V working groups to share their knowledge and collaborate with industry participants. These working groups are split evenly between representatives from APAC, Europe, and North America, all working together towards common goals.”

In many cases, industry is willing to fund such projects. “It makes it easy to have research topics that don’t need to boil the ocean,” says Hand. “If you’re a PhD student and you have a great idea, you can go do it. It’s easy for an industry partner to say, ‘I’ll sponsor that. That’s an interesting thing, and I am not required to allocate ridiculous amounts of money into an open-ended project. It’s like I can see the connection of how that research will go into a commercial product later.'”

This feeds back into academia. “The academics have been jumping on board with OpenHW,” says Wohlrab. “By taking their cores and productizing them, they get a chip back that could be shipped in high volume. Then they can do their research on a real commercial product and can see if their idea would fly in real life. They get real numbers and can see real figures for the benefits of a new branch predictor.”
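
The kind of experiment Wohlrab alludes to can be sketched in a few lines. The toy Python model below, using an invented branch trace, compares a static always-taken predictor against a table of 2-bit saturating counters; the accuracies it prints are a simulation-level stand-in for the "real numbers" a researcher would ultimately want from silicon.

# Illustrative only: comparing two branch predictors on a synthetic trace.
# The trace mixes loop back-edges (mostly taken) with a rarely taken,
# data-dependent branch.
import random

random.seed(1)

trace = []
for _ in range(200):
    trace += [(0x400, True)] * 9 + [(0x400, False)]  # loop back-edge, taken 9/10
    trace += [(0x480, random.random() < 0.3)]        # data-dependent, rarely taken

def always_taken(trace):
    # Accuracy of a static predictor is just the fraction of taken branches.
    return sum(outcome for _, outcome in trace) / len(trace)

def two_bit(trace, table_size=64):
    counters = [2] * table_size        # 2-bit counters, start weakly taken
    correct = 0
    for pc, outcome in trace:
        idx = (pc >> 2) % table_size
        correct += ((counters[idx] >= 2) == outcome)
        # Saturating update toward the actual outcome.
        counters[idx] = min(3, counters[idx] + 1) if outcome else max(0, counters[idx] - 1)
    return correct / len(trace)

print(f"always-taken accuracy: {always_taken(trace):.1%}")
print(f"2-bit counter accuracy: {two_bit(trace):.1%}")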

It can also have a long-term benefit for tools. “There are areas where they want to collaborate with us, especially around security,” says Kiran Vittal, executive director for alliances marketing management at Synopsys. “They are building RISC-V based sub-systems using open-source RISC-V processors, and then academia wants to look at not only the AI part, but the security part. There are post-doc students or PhD students looking into using our tools to verify or to implement whatever they’re doing on security.”

That provides an incentive for EDA to offer better tools for use in universities. “Although there has always been collaboration between universities and the industry, where industry provides the universities with access to EDA tools, IP cores, etc., there’s often a bit of a lag,” says Siemens’ Eide. “In many situations (especially outside of the core area of a particular project), universities have access to older versions of the commercial solutions. If, for instance, you look at a new grad’s resume, where in the past you would see references to old tech, now you see a lot of references to relatively sophisticated use of RISC-V.”

Moving Forward
This collaboration needs to keep pushing forward. “We had an initiative to create a standardized interface for accelerators,” says Wohlrab. “RISC-V International standardized how to add custom instructions in the ISA, but there was no standard for the hardware interface. So we built this. It was a cool discussion. There were people from Silicon Labs, people from NXP, people from Thales, plus several startups. They all came together and asked, ‘How can we make it future proof and put the accelerators inside?'”
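
For context, the ISA hook beneath such interfaces is RISC-V’s reserved custom opcode space: the base encoding sets aside major opcodes (custom-0, custom-1, and others) for vendor extensions. The Python sketch below packs an R-type instruction word into the custom-0 major opcode; the funct values and the "MAC" semantics are hypothetical, chosen only to show how the bit fields fit together.

# Illustrative only: assembling a 32-bit R-type instruction into RISC-V's
# reserved custom-0 major opcode (0b0001011). The funct3/funct7 values and
# the accelerator operation are made up.
CUSTOM_0 = 0b0001011

def encode_rtype(funct7: int, rs2: int, rs1: int, funct3: int, rd: int,
                 opcode: int = CUSTOM_0) -> int:
    """Pack the standard R-type fields into one instruction word."""
    assert all(0 <= r < 32 for r in (rs1, rs2, rd))
    return ((funct7 & 0x7F) << 25 | (rs2 & 0x1F) << 20 | (rs1 & 0x1F) << 15 |
            (funct3 & 0x7) << 12 | (rd & 0x1F) << 7 | (opcode & 0x7F))

# A hypothetical multiply-accumulate custom instruction: rd = acc_op(rs1, rs2).
word = encode_rtype(funct7=0b0000001, rs2=11, rs1=10, funct3=0b000, rd=12)
print(f"custom-0 instruction word: 0x{word:08x}")

The standardized part stops at the encoding; what the accelerator does with rs1 and rs2, and how it attaches to the pipeline, is exactly the hardware-interface gap the OpenHW initiative described above set out to close.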

The application space for RISC-V is changing. “The big inflection point is Linux and Android,” says Arteris’ Min. “Android already has some support, but when both Android and Linux are really supported, it will change the mobile apps processor game. The number of designs will proliferate. The number of high-end designs will explode. It will take the whole industry to enable that because RISC-V companies are not big enough to create this by themselves. All the RISC-V companies are partners, because we enable this high-end design at the processor levels.”

That would deepen the software community’s engagement. “An embedded software developer needs to understand the underlying hardware if they want to run Linux on a RISC-V processor that uses custom instructions/accelerators,” says Bluespec’s Hobbs. “To develop complex embedded hardware/software systems, both embedded software developers and embedded hardware developers must possess contextual understanding of the interoperability of hardware and software. The developer must understand how the customized processor is leveraging the custom instructions in hardware for Linux to efficiently manage and execute the accelerated workloads.”

This collaboration could reinvigorate research into EDA, as well. “With AI you can build predictive models,” says Hand. “Could that be used to identify the change effects from making an extension? What does that mean? There’s a cloud of influence — not directly gate-wise, because that immediately explodes — but perhaps based on test suites. ‘I know that something that touches that logic touches this downstream, which touches the rest of the design.’ That’s where AI plays a big role, and it is one of the interesting areas because in verification there are so many unknowns. When AI comes along, any guidance or any visibility that you can give is incredibly powerful. Even if it is not right 100% of the time, that’s okay, as long as it generates false negatives and not false positives.”
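
A toy version of that “cloud of influence” idea can be written down directly: model the design as a dependency graph, walk downstream from the changed block, and rerun only the tests that touch affected blocks. Everything in the sketch below, from the block names to the test map, is hypothetical; a real flow would derive this data from the design hierarchy and coverage databases.

# Illustrative only: change-impact analysis over a made-up block graph.
from collections import deque

# Directed edges: block -> blocks that consume its outputs.
DOWNSTREAM = {
    "addr_decode": ["lsu"],
    "lsu": ["pipeline_ctrl", "dcache"],
    "dcache": ["bus_if"],
    "pipeline_ctrl": ["retire"],
    "bus_if": [],
    "retire": [],
}

# Which test suites exercise which blocks.
TESTS = {
    "smoke_alu": {"retire"},
    "mem_stress": {"lsu", "dcache", "bus_if"},
    "virt_memory": {"addr_decode", "lsu"},
    "interrupt_storm": {"pipeline_ctrl", "retire"},
}

def affected_blocks(changed: str) -> set[str]:
    """Breadth-first walk of everything downstream of the changed block."""
    seen, queue = {changed}, deque([changed])
    while queue:
        for nxt in DOWNSTREAM.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

impacted = affected_blocks("addr_decode")   # e.g., an addressing-mode change
rerun = [t for t, blocks in TESTS.items() if blocks & impacted]
print("impacted blocks:", sorted(impacted))
print("tests to rerun:", rerun)

In this particular graph, a change to the addressing logic reaches every downstream block and pulls in every test, which is precisely Hand’s point about one small change touching nearly 100% of processor verification.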

There is a great opportunity for EDA companies. “We collaborate with many of the open-source providers, with OpenHW group, with ETH in Zurich,” says Synopsys’ Vittal. “We want to promote our solutions when it comes to any processor design and you need standard tools like synthesis, place and route, simulation. But then there are also other kinds of unique solutions because RISC-V is so customizable, you can build your own custom instructions. You need something specific to verify these custom instructions and that’s why the Imperas golden models are important. We also collaborated with Bluespec to develop a verification methodology to take you through functional verification and debug.”

There are still some wrinkles to be worked out for customizations. “RISC-V gives us predictability,” says Hand. “We can create a compliance test suite, give you a processor optimization package if you’re on the implementation side. We can create analytics and testing solutions because we know what it’s going to look like. But for non-standard processors, it is effectively as a service, because everyone’s processor is a little bit different. The reason you see a lot of focus on the verification, from platform architecture exploration all the way through, is because if you change one little thing, such as an addressing mode, it impacts pretty much 100% of your processor verification. You’ve got to retest the whole processor. Most people aren’t set up like an Arm or an Intel with huge processor verification teams and the infrastructure, and so they need automation to do it for them.”

Conclusion
RISC-V has enabled the industry to create a framework for collaboration, which enables everyone to work together for selfish reasons. It is a symbiotic relationship that continues to build, and it is creating a wider sphere of influence over time.

“It’s unique in the modern era of semiconductor,” says Hand. “You have such a wide degree of collaboration, where you have processor manufacturers, the software industry leaders, EDA companies, all working on a common infrastructure.”

Related Reading
RISC-V Micro-Architectural Verification
Verifying a processor is much more than making sure the instructions work, but the industry is building from a limited knowledge base and few dedicated tools.
RISC-V Wants All Your Cores
It is not enough to want to dominate the world of CPUs. RISC-V has every core in its sights, and it’s starting to take steps to get there.

The post RISC-V Heralds New Era Of Cooperation appeared first on Semiconductor Engineering.

We Fought for Deaf People on Probation and Parole in Georgia — and Won

THIS ARTICLE HAS BEEN TRANSLATED INTO AMERICAN SIGN LANGUAGE

Play the video: https://www.youtube.com/watch?v=0g-nVqiBc

[Image: A closeup of an American Sign Language interpreter’s hands as they sign.]

A five-year effort to get equal access for deaf and hard-of-hearing people on parole and probation in Georgia has ended in victory. The American Civil Liberties Union and our legal partners reached a groundbreaking settlement (https://www.aclu.org/documents/settlement-agreement-cobb-v-georgia-department-of-community-supervision) that requires the Georgia agency responsible for supervising people on probation and parole – the Georgia Department of Community Supervision, or “GDCS” – to dismantle the discriminatory hurdles that make it harder for deaf and hard-of-hearing people to avoid prison and live safely in their communities. We hope that other states look to this agreement when determining what is required for their supervision agencies to comply with the Americans with Disabilities Act and Section 504 of the Rehabilitation Act.

For years, our clients lived in constant fear of reincarceration. Supervision officers often held important meetings with people who used American Sign Language (ASL), but failed to provide ASL interpreters or other needed accommodations. They “explained” the rules of supervision to people who could not hear or understand these rules, but who nonetheless risked prison or jail if they didn’t follow them.

Two of our clients had this exact fear realized when ineffective communication resulted in them being incarcerated while the case was ongoing. Supervision officers failed to take disability into account in other ways, too. They knocked on the doors of individuals they knew were deaf, and then accused them of failing to cooperate when they didn’t answer a knock at the door that they couldn’t hear.

Our clients’ heroic and sustained efforts have helped to guarantee equal rights for all deaf and hard-of-hearing people on supervision in Georgia. Starting now, each current and future deaf and hard-of-hearing person on supervision in Georgia will undergo a communication assessment that will allow the state to create a communication plan that considers the range of situations a deaf or hard-of-hearing person may experience while on supervision, and the types of accommodations they may need.

Importantly, GDCS has agreed to provide Deaf interpreters for people who need them. Deaf interpreters are sign language interpreters who are also deaf. A Deaf interpreter will work with a hearing ASL interpreter to provide effective communication, especially for deaf adults who have experienced language deprivation — a neurodevelopmental disorder with negative and long-lasting effects on a deaf adult’s language, cognitive, and socioemotional development. Long periods of incarceration with no ability to communicate with other people who know ASL can compound the effects of language deprivation. Hearing sign language interpreters alone are typically unable to bridge the communication gap between deaf adults with language deprivation and their supervision officers. This communication gap can often lead to serious and preventable misunderstandings between the deaf person and the supervision officer that a Deaf interpreter could solve.

For example, in one instance a probation officer relied on a single hearing interpreter – present on a computer – to explain a form with confusing conditions to a client. The client struggled to understand the interpreter and asked to take a photo of the form so he could ask the ACLU’s legal team to provide a Deaf interpreter to translate the form in a way he understood. Had the ACLU not stepped in to secure a Deaf interpreter, our client would not have fully understood what the form said, nor would he have been able to ask several clarifying questions, and he would have risked reincarceration. This settlement ensures that any use of video remote interpreting, known as VRI, is clear, not relegated to a small cell phone screen, and that supervisees actually understand the directions being given.

GDCS will also now provide better accommodations for deaf or hard-of-hearing clients who cannot read and write English. Historically, the agency provided critical information about supervision only in writing. With this settlement, a lack of fluency in reading or writing English will no longer be a barrier to successfully completing supervision. If a deaf or hard-of-hearing person cannot understand written documents due to their disability, GDCS has agreed to use appropriate accommodations and provide the written information in another accessible format. This will help prevent future incidents of confusion when people receive documents with important instructions that they do not understand. We have also produced ASL and plain language translations of the new ADA Policy so that signers and those with limited literacy can access the policy at any time.

Many people on supervision in Georgia are required to complete programs or classes as a condition of their supervision, but in the past the sponsors of many of these programs have refused to provide ASL interpreters and other necessary accommodations to our clients. GDCS will now require that the providers of any classes or programs required for people on supervision comply with federal disability laws by providing necessary accommodations, such as interpreters, for effective communication.

While we’ve won this fight in Georgia, the work is not yet done. Every parole and probation department in the country has an obligation under federal disability laws to provide not only effective communication to deaf and hard-of-hearing people, but also any reasonable accommodations that people with disabilities need to have an equal opportunity to successfully complete supervision. In reality, probation and parole departments regularly fail to determine whether the people with disabilities they supervise need accommodations, let alone provide those accommodations.

Right now, we’re challenging this failure in Washington, D.C., where people with mental health disabilities are nearly twice as likely to face reincarceration or other punishment for “technical violations,” or minor rule violations like missing an appointment with a supervision officer. And in Georgia, we now begin a four-year period of monitoring the state’s compliance with the agreement. As part of that monitoring, GDCS will provide us with documentation to show that they are complying with the agreement and providing effective communication. If they violate it, we’ll see them back in court.

Will Domain-Specific ICs Become Ubiquitous?

Questions are surfacing across all types of design, from small microcontrollers to leading-edge chips, about whether domain-specific design will become ubiquitous, or whether it will fall into the historic pattern of customization first, followed by lower-cost, general-purpose components.

Custom hardware always has been a double-edged sword. It can provide a competitive edge for chipmakers, but often requires more time to design, verify, and manufacture a chip, which can sometimes cost a market window. In addition, it’s often too expensive for all but the most price-resilient applications. This is a well-understood equation at the leading edge of design, particularly where new technologies such as generative AI are involved.

But with planar scaling coming to an end, and with more features tailored to specific domains, the chip industry is struggling to figure out whether the business/technical equation is undergoing a fundamental and more permanent change. This is muddied further by the fact that some 30% to 35% of all design tools today are being sold to large systems companies for chips that will never be sold commercially. In those applications, the collective savings from improved performance per watt may dwarf the cost of designing, verifying, and manufacturing a highly optimized multi-chip/multi-chiplet package across a large data center, leaving the debate about custom vs. general-purpose more uncertain than ever.

“If you go high enough in the engineering organization, you’re going to find that what people really want to do is a software-defined whatever it is,” says Russell Klein, program director for high-level synthesis at Siemens EDA. “What they really want to do is buy off-the-shelf hardware, put some software on it, make that their value-add, and ship that. That paradigm is breaking down in a number of domains. It is breaking down where we need either extremely high performance, or we need extreme efficiency. If we need higher performance than we can get from that off-the-shelf system, or we need greater efficiency, we need the battery to last longer, or we just can’t burn as much power, then we’ve got to start customizing the hardware.”

Even the selection of processing units can make a solution custom. “Domain-specific computing is already ubiquitous,” says Dave Fick, CEO and cofounder of Mythic. “Modern computers, whether in a laptop, phone, security camera, or in farm equipment, consist of a mix of hardware blocks co-optimized with software. For instance, it is common for a computer to have video encode or decode hardware units to allow a system to connect to a camera efficiently. It is common to have accelerators for encryption so that we can safely communicate. Each of these is co-optimized with software algorithms to make commonly used functions highly efficient and flexible.”

Steve Roddy, chief marketing officer at Quadric, agrees. “Heterogeneous processing in SoCs has been de rigueur in the vast majority of consumer applications for the past two decades or more.  SoCs for mobile phones, tablets, televisions, and automotive applications have long been required to meet a grueling combination of high-performance plus low-cost requirements, which has led to the proliferation of function-specific processors found in those systems today.  Even low-cost SoCs for mobile phones today have CPUs for running Android, complex GPUs to paint the display screen, audio DSPs for offloading audio playback in a low-power mode, video DSPs paired with NPUs in the camera subsystem to improve image capture (stabilization, filters, enhancement), baseband DSPs — often with attached NPUs — for high speed communications channel processing in the Wi-Fi and 5G subsystems, sensor hub fusion DSPs, and even power-management processors that maximize battery life.”

It helps to separate what you call general-purpose and what is application-specific. “There is so much benefit to be had from running your software on dedicated hardware, what we call bespoke silicon, because it gives you an advantage over your competitors,” says Marc Swinnen, director of product marketing in Ansys’ Semiconductor Division. “Your software runs faster, lower power, and is designed to run specifically what you want to run. It’s hard for a competitor with off-the-shelf hardware to compete with you. Silicon has become so central to the business value, the business model, of many companies that it has become important to have that optimized.”

There is a balance, however. “If there is any cost justification in terms of return on investment and deployment costs, power costs, thermal costs, cooling costs, then it always makes sense to build a custom ASIC,” says Sharad Chole, chief scientist and co-founder of Expedera. “We saw that for cryptocurrency, we see that right now for AI. We saw that for edge computing, which requires extremely ultra-low power sensors and ultra-low power processes. But there also has been a push for general-purpose computing hardware, because then you can easily make the applications more abstract and scalable.”

Part of the seeming conflict is due to the scope of specificity. “When you look at the architecture, it’s really the scope that determines the application specificity,” says Frank Schirrmeister, vice president of solutions and business development at Arteris. “Domain-specific computing is ubiquitous now. The important part is the constant moving up of the domain specificity to something more complex — from the original IP, to configurable IP, to subsystems that are configurable.”

In the past, it has been driven more by economics. “There’s an ebb and a flow to it,” says Paul Karazuba, vice president of marketing at Expedera. “There’s an ebb and a flow to putting everything into a processor. There’s an ebb and a flow to having co-processors, augmenting functions that are inside of that main processor. It’s a natural evolution of pretty much everything. It may not necessarily be cheaper to design your own silicon, but it may be more expensive in the long run to not design your own silicon.”

An attempt to formalize that ebb and flow was made by Tsugio Makimoto in the early 1990s, while he was at Hitachi (he later served as Sony’s CTO). He observed that the electronics industry cycled between custom solutions and programmable ones approximately every 10 years. What has changed is that most custom chips built since the time of his observation have contained highly programmable standard components.

Technology drivers
Today, it would appear that technical issues will decide this. “The industry has managed to work around power issues and push up the thermal envelope beyond points I personally thought were going to be reasonable, or feasible,” says Elad Alon, co-founder and CEO of Blue Cheetah. “We’re hitting that power limit, and when you hit the power limit it drives you toward customization wherever you can do it. But obviously, there is tension between flexibility, scalability, and applicability to the broadest market possible. This is seen in the fast pace of innovation in the AI software world, where tomorrow there could be an entirely different algorithm, and that throws out almost all the customizations one may have done.”

The slowing of Moore’s Law will have a fundamental influence on the balance point. “There have been a number of bespoke silicon companies in the past that were successful for a short period of time, but then failed,” says Ansys’ Swinnen. “They had made some kind of advance, be it architectural or addressing a new market need, but then the general-purpose chips caught up. That is because there’s so much investment in them, and there’s so many people using them, there’s an entire army of people advancing, versus your company, just your team, that’s advancing your bespoke solution. Inevitably, sooner or later, they bypass you and the general-purpose hardware just gets better than the specific one. Right now, the pendulum has swung toward custom solutions being the winner.”

However, general-purpose processors do not automatically advance if companies don’t keep up with adoption of the latest nodes, and that leads to even more opportunities. “When adding accelerators to a general-purpose processor starts to break down, because you want to go faster or become more efficient, you start to create truly customized implementations,” says Siemens’ Klein. “That’s where high-level synthesis starts to become really interesting, because you’ve got that software-defined implementation as your starting point. We can take it through high-level synthesis (HLS) and build an accelerator that’s going to do that one specific thing. We could leave a bunch of registers to define its behavior, or we can just hard code everything. The less general that system is, the more specific it is, usually the higher performance and the greater efficiency that we’re going to take away from it. And it almost always is going to be able to beat a general-purpose accelerator or certainly a general-purpose processor in terms of both performance and efficiency.”
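
Klein’s registers-versus-hard-coding tradeoff can be modeled abstractly. The Python sketch below is behavioral only (an actual HLS flow would consume C/C++, not Python); it contrasts a filter whose taps come from runtime-programmable configuration registers with one whose constants are baked in, which is where the area and power savings of a fully specialized accelerator come from.

# Illustrative only: a behavioral model of configurable vs. hard-coded
# accelerator behavior. The "registers" stand in for memory-mapped
# configuration registers in the general version.
def fir_configurable(samples, coeff_regs):
    """General accelerator: taps come from runtime-programmable registers."""
    n = len(coeff_regs)
    return [sum(c * samples[i - j] for j, c in enumerate(coeff_regs))
            for i in range(n - 1, len(samples))]

def fir_hardcoded(samples):
    """Specialized accelerator: 3-tap filter with constants baked in.
    In hardware this removes coefficient storage and generalized
    multipliers, shrinking area and power at the cost of flexibility."""
    return [samples[i] + 2 * samples[i - 1] + samples[i - 2]
            for i in range(2, len(samples))]

data = [1, 4, 9, 16, 25, 36]
assert fir_configurable(data, [1, 2, 1]) == fir_hardcoded(data)
print(fir_hardcoded(data))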

At the same time, IP has become massively configurable. “There used to be IP as the building blocks,” says Arteris’ Schirrmeister. “Since then, the industry has produced much larger and more complex IP that takes on the role of sub-systems, and that’s where scope comes in. We have seen Arm with what they call the compute sub-systems (CSS), which are an integration and then hardened. People care about the chip as a whole, and then the chip and the system context with all that software. Application specificity has become ubiquitous in the IP space. You either build hard cores, you use a configurable core, or you use high-level synthesis. All of them are, by definition, application-specific, and the configurability plays in there.”

Put in perspective, there is more than one way to build a device, and an increasing number of options for getting it done. “There’s a really large market for specialized computing around some algorithm,” says Klein. “IP for that is going to be both in the form of discrete chips, as well as IP that could be built into something. Ultimately, that has to become silicon. It’s got to be hardened to some degree. They can set some parameters and bake it into somebody’s design. Consider an Arm processor. I can configure how many CPUs I want, I can configure how big I want the caches, and then I can go bake that into a specific implementation. That’s going to be the thing that I build, and it’s going to be more targeted. It will have better efficiency and a better cost profile and a better power profile for the thing that I’m doing. Somebody else can take it and configure it a little bit differently. And to the degree that the IP works, that’s a great solution. But there will always be algorithms that don’t have a big enough market for IP to address. And that’s where you go in and do the extreme customization.”

Chiplets
Some have questioned if the emerging chiplet industry will reverse this trend. “We will continue to see systems composed of many hardware accelerator blocks, and advanced silicon integration technologies (i.e., 3D stacking and chiplets) will make that even easier,” says Mythic’s Fick. “There are many companies working on open standards for chiplets, enabling communication bandwidth and energy efficiency that is an order of magnitude greater than what can be built on a PCB. Perhaps soon, the advanced system-in-package will overtake the PCB as the way systems are designed.”

Chiplets are not likely to be highly configurable. “Configuration in the chiplet world might become just a function of switching off things you don’t need,” says Schirrmeister. “Configuration really means that you do not use certain things. You don’t get your money back for those items. It’s all basically applying math and predicting what your volumes are going to be. If it’s an incremental cost that has one more block on it to support another interface, or making the block the Ethernet block with time triggered stuff in it for automotive, that gives you an incremental effort of X. Now, you have to basically estimate whether it also gives you a multiple of that incremental effort as incremental profit. It works out this way because chips just become very configurable. Chiplets are just going in the direction of finding the balance of more generic usage so that you can apply them in more chiplet designs.”
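
The “applying math” Schirrmeister describes boils down to a break-even calculation. The short sketch below uses invented numbers to ask whether the incremental effort of one more block on a chiplet is repaid by the sockets that block unlocks.

# Illustrative only: back-of-the-envelope chiplet economics with made-up
# numbers. Does an extra interface block pay for itself?
def worth_adding(nre_cost, unit_cost_delta, extra_units, margin_per_unit):
    """True if incremental profit from new sockets exceeds incremental effort."""
    incremental_cost = nre_cost + unit_cost_delta * extra_units
    incremental_profit = margin_per_unit * extra_units
    return incremental_profit > incremental_cost, incremental_profit - incremental_cost

ok, net = worth_adding(
    nre_cost=2_000_000,        # design/verify the extra automotive Ethernet block
    unit_cost_delta=0.15,      # added silicon cost per part
    extra_units=3_000_000,     # sockets the block unlocks
    margin_per_unit=1.20,      # margin on each of those parts
)
print(f"add the block: {ok}, net impact ${net:,.0f}")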

The chiplet market is far from certain today. “The promise of chiplets is that you use only the function that you want from the supplier that you want, in the right node, at the right location,” says Expedera’s Karazuba. “The idea of specialization and chiplets are at arm’s length. They’re actually together, but chiplets have a long way to go. There’s still not that universal agreement of the different things around a chiplet that have to be in order to make the product truly mass market.”

While chiplets have been proven to work, nearly all of the chiplets in use today are proprietary. “To build a viable [commercial] chiplet company, you have to be going after a broad enough market, large enough from a dollar perspective, then you can make all the investment, have success and get everything back accordingly,” says Blue Cheetah’s Alon. “There’s a similar tension where people would like to build a general-purpose chiplet that can be used anywhere, by anyone. That is the plug-and-play discussion, but you could finish up with something that becomes so general-purpose, with so much overhead, that it’s just not attractive in any particular market. In the chiplet case, for technical reasons, it might not actually really work that way at all. You might try to build it for general purpose, and it turns out later that it doesn’t plug into particular sockets that are of interest.”

The economics of chiplet viability have not yet been defined. “The thing about chiplets is they can be small,” says Klein. “Being small means that we don’t need as big a market for them as we would for a very large chip. We can also build them on different technologies. We can have some that are on older technologies, where transistors are cheaper, and we can combine those with other chiplets that might be on leading-edge nodes where we could have general-purpose CPUs or NPU accelerators. There’s a mix-and-match, and we can do chiplets smaller than we can general-purpose chips. We can do smaller runs of them. We can take that IP and customize it for a particular market vertical and create some chiplets for that, change the configuration a bit, and do another run for something else. There’s a level of customization that can be deployed and supported by the market that’s a little bit more than we’ve seen in full-size chips, where the entire thing has to be built into one package.”

Conclusion
What it means for a design to be general-purpose or custom is changing. All designs will contain some of each. Some companies will develop novel architectures using general-purpose processors, and these will be better than a fully general-purpose solution. Others will create highly customized hardware for some functions that are known to be stable, and general purpose for things that are likely to change. One thing has never changed, however. A company is not likely to add more customization than necessary to satisfy the needs of the market they are targeting.

Further Reading
Challenges With Chiplets And Power Delivery
Benefits and challenges in heterogeneous integration.
Chiplets: 2023 (EBook)
What chiplets are, what they are being used for today, and what they will be used for in the future.

The post Will Domain-Specific ICs Become Ubiquitous? appeared first on Semiconductor Engineering.

Review: Fargo's Self-Identified Libertarian Is No Libertarian

Jon Hamm as Roy Tillman in 'Fargo' | <em>Fargo</em>/FX

Season five of showrunner Noah Hawley's TV version of Fargo tells a violence-filled story exploring domestic abuse, PTSD, the concept of debt (on multiple levels), and the purpose and efficacy of the institutions of marriage and police.

Its villain is designed to cause discomfort for libertarians: Sheriff Roy Tillman (Jon Hamm), who self-identifies as a libertarian and a constitutionalist, and does seem to adhere to a certain peculiar right-wing belief in the county sheriff as the main source of authority. The only libertarianish qualities he evinces are a contempt for the FBI and the ability to recite a few silly, pointless laws. But the writers seem to want his stated ideology to add spice to the audience's dislike of him for being an abusive, murderous, and corrupt bully laundering his own rage and sin through a twisted vision of God.

In one scene, Tillman says he'd rather see orphans fight each other for sport than help them, and another character accuses him of being like a baby—crying for freedom with no responsibility. The whole thing is reminiscent of when an old college pal thinks he is totally crushing libertarianism with a masterful Facebook post.

If Tillman becomes smart quality TV fans' go-to image of libertarians, replacing the weirdly obsessed but well-meaning Ron Swanson of Parks and Recreation, it will be a shame. But hopefully a smart viewer will know, when Tillman calls on the spirit of western resisters of federal power such as Ammon Bundy and LaVoy Finicum, that it's no part of any proven public record that either man ever did anything a hundredth as evil as Tillman does in pretty much every episode.

The post Review: <i>Fargo</i>'s Self-Identified Libertarian Is No Libertarian appeared first on Reason.com.

Kickflip the competition with Destiny 2’s brand-new trick-worthy vehicle, live today

The spirit of competition in Destiny 2 is alive and well with today’s launch of the Guardian Games All-Stars 2024 event. The annual competitive event pits Warlocks, Hunters, and Titans against one another to try and take the top step of the Guardian class podium. Last year, Titans took the win, their second in Guardian Games history, and this year’s event features new twists to spice up the competition. Additionally, Destiny 2: The Witch Queen expansion is available today as a PlayStation Plus Monthly Game and we dive deeper into the rad new Skimmer vehicle.

A new scoring system will be in place to keep everyone on their toes, including new Diamond-tier Medallions (the most valuable offered yet; earning these will help your class’s standings in a significant way, so these Medallions are limited to three per week). In addition, Guardians will be able to earn daily glow effects that come from a variety of sources, including individual achievement (such as being the “best in Tower” Guardian or showing your performance in modes like Supremacy or Competitive Nightfalls) as well as group achievements (like being part of that day’s winning class). 

Focus Activities are new limited-time boosts to Guardian Games activities that will grant bonus Medallions for the winning class. These events will only be open for a few hours each day, and you’ll be able to earn rewards packages by completing them; in fact, the class that dominates a particular Focus Activity will also earn a special champions rewards package that will earn your class a bunch of Medallions for the day’s competition.   

By taking part in Guardian Games events, every player can also unlock sweet rewards, including new Legendary weapons such as Hullabaloo (a new Compressed Wave Frame Heavy Grenade Launcher), a new Exotic Ghost shell, and even a new form of transportation: the Skimmer.

Gettin’ tricky with it

Perhaps the most exciting reward coming for Guardian Games All-Stars, the Skimmer is the first new vehicle option offered in Destiny since the Sparrow, and it’s available to all Guardians at no additional cost (simply pick it up from Eva Levante in the Tower after completing the first Guardian Games All-Stars quest and banking your first Medallion). All players will be able to use the Skimmer during Guardian Games All-Stars, and Guardians who complete a one-step quest during the event will be able to keep the Skimmer permanently once the Games have ended.

A sci-fi take on skateboarding and snowboarding, the Skimmer is a bold new expression of Guardian mobility in Destiny 2, allowing players to perform sweet tricks and sick grinds on their way to their next destination.

“The very beginning of the Skimmer’s development started with a simple question: What is the Guardian version of a Cloud Strider’s skyboard?” said Bungie senior design lead Ben Wommack. “Imagine a Guardian saw a Cloud Strider zipping around in the air and thought, ‘Yeah, that’s neat, but how about this!’ and pulled out something that’s cooler, more stylish, and most importantly, fun to use.”

“We wanted to make sure that we were not doing just a different Sparrow skin with a Guardian standing on top but something that really fulfills new fantasies,” said Bungie staff technical animator Matt Kelly. “The original Sparrow was a blend of a jet ski and a superbike. For the Skimmer, we took inspiration from surfing, snowboarding, and skateboarding, strapped some boosters on it, and blended that up with some space magic.

“In the time between original Destiny and now, player movement abilities when traversing spaces on foot has dramatically changed and expanded, and the gameplay space has held up well. This allowed us to be comfortable pushing the edges of what we could do by adding new verbs into vehicles as well.” 

Skimmers are just as zippy as their Sparrow counterparts, and they are capable of a host of new tricks and grinds that will make travel time even more fun. Guardians will be able to pull off four base tricks that map to the emote buttons: a tre flip, a Tamedog, a 360 grab, and a 360 spin where the Guardian leaves the board. There are also four grinds: a 5-0, a boardslide, a backside darkslide, and a crooked nosegrind. In addition, the dev team has also added a variety of grounded turn dodges inspired by carving while surfing as well as some air dodges with grab spins to avoid obstacles (or slam into an unlucky enemy that gets in your way). Finally, there’s a vehicle jump ability that will launch your Guardian into the air, complete with a nice ollie grab as a finishing touch.

“We are lucky enough that we have a massive group of excited skaters, snowboarders, and surfers who were extremely excited about pointing out their favorite boarders and tricks to implement,” said Kelly. 

“We also had a bunch of animators who were able to dive in and really put the polish on some amazing and difficult-to-animate moves. Inspiration was everywhere.”

One final thought to whet your Skimmer appetite. During development, Bungie testers found some creative uses for Skimmers when combined with Strand’s grapple ability. Bungie test engineer Thomas Duda explains:

“Without any design specifically going towards this interaction, we discovered during a playtest that players can grapple the hoverboard while another player is using it. A player with Strand grapple can hook on and get pulled along with the grind, and they’ll be taken wherever the grinding player steers. If you’re in a firefight that’s not going too well, a friend can glide over you, and you can skyhook your way out of there, like the scene in The Dark Knight where Batman extracts from the high rise in Hong Kong.”

Test your medal

As if competing for the glory of your chosen Guardian class isn’t enough, we also have an additional incentive for Guardians who want to embody the spirit of the Guardian Games All-Stars event. Players who complete the Gold Event Challenge by March 26, 2024, at 9:59 AM PT, will earn the Bungie Rewards offer to purchase a physical 2024 Guardian Games All-Stars Medal through the Bungie Store.

The Witch Queen arrives with PlayStation Plus

PlayStation Plus members, the threat of the Lucent Hive continues to loom, and we’re putting the call out to you all. Starting today, and for a limited time, Destiny 2: The Witch Queen is available to all PlayStation Plus members as a Monthly Game at no additional charge. Now is your chance to own one of the greatest chapters in Destiny history, as you take on Savathûn herself in a desperate bid to take back the Light. Experience the thrilling campaign from the start, explore The Witch Queen’s mysterious Throne World, and test your skills against Legendary mode. All of that and more, and it’s yours to keep once you download it.

If you’re a PlayStation Plus owner new to Destiny 2 or looking for the ideal time to jump back in, don’t miss out on this opportunity to experience one of the best adventures Destiny 2 has to offer. Head over to the PlayStation Store, grab The Witch Queen expansion, and prepare for the fight ahead.  

NVIDIA Reveals Gaming, Creating, Generative AI, Robotics Innovations at CES

The AI revolution returned to where it started this week, putting powerful new tools into the hands of gamers and content creators.

Generative AI models that will bring lifelike characters to games and applications and new GPUs for gamers and creators were among the highlights of a news-packed address Monday ahead of this week’s CES trade show in Las Vegas.

“Today, NVIDIA is at the center of the latest technology transformation: generative AI,” said Jeff Fisher, senior vice president for GeForce at NVIDIA, who was joined by leaders across the company to introduce products and partnerships across gaming, content creation, and robotics.

A Launching Pad for Generative AI

As AI shifts into the mainstream, Fisher said NVIDIA’s RTX GPUs, with more than 100 million units shipped, are pivotal in the burgeoning field of generative AI, exemplified by innovations like ChatGPT and Stable Diffusion.

In October, NVIDIA released the TensorRT-LLM library for Windows, accelerating large language models, or LLMs, like Llama 2 and Mistral up to 5x on RTX PCs.

And with our new Chat with RTX playground, releasing later this month, enthusiasts can connect an RTX-accelerated LLM to their own data, from locally stored documents to YouTube videos, using retrieval-augmented generation, or RAG, a technique for enhancing the accuracy and reliability of generative AI models.
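
RAG itself is a simple pattern: retrieve the snippets of your own data most relevant to a question, then ground the model’s prompt in them. The sketch below is a generic, minimal illustration with a toy word-overlap retriever and a stubbed model call; it is not a description of Chat with RTX’s actual implementation, which uses proper embeddings and TensorRT-LLM acceleration.

# Illustrative only: the retrieval-augmented generation pattern.
# The retriever and the run_llm stub are deliberately simplistic.
def score(query: str, doc: str) -> float:
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / (len(q) or 1)

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Rank documents by word overlap with the query; keep the top k.
    return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

def answer(query: str, corpus: list[str]) -> str:
    context = "\n".join(retrieve(query, corpus))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return run_llm(prompt)  # stand-in for a locally accelerated LLM call

def run_llm(prompt: str) -> str:
    return f"[LLM response grounded in {prompt.count(chr(10))} lines of context]"

notes = [
    "Meeting notes: the demo is scheduled for Friday at 10am.",
    "Recipe: two eggs, flour, and a pinch of salt.",
    "The demo build lives on the shared drive under /builds/v7.",
]
print(answer("When is the demo and where is the build?", notes))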

Fisher also introduced TensorRT acceleration for Stable Diffusion XL and SDXL Turbo in the popular Automatic1111 text-to-image app, providing up to a 60% boost in performance.

NVIDIA Avatar Cloud Engine Microservices Debut With Generative AI Models for Digital Avatars

NVIDIA ACE is a technology platform that brings digital avatars to life with generative AI. ACE AI models are designed to run in the cloud or locally on the PC.

In an ACE demo featuring Convai’s new technologies, NVIDIA’s Senior Product Manager Seth Schneider showed how it works.

First, a player’s voice input is passed to NVIDIA’s automatic speech recognition model, which translates speech to text. Then, the text is put into an LLM to generate the character’s response.

After that, the text response is vocalized using a text-to-speech model, which is passed to an animation model to create a realistic lip sync. Finally, the dynamic character is rendered into the game scene.
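
A hypothetical sketch of that four-stage loop may help make the data flow concrete. Every function below is a stub standing in for the real models (Riva ASR, an LLM, a TTS voice, Audio2Face); none of the names or signatures reflect an actual NVIDIA API:

```python
# Stub pipeline mirroring the four stages described above:
# speech -> text (ASR), text -> reply (LLM), reply -> audio (TTS),
# audio -> lip-synced animation. All stand-ins, no real APIs.
from dataclasses import dataclass

@dataclass
class AnimatedResponse:
    audio: bytes        # synthesized voice
    visemes: list[str]  # lip-sync animation keys

def speech_to_text(voice_input: bytes) -> str:
    return "Where can I find the blacksmith?"   # ASR stand-in

def generate_reply(text: str, persona: str) -> str:
    return f"({persona}) Head east, past the market."  # LLM stand-in

def text_to_speech(reply: str) -> bytes:
    return reply.encode()                        # TTS stand-in

def animate(audio: bytes) -> AnimatedResponse:
    return AnimatedResponse(audio, ["AA", "EH", "M"])  # Audio2Face stand-in

def respond(voice_input: bytes, persona: str = "innkeeper") -> AnimatedResponse:
    """Chain the stages; the game renderer consumes the result."""
    return animate(text_to_speech(generate_reply(speech_to_text(voice_input),
                                                 persona)))

print(respond(b"raw-microphone-audio"))
```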

At CES, NVIDIA is announcing ACE Production Microservices for NVIDIA Audio2Face and NVIDIA Riva Automatic Speech Recognition. Available now, each model can be incorporated by developers individually into their pipelines.

NVIDIA is also announcing that game and interactive avatar developers are pioneering ways ACE and generative AI technologies can be used to transform interactions between players and non-playable characters in games and applications. Developers embracing ACE include Convai, Charisma.AI, Inworld, miHoYo, NetEase Games, Ourpalm, Tencent, Ubisoft and UneeQ.

Getty Images Releases Generative AI by iStock and AI Image Generation Tools Powered by NVIDIA Picasso

Generative AI empowers designers and marketers to create concept imagery, social media content and more. Today, iStock by Getty Images is releasing a genAI service built on NVIDIA Picasso, an AI foundry for visual design, Fisher announced.

The iStock service allows anyone to create 4K imagery from text using an AI model trained on Getty Images’ extensive catalog of licensed, commercially safe creative content. New editing application programming interfaces that give customers powerful control over their generated images are also coming soon.

The generative AI service is available today at istock.com, with advanced editing features releasing via API.

NVIDIA Introduces GeForce RTX 40 SUPER Series

Fisher announced a new series of GeForce RTX 40 SUPER GPUs with more gaming and generative AI performance.

Fisher said that the GeForce RTX 4080 SUPER can power fully ray-traced games at 4K. It’s 1.4x faster than the RTX 3080 Ti without frame gen in the most graphically intensive games. With 836 AI TOPS, NVIDIA DLSS Frame Generation delivers an extra performance boost, making the RTX 4080 SUPER twice as fast as an RTX 3080 Ti.

Creators can generate video with Stable Video Diffusion 1.5x faster and images with Stable Diffusion XL 1.7x faster. The RTX 4080 SUPER features more cores and faster memory, giving it a performance edge at a great new price of $999. It will be available starting Jan. 31.

Next up is the RTX 4070 Ti SUPER. NVIDIA has added more cores and increased the frame buffer to 16GB and the memory bus to 256 bits. It’s 1.6x faster than a 3070 Ti and 2.5x faster with DLSS 3, Fisher said. The RTX 4070 Ti SUPER will be available starting Jan. 24 for $799.

Fisher also introduced the RTX 4070 SUPER. NVIDIA has added 20% more cores, making it faster than the RTX 3090 while using a fraction of the power. And with DLSS 3, it’s 1.5x faster in the most demanding games. It will be available for $599 starting Jan. 17.

NVIDIA RTX Remix Open Beta Launches This Month

There are over 10 billion game mods downloaded each year. With RTX Remix, modders can remaster classic games with full ray tracing, DLSS, NVIDIA Reflex and generative AI texture tools that transform low-resolution textures into 4K, physically accurate materials. The RTX Remix app will be released in open beta on Jan. 22.

RTX Remix has already delivered stunning remasters in NVIDIA’s Portal with RTX and the modder-made Portal: Prelude RTX. Now, Orbifold Studios is using RTX Remix to develop Half-Life 2 RTX: An RTX Remix Project, a community remaster of one of the highest-rated games of all time.

Check out this new Half-Life 2 RTX gameplay trailer:

Twitch and NVIDIA to Release Multi-Encode Livestreaming

Twitch is one of the most popular platforms for content creators, with over 7 million streamers going live each month to 35 million daily viewers. Fisher explained that these viewers are on all kinds of devices and internet services.

Yet many Twitch streamers are limited to broadcasting at a single resolution and quality level. As a result, they must broadcast at lower quality to reach more viewers.

To address this, Twitch, OBS and NVIDIA announced Enhanced Broadcasting, supported by all RTX GPUs. This new feature allows streamers to transmit up to three concurrent streams to Twitch at different resolutions and quality levels, so each viewer gets the optimal experience.
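
A toy sketch shows why multiple concurrent encodes matter. The bitrate ladder below is hypothetical, not Twitch's actual configuration; the point is only that each viewer can be matched to the best rendition their connection sustains:

```python
# Three hypothetical rungs: (height, megabits per second the viewer needs).
RUNGS = [(1080, 6.0), (720, 3.0), (480, 1.5)]

def pick_rendition(viewer_mbps: float) -> int:
    """Return the tallest rung the viewer's bandwidth can sustain,
    falling back to the lowest rung otherwise."""
    for height, required_mbps in RUNGS:
        if viewer_mbps >= required_mbps:
            return height
    return RUNGS[-1][0]

assert pick_rendition(8.0) == 1080  # fast connection gets full quality
assert pick_rendition(2.0) == 480   # slow connection still gets a stream
```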

Beta signups start today, and the feature will go live later this month. Twitch will also experiment with 4K and AV1 on GeForce RTX 40 Series GPUs to deliver even better quality and higher-resolution streaming.

‘New Wave’ of AI-Ready RTX Laptops

RTX is the fastest-growing laptop platform, having grown 5x in the last four years. More than 50 million RTX laptops are in the hands of gamers and creators across the globe.

More’s coming. Fisher announced “a new wave” of RTX laptops launching from every major manufacturer. “Thanks to powerful RT and Tensor Cores, every RTX laptop is AI-ready for the best gaming and AI experiences,” Fisher said.

With an installed base of 100 million GPUs and 500 RTX games and apps, GeForce RTX is the world’s largest platform for gamers, creators and, now, generative AI.

Activision and Blizzard Games Embrace RTX

More than 500 games and apps now take advantage of NVIDIA RTX technology, including Alan Wake 2, which won three awards at this year’s Game Awards, NVIDIA Senior Consumer Marketing Manager Kristina Bartz said.

It’s a list that keeps growing with 14 new RTX titles announced at CES.

Horizon Forbidden West, the critically acclaimed sequel to Horizon Zero Dawn, will come to PC early this year with the Burning Shores expansion, accelerated by DLSS 3.

Pax Dei is a social sandbox massively multiplayer online game inspired by the legends of the medieval era. Developed by Mainframe Industries with veterans from CCP Games, Blizzard and Remedy Entertainment, Pax Dei will launch in early access on PC with AI-accelerated DLSS 3 this spring.

Last summer, Diablo IV launched with DLSS 3 and immediately became Blizzard’s fastest-selling game. RTX ray tracing will now be coming to Diablo IV in March.

Day Passes and G-SYNC Technology Coming to GeForce NOW

NVIDIA’s partnership with Activision also extends to the cloud with GeForce NOW, Bartz said. In November, NVIDIA welcomed the first Activision and Blizzard game, Call of Duty: Modern Warfare 3. Diablo IV and Overwatch 2 are coming soon.

GeForce NOW will get Day Pass membership options starting in February. Priority and Ultimate Day Passes will give gamers a full day of gaming with the fastest access to servers and all the same benefits as monthly members, including NVIDIA DLSS 3.5 and NVIDIA Reflex for Ultimate Day Pass purchasers.

NVIDIA also announced that Cloud G-SYNC technology is coming to GeForce NOW. It varies the display refresh rate to match the frame rate on G-SYNC monitors, giving members the smoothest, tear-free gaming experience from the cloud.

Generative AI Powers Smarter Robots With NVIDIA Isaac

Closing out the special address, NVIDIA Vice President of Robotics and Edge Computing Deepu Talla shared how the infusion of generative AI into robotics is speeding up the ability to bring robots from proof of concept to real-world deployment.

Talla gave a peek into the growing use of generative AI in the NVIDIA robotics ecosystem, where robotics innovators like Boston Dynamics and Collaborative Robotics are changing the landscape of human-robot interaction.

Explore generative AI sessions and experiences at NVIDIA GTC, the global conference on AI and accelerated computing, running March 18-21 in San Jose, Calif., and online.

NVIDIA to Reveal New AI Innovations at CES 2024

In the lead-up to next month’s CES trade show in Las Vegas, NVIDIA will unveil its latest advancements in artificial intelligence — including generative AI — and a spectrum of other cutting-edge technologies.

Scheduled for Monday, Jan. 8, at 8 a.m. PT, the company’s special address will be publicly streamed. Save the date and plan to tune in to the virtual address, which will focus on consumer technologies and robotics, on NVIDIA’s website, YouTube or Twitch.

AI and NVIDIA technologies will be the focus of 14 conference sessions, including four at CES Digital Hollywood, among them “Reshaping Retail – AI Creating Opportunity,” “Robots at Work” and “Cracking the Smart Car.”

And throughout CES, NVIDIA’s story will be enriched by the presence of over 85 NVIDIA customers and partners.

  • Consumer: AI, gaming and NVIDIA Studio announcements and demos with partners including Acer, ASUS, Dell, GIGABYTE, HP, Lenovo, MSI, Razer, Samsung, Zotac and more.
  • Auto: Showcasing partnerships with leaders including Mercedes-Benz, Hyundai, Kia, Polestar, Luminar and Zoox.
  • Robotics: Working alongside Dreame Innovation Technology, DriveU, Ecotron Corp., e-con Systems, Enchanted Tools, GluxKind, Hesai Technology, Leopard Imaging, Segway-Ninebot (Willand (Beijing) Technology Co., Ltd.), Orbbec, Qt Group, Unitree Robotics, Voyant Photonics and ZVISION Technologies Co., Ltd.
  • Enterprise: Collaborations with Accenture, Adobe, Altair, Ansys, AWS, Capgemini, Dassault Systèmes, Deloitte, Google, Meta, Microsoft, Siemens, Wipro and others.

For the investment community, NVIDIA will participate in a CES Virtual Fireside Chat hosted by J.P. Morgan on Tuesday, Jan. 9, at 8 a.m. PT. Listen to the live audio webcast at investor.nvidia.com.

Visit NVIDIA’s event web page for a complete list of sessions and a view of our extensive partner ecosystem at the show.

Design Tool Think Tank Required

When I was in the EDA industry as a technologist, my role had three main parts. The first was to tell customers about new technologies being developed and tool extensions that would appear in the next release. These were features they might find beneficial in the projects they were undertaking today and, even more so, in future projects. Second, I would try to find out what new issues they were encountering, or where the tools were not delivering the capabilities they required. This would feed into tool development planning. And finally, I would take those features selected by the marketing team for implementation and work out how best to implement them if it wasn’t obvious to the development teams.

By far the most difficult task of the three was getting new requirements from customers. Most engineers have their heads down, concentrating on getting their latest chip out. When you ask them about new features, the only things they offer are their current pain points. These usually involve incremental features, bugs with disliked workarounds, or insufficient performance.

Thirty years ago, when I first started doing that role, there were dedicated methodology groups within the larger companies whose job it was to develop flows and methodologies for future projects. They would appear to be the ideal people to ask, but in many cases they were so disconnected from the development teams that what they asked for would never actually be used. These groups were idealists who wanted to drive revolutionary changes, whereas the development teams wanted evolutionary tools. The furthest many of those developments went was pilot projects that never became mainstream.

It seems as if the industry needs a better path to get requirements into the EDA companies. This used to be defined by the ITRS, which would look forward and project the new capabilities that would be required and the timeframes for them. That no longer exists. Today, standards are being driven by semiconductor companies. This is a change from the past, where we used to see the EDA companies driving the developments done within groups like Accellera. When I look at their recent undertakings, most of them are driven by end users.

Getting a standards group started today happens fairly late in the process. It implies an immediate need, but does not really allow time for solutions to be developed ahead of time. It appears that a think tank is required where the industry can discuss issues and problems for which new tool development is required. That can then be built into the EDA roadmaps so that the technology becomes available when it is needed.

One such area is power analysis. I have been writing stories about how important power and energy are becoming, and how they may indeed soon become the limiter for many of the most complex designs. Some of the questions I always ask are:

  • What tools are being developed for doing power analysis of software?
  • How can you calculate the energy consumed for a given function?
  • How can users optimize a design for power or energy?

I rarely get straight answers to any of these questions. Instead, I’m often given vague ideas about how a user could do this in a manual fashion given the tools currently available.
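
For what it's worth, here is a minimal sketch of what that manual bookkeeping looks like, assuming you already have a sampled power trace from a power-analysis tool and a profile of which software function was running in each sample window. The numbers are illustrative only:

```python
# Attribute E = sum(P * dt) to whichever function was running in each
# sample window. Inputs would come from power-analysis and profiling
# tools; the trace below is made up.
from collections import defaultdict

SAMPLE_PERIOD_S = 1e-6  # one power sample per microsecond (assumption)
power_trace_w   = [0.8, 1.9, 2.1, 2.0, 0.7, 0.6, 1.5]
active_function = ["idle", "fft", "fft", "fft", "idle", "idle", "memcpy"]

def energy_per_function(trace, functions, dt):
    """Integrate power over time, bucketed by the executing function."""
    energy = defaultdict(float)
    for watts, fn in zip(trace, functions):
        energy[fn] += watts * dt
    return dict(energy)

for fn, joules in energy_per_function(power_trace_w, active_function,
                                      SAMPLE_PERIOD_S).items():
    print(f"{fn}: {joules * 1e6:.2f} uJ")
```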

I was beginning to think I was barking up the wrong tree and perhaps these were not legitimate concerns. My sanity was restored by a comment on one of my recent power related stories. Allan Cantle, OCP HPC Sub-Project Leader at Open Compute Project Foundation, wrote: “While it’s great to see articles like this highlight the need for us all to focus on energy centric computing, the sad news is that our tools don’t report energy in any obvious way to show the stupid architectural mistakes we often make from an energy consumption perspective. We are solving all the problems from a bottoms-up perspective by bringing things closer together. While that does bring tremendous energy efficiency benefits, it also creates massively increasing energy density. There is so much low-hanging fruit from a top-down system architecture approach that the industry is missing because we need to think outside the box and across our silos.”

Cantle went on to say: “A trivial improvement in tools that report energy consumption as a first-class metric will make it far easier for us to understand and rectify the mistakes we make as we build new energy-centric, domain-specific computers for each application. Alternatively, the silicon gods that rule our industry would be wise to take a step backward and think about the problem from a systems level perspective.”

I couldn’t agree more, and I find it frustrating that no EDA company seems to be listening. I am sure part of the problem is that the large customers are working on their own internal solutions, which they feel will provide them with a competitive advantage. Once it becomes clear that all of their competitors have similar solutions, and that the advantage is gone, they will look to transfer those solutions to the EDA companies so they do not have to maintain them. The EDA companies will then start to fight to make the solution they have acquired the standard. It all takes a long time.

In partial defense of the EDA companies, they are facing so many new issues these days that they are spread very thin dealing with new nodes, 2.5D, 3D, shift left, multi-physics, AI algorithms – to name just a few. They already spend more on R&D than most technology companies as a percentage of revenue.

Perhaps Accellera could start to include discussion forums in events like DVCon, which would allow an open discussion about the problems the industry needs solved. Perhaps it could start to produce the EDA equivalent of the old ITRS roadmap. It sure would save a lot of time and energy (pun intended).

The post Design Tool Think Tank Required appeared first on Semiconductor Engineering.

2.5D Integration: Big Chip Or Small PCB?

Defining whether a 2.5D device is a printed circuit board shrunk down to fit into a package, or is a chip that extends beyond the limits of a single die, may seem like hair-splitting semantics, but it can have significant consequences for the overall success of a design.

Planar chips always have been limited by the size of the reticle, which is about 858mm². Beyond that, yield issues make the silicon uneconomical. For years, that has limited the number of features that could be crammed onto a planar substrate. Any additional features would need to be designed into additional chips and connected with a printed circuit board (PCB).

The advent of 2.5D packaging technology has opened up a whole new axis for expansion, allowing multiple chiplets to be interconnected inside an advanced package. But the starting point for this packaged design can have a big impact on how the various components are assembled, who is involved, and which tools are deployed and when.

There are several reasons why 2.5D is gaining ground today. One is cost. “If you can build smaller chips, or chiplets, and those chiplets have been designed and optimized to be integrated into a package, it can make the whole thing smaller,” says Tony Mastroianni, advanced packaging solutions director at Siemens Digital Industries Software. “And because the yield is much higher, that has a dramatic impact on cost. Rather than having 50% or below yield for die-sized chips, you can get that up into the 90% range.”
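
The yield argument can be checked with the classic Poisson die-yield model, Y = exp(-A × D0). The defect density below is an assumed illustrative value, not a foundry figure, but it reproduces the roughly 50%-versus-90% split Mastroianni describes:

```python
# Poisson die-yield model: the probability a die is defect-free falls
# exponentially with area, which is the economic case for chiplets.
import math

D0 = 0.1  # defects per cm^2 (assumed, illustrative)

def die_yield(area_cm2: float) -> float:
    return math.exp(-area_cm2 * D0)

reticle_die = 8.0  # cm^2, close to the ~858 mm^2 reticle limit
chiplet     = 1.0  # cm^2, one slice of the same total silicon

print(f"reticle-sized die: {die_yield(reticle_die):.0%}")  # ~45%
print(f"single chiplet:    {die_yield(chiplet):.0%}")      # ~90%
```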

Interconnecting chips using a PCB also limits performance. “Historically, we had chips packaged separately, put on the PCB, and connected with some routing,” says Ramin Farjadrad, CEO and co-founder of Eliyan. “The problems people started to face were twofold. One was that the bandwidth between these chips was limited by going through the PCB, and then a limited number of balls on the package limited the connectivity between these chips.”

The key difference with 2.5D compared to a PCB is that 2.5D uses chip dimensions. There are much finer-grain wires, and various components can be packed much closer together on an interposer or in a package than on a board. For those reasons, wires can be shorter, there can be more of them, and bandwidth is increased.

That impacts performance at multiple levels. “Since they are so close, you don’t have the long transport RC or LC delays, so it’s much faster,” says Siemens’ Mastroianni. “You don’t need big drivers on a chip to drive long traces over the board, so you have lower power. You get orders of magnitude better performance — and lower power. A common metric is to talk about pico joules per bit. The amount of energy it takes to move bits makes 2.5D compelling.”
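
The picojoules-per-bit arithmetic is worth making explicit. The two efficiency figures in this sketch are generic illustrations of a board-level link versus an in-package link, not numbers quoted by anyone in this article:

```python
# Energy per bit times bandwidth gives I/O power: 1 pJ/bit at 1 Tb/s is 1 W.
def io_power_w(pj_per_bit: float, bandwidth_tbps: float) -> float:
    return pj_per_bit * 1e-12 * bandwidth_tbps * 1e12

print(io_power_w(10.0, 5.0))  # 50.0 W: 5 Tb/s over a long board-level link
print(io_power_w(0.5, 5.0))   #  2.5 W: the same traffic kept in-package
```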

Still, the mindset affects the initial design concept, and that has repercussions throughout the flow. “If you talk to a die designer, they’re probably going to say that it is just a big chip,” says John Park, product management group director in the Custom IC & PCB Group at Cadence. “But if you talk to a package designer, or a board designer, they’re going to say it’s basically a tiny PCB.”

Who is right? “The internal organizational structure within the company often decides how this is approached,” says Marc Swinnen, director of product marketing at Ansys. “Longer term, you want to make sure that your company is structured to match the physics and not try to match the physics to your company.”

What is clear is that nothing is certain. “The digital world was very regular in that every two years we got a new node that was half size,” says Cadence’s Park. “There would be some new requirements, but it was very evolutionary. Packaging is the Wild West. We might get 8 new packaging technologies this year, 3 next year, 12 the next year. Many of these are coming from the foundries, whereas it used to be just from the outsourced semiconductor assembly and test companies (OSATs) and the substrate providers. While the foundries are a new entrant, the OSATs are offering some really interesting packaging technologies at a lower cost.”

Part of the reason for this is that different groups of people have different requirement sets. “The government and the military see the primary benefits as heterogeneous integration capabilities,” says Ansys’ Swinnen. “They are not pushing the edge of processing technology. Instead, they are designing things like monolithic microwave integrated circuits (MMICs), where they need waveguides for very high-speed signals. They approach it from a packaging assembly point of view. Conversely, the high-performance compute (HPC) companies approach it from a pile of 5nm and 3nm chips with high performance high-bandwidth memory (HBM). They see it as a silicon assembly problem. The benefit they see is the flexibility of the architecture, where they can throw in cores and interfaces and create products for specific markets without having to redesign each chiplet. They see flexibility as the benefit. Military sees heterogeneous integration as the benefit.”

Materials
There are several materials used as the substrate in 2.5D packaging technology, each of which has different tradeoffs in cost, density, and bandwidth, along with its own set of physical issues that must be overcome. One of the primary points of differentiation is the bump pitch, as shown in figure 1.

Fig 1. Chiplet interconnection for various substrate configurations. Source: Eliyan

When talking about an interposer, it generally is considered to be silicon. “The interposer could be a large piece of silicon (Fig 1 top), or just silicon bridges between the chips (Fig 1 middle) to provide the connectivity,” says Eliyan’s Farjadrad. “Both of these solutions use micro-bumps, which have high density. Interposers and bridges provide a lot of high-density bumps and traces, and that gives you bandwidth. If you utilize 1,000 wires each running at 5Gb/s, you get 5Tb/s. If you have 10,000, you get 50Tb/s. But those signals cannot go more than two or three millimeters. Alternatively, if you avoid the silicon interposer and you stay with an organic package (Fig 1 bottom), such as a flip-chip package, the density of the traces is 5X to 10X less. However, the thickness of the wires can be 5X to 10X more. That’s a significant advantage, because the cross section of the wire goes up with the square of its dimensions, so the resistance comes down significantly. If it’s 5X less density, that means you can run signals almost 25X further.”
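
Farjadrad's scaling argument can be sanity-checked with a first-order model in which wire resistance per unit length varies as 1/(width × thickness). The 5X scale factors below come from his quote; everything else is simplified:

```python
# First-order model: R per unit length ~ 1 / (width * thickness), so wires
# 5X wider and 5X thicker have ~25X less resistance and, to first order,
# can run ~25X further for the same resistive loss.
def relative_resistance(width_scale: float, thickness_scale: float) -> float:
    return 1.0 / (width_scale * thickness_scale)

r_ratio = relative_resistance(5.0, 5.0)
print(f"resistance ratio: {r_ratio:.2f}")        # 0.04, i.e. 25X lower
print(f"reach multiplier: {1.0 / r_ratio:.0f}X") # ~25X further
```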

For some people, it is all about bandwidth per millimeter. “If you have a parallel bus, or a parallel interface that is high speed, and you want bandwidth per millimeter, then you would probably pick a silicon interposer,” says Kent Stahn, senior manager of hardware engineering in Synopsys‘ Solutions Group. “An organic substrate is low-loss, low-cost, but it doesn’t have the density. In between, there are a bunch of solutions that deliver on some of that, but not for the same cost.”

There are other reasons to pick a substrate material, as well. “Silicon interposer comes from a foundry, so availability is a problem,” says Manuel Mota, senior staff product manager in Synopsys’ Solutions Group. “Some companies are facing challenges in sourcing advanced packages because capacity is taken. By going to other technologies that have a little less bandwidth density, but perhaps enough for your application, you can find them elsewhere. That’s becoming a critical aspect.”

All of these technologies are progressing rapidly, however. “The reticle limit is about 858mm²,” says Park. “People are talking about interposers that are perhaps four times that size, but we have laminates that go much bigger. Some of the laminate substrates coming from Japan are approaching the same level of interconnect density that we can get from silicon. I personally see more push towards organic substrates. Chip-on-Wafer-on-Substrate (CoWoS) from TSMC uses a silicon interposer and has been the technology of choice for about 12 years. More recently they introduced CoWoS-R, which uses a polyimide film, closer to an organic type of substrate. Now we hear a lot about glass substrates.”

Over time, the total real estate inside the package may grow. “It doesn’t make sense for foundries to continue to build things the size of a 30-inch printed circuit board,” adds Park. “There are materials that are capable of addressing the bigger designs. Where we really need density is die-to-die. We want those chiplets right next to each other, a couple of millimeters of interconnect length. We want things very short. But the rest of it is just fanning out the I/O so that it connects to the PCB.”

This is why bridges are popular. “We do see a progression to bridges for the high-speed part of the interface,” says Synopsys’ Stahn. “The back side of it would be fanout, like RDL fanout. We see RDL packages that are going to be more like traditional packages going forward.”

Interposers offer additional capabilities. “Today, 99% of the interposers are passive,” says Park. “There’s no front end of line, there are no device layers. It’s purely back end of line processing. You are adding three, four, five metal layers to that silicon. That’s what we call a passive interposer. It’s just creating that die-to-die interconnect. But there are people taking that die and making it an active interposer, basically adding logic to that.”

That can happen for different purposes. “You already see some companies doing active interposers, where they add power management or some of the controls logic,” says Mota. “When you start putting active circuits on interposer, is it still a 2.5D integration, or does it become a 3D integration? We don’t see a big trend toward active interposers today.”

There are some new issues, though. “You have to consider coefficients of thermal expansion (CTE) mismatches,” says Stahn. “This happens whenever two materials with different CTEs are bonded together. Let’s start with the silicon interposer. You can get higher wattage systems, where the SoCs can be talking to their peers, and that can consume a lot of power. A silicon interposer still has to go in a package. The CTE mismatches are between the silicon to the package material. And with the bridge, you’re using it where you need it, but it’s still silicon die-to-die. You have to do the thermal mechanical analysis to make sure that the power that you’re delivering, and the CTE mismatches that you have, result in a viable system.”

While signal lengths in theory can get longer, this poses some problems. “When you’re making those long connections inside a chip, you typically limit those routes to a couple of millimeters, and then you buffer it,” says Mastroianni. “The problem with a passive silicon interposer is there are no buffers. That can really become a serious issue. If you do need to make those connections, you need to plan those out very carefully. And you do need to make sure you’re running timing analysis. Typically, your package guys are not going to be doing that analysis. That’s more of a problem that’s been solved with static timing analysis by silicon engineers. We do need to introduce an STA flow and deal with all the extractions that include organic and silicon type traces, and it becomes a new problem. When you start getting into some of those very long traces, your simple RC timing delays, which are assumed in normal STA delay calculators, don’t account for some of the inductance and mutual inductance between those traces, so you can get serious accuracy issues for those long traces.”

Active interposers help. “With active interposers, you can overcome some of the long-distance problems by putting in buffers or signal repeaters,” says Swinnen. “Then it starts looking more like a chip again, and you can only do it on silicon. You have the EMIB technology from Intel, where they embedded a chiplet into the interposer as an active bridge. The chips talk to the EMIB chip, and they talk to each other through this little active bridge chip, which is not exactly an active interposer, but acts almost like an active interposer.”

But even passive components add value. “The first thing that’s being done is including trench capacitors in the interposer,” says Mastroianni. “That gives you the ability to do some good decoupling, where it counts, close to the die. If you put them out on the board, you lose a lot of the benefits for the high-speed interfaces. If you can get them in the interposer, sitting right under where you have the fast-switching speed signals, you can get some localized decoupling.”

In addition to different materials, there is the question of who designs the interposer. “The industry seems to think of it as a little PCB in the context of who’s doing the design,” says Matt Commens, senior manager for product management at Ansys. “The interposers are typically being designed by packaging engineers, even though they are silicon processes. This is especially true for the high-performance ones. It seems counterintuitive, but they have that signal integrity background, they’ve been designing transmission lines and minimizing mismatch at interconnects. A traditional IC designer works from a component point of view. So certainly, the industry is telling us that the people they’re assigning to do that design work are packaging type of personas.”

Power
There are some considerable differences in routing between PCBs and interposers. “Interposer routing is much easier, as the number of components is drastically reduced compared to the PCB,” says Andy Heinig, head of department for efficient electronics at Fraunhofer IIS/EAS. “On the other hand, the power grid on the interposer is much more complex due to the higher resistance of the metal layers and the fact that the power grid is cut out by signal wires. The routing for the die-to-die interface is more complex due to the routing density.”

Power delivery looks very different. “If you look at a PCB, they put these big metal pour areas embedded in the layers, and they void out areas where things need to go through,” says Park. “You put down a bunch of copper and then you void out the others. We can’t build an interposer that way. We have to deposit the interconnect, so the power and ground structures on a silicon interposer will look more like a digital chip. But the signal will look more like a PCB or laminate package.”

Routing does look more like a PCB than a chip. “You’ll see things like teardrops or fillets where a trace makes a connection to a pad or via, to create better yield,” adds Park. “The routing styles today are more aligned to PCBs than they are to a digital IC, where you just have 90° orthogonal corners and clean routing channels. For interposers, whether silicon or organic, the via is often bigger than the wire, which is a classic PCB problem. The routers, if we’re talking about digital, are again more like a small PCB’s than a die’s.”

TSVs can create problems, too. “If you’re going to treat them as square, you’re losing a lot of space at the corners,” says Swinnen. “You really want 45° around those objects. Silicon routers are traditionally Manhattan, although there has been a long tradition of RDL routing, which is the top layer where the bumps are connected. That has traditionally used octagonal bumps or round bumps, and then 45° routing. It’s not as flexible as the PCB routing, but they have redistribution layer routers, and also they have some routers that come from the full custom side which have full river routing.”

Related Reading
True 3D Is Much Tougher Than 2.5D
While terms often are used interchangeably, they are very different technologies with different challenges.
Thermal Integrity Challenges Grow In 2.5D
Work is underway to map heat flows in interposer-based designs, but there’s much more to be done.

The post 2.5D Integration: Big Chip Or Small PCB? appeared first on Semiconductor Engineering.

Accellera Preps New Standard For Clock-Domain Crossing

Part of the hierarchical development flow is about to get a lot simpler, thanks to a new standard being created by Accellera. What is less clear is how long it will take before users see any benefit.

At the register transfer level (RTL), when a data signal passes between two flip flops, it initially is assumed that clocks are perfect. After clock-tree synthesis and place-and-route are performed, there can be considerable timing skew between the clock edges arriving at those adjacent flops. That makes timing sign-off difficult, but at least the clocks are still synchronous.

But if the clocks come from different sources, are at different frequencies, or a design boundary exists between the flip flops — which would happen with the integration of IP blocks — it’s impossible to guarantee that no clock edges will arrive when the data is unstable. That can cause the output to become unknown for a period of time. This phenomenon, known as metastability, cannot be eliminated, and the verification of those boundaries is known as clock-domain crossing (CDC) analysis.
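
The standard back-of-the-envelope way to reason about metastability risk is the synchronizer MTBF formula, MTBF = exp(t_r/τ) / (T_w × f_clk × f_data). The device parameters in this sketch are illustrative assumptions rather than values for any particular process, but they show why a second synchronizing flop buys an astronomical improvement:

```python
# Synchronizer MTBF = exp(t_r / tau) / (Tw * f_clk * f_data).
import math

TAU    = 20e-12   # metastability resolution constant (s), assumed
TW     = 100e-12  # capture window around the clock edge (s), assumed
F_CLK  = 1e9      # destination clock: 1 GHz
F_DATA = 100e6    # data toggle rate: 100 MHz

def mtbf_seconds(resolve_time_s: float) -> float:
    """MTBF grows exponentially with the time the signal has to settle."""
    return math.exp(resolve_time_s / TAU) / (TW * F_CLK * F_DATA)

# ~0.3 ns of settling time vs. an extra full cycle from a second flop.
print(f"single flop:           {mtbf_seconds(0.3e-9):.1e} s")  # fails often
print(f"two-flop synchronizer: {mtbf_seconds(1.3e-9):.1e} s")  # effectively never
```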

Special care is required on those boundaries. “You have to compensate for metastability by ensuring that the CDC crossings follow a specific set of logic design principles,” says Prakash Narain, president and CEO of Real Intent. “The general process in use today follows a hierarchical approach and requires that the clock-domain crossing internal to an IP is protected and safe. At the interface of the IP, where the system connects with the IP, two different teams share the problem. An IP provider may recommend an integration methodology, which often is captured in an abstraction model. That abstraction model enables the integration boundary to be verified while the internals of it will not be checked for CDC. That has already been verified.”

In the past, those abstract models differentiated the CDC solutions from various vendors. That’s no longer the case. Every IP and tool vendor has different formats, making it costly for everyone. “I don’t know that there’s really anything new or differentiating coming down the pipe for hierarchical modeling,” says Kevin Campbell, technical product manager at Siemens Digital Industries Software. “The creation of the standard will basically deliver much faster results with no loss of quality. I don’t know how much more you can differentiate in that space other than just with performance increases.”

While this has been a problem for the whole industry for quite some time, Intel decided it was time for a solution. The company pushed Accellera to take up the issue, and helped facilitate the creation of the standard by chairing the committee. “I’m going to describe three methods of building a product,” says Iredamola “Dammy” Olopade, chair of the Accellera working group, and a principal engineer at Intel. “Method number one is where you build everything in a monolithic fashion. You own every line of code, you know the architecture, you use the tool of your choice. That is a thing of the past. The second method uses some IP. It leverages reuse and enables the quick turnaround of new SoCs. There used to be a time when all IPs came from the same source, and those were integrated into a product. You could agree upon the tools. We are quickly moving to a world where I need to source IPs wherever I can get them. They don’t use the same tools as I do. In that world, common standards are critical to integrating quickly.”

In some cases, there is a hierarchy of IP. “Clock-domain crossings are a central part of our business,” says Frank Schirrmeister, vice president of solutions and business development at Arteris. “A network-on-chip (NoC) can be considered as ‘CDC central’ because most blocks connected to the NoC have different clocks. Also, our SoC integration tools see all of the blocks to be integrated, and those touch various clock domains and therefore need to deal with the CDC code that is inserted.”

This whole thing can become very messy. “While every solution supports hierarchical modeling, every tool has its own model solution and its own model representation,” says Siemens’ Campbell. “Vendors, or users, are stuck with a CDC solution, because the models were created within a certain solution. There’s no real transportability between any of the hierarchical modeling solutions unless they want to go regenerate models for another solution.”

That creates a lot of extra work. “Today, when dealing with customer CDC issues, we have to consider the customer’s specific environment, and for CDC, a potential mix of in-house flows and commercial tools from various vendors,” says Arteris’ Schirrmeister. “The compatibility matrix becomes very complex, very fast. If adopted, the new Accellera CDC standard bears the potential to make it easier for IP vendors, like us, to ensure compatibility and reduce the effort required to validate IP across multiple customer toolsets. The intent, as specified in the requirements is that ‘every IP provider can run its tool of choice to verify and produce collateral and generate the standard format for SoCs that use a different tool.'”

Everyone benefits. “IP providers will not need to provide extra documentation of clock domains for the SoC integrator to use in their CDC analysis,” says Ahmed Nasr, digital design manager at Mixel. “The standard CDC attributes generated by the EDA tool will be self-contained.”

The use model is relatively simple. “An IP developer signs off on CDC and then exports the abstract model,” says Real Intent’s Narain. “It is likely they will write this out in both the Accellera format and the native format to provide backward compatibility. At the next level of hierarchy, you read in the abstract model instead of reading in the full view of the design. They have various views of the IP, including the CDC view of the IP, which today is on the basis of whatever tool they use for CDC sign-off.”
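
Purely as an illustration of what such an abstract view might carry, here is a sketch of one IP boundary port. The attribute names echo those mentioned later in this article (logic, associated_clocks, clock_period); the actual Accellera schema and file format are not reproduced here:

```python
# One boundary port of a hypothetical CDC abstract view, as a plain record.
cdc_abstract_model = {
    "ip": "usb_ctrl",
    "ports": {
        "clk_usb": {
            "kind": "clock",
            "clock_period": "8.0ns",      # a 125 MHz domain
        },
        "irq_out": {
            "kind": "output",
            "associated_clocks": ["clk_usb"],
            "logic": "two_flop_sync",     # crossing already protected inside
        },
    },
}

# An SoC-level CDC tool would read this view instead of the IP's full RTL,
# re-checking only the integration boundary, as described above.
print(cdc_abstract_model["ports"]["irq_out"]["associated_clocks"])
```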

The potential is significant. “If done right and adopted, the industry may arrive at a common language to describe CDC aspects that can streamline the validation process across various tools and environments used by different users,” says Schirrmeister. “As a result, companies will be able to integrate and validate IP more efficiently than before, accelerating development cycles and reducing the complexity associated with SoC integration.”

The standard
Intel’s Olopade describes the approach that was taken during the creation of the standard. “You take the most complex situations you are likely to find, you box them, and you co-design them in order to reduce the risk of bugs,” he said. “The boundaries you create are supposed to be simple boundaries. We took that concept, and we brought it into our definition to say the following: ‘We will look at all kinds of crossings, we will figure out the simple common uses, and we will cover that first.’ That is expected to cover 95% to 98% of the community. We are not trying to handle 700 different exceptions. It is common. It is simple. It is what guarantees production quality, not just from a CDC standpoint, but just from a divide-and-conquer standpoint.”

That was the starting point. “Then we added elements to our design document that says, ‘This is how we will evaluate complexity, and this is how we’ll determine what we cover first,'” he says. “We broke things down into three steps. Step one is clock-domain crossing. Everyone suffers from this problem. Step two is reset-domain crossing (RDC). As low power is getting into more designs, there are a lot more reset domains, and there is risk between these reset domains. Some companies care, but many companies don’t because they are not in a power-aware environment. It became a secondary consideration. Beyond the basic CDC in phase one, and RDC in phase two, all other interesting, small usage complexities will be handled in phase three as extensions to the standard. We are not going to get bogged down supporting everything under the sun.”

Within the standards group there are two sub-groups — a mapping team and a format team. Common standards, such as AMBA, UCIe, and PCIe, have been looked at to make sure that these are fully covered by the standard. That means the concepts should be useful for future markets.

“The concepts contained in the standard are extensible to hardened chiplets,” says Mixel’s Nasr. “By providing an accurate standard CDC view for the chiplet, it will enable integration with other chiplets.”

Some of those issues have yet to be fully explored. “The standard’s current documentation primarily focuses on clock-domain crossing within an SoC itself,” says Schirrmeister. “Its direct applicability to the area of chiplets would depend on further developments. The interfaces between fully hardened IP blocks on chiplets would communicate through standard interfaces like UCIe, BoW, or XSR, so the synchronization issues between chiplets on substrates would appear to be elevated to the protocol levels.”

Reset-domain crossings have yet to appear in the standard. “The genesis of CDC is asynchronous clocks,” says Narain. “But the genesis for reset-domain crossing is asynchronous resets. While the destination is due to the clock, the source of the problem is somewhere else. And as a result, the nature of the problem, the methodology that people use to manage that problem, are very different. The kind of information that you need to retain, and the kind of information that you can throw away, is different for every problem. Hence, abstractions are actually very customized for the application.”

Does the standard cover enough ground? That is part of the purpose of the review period that was used to collect information. “I can see some room for future improvement — for example, making some attributes mandatory like logic, associated_clocks, clock_period for clock ports,” says Nasr. “Another proposed improvement is adding reconvergence information, to be able to detect reconverging outputs of parallel synchronizers.”

The impact of all of this, if realized, is enormous. “If you truly run a collaborative, inclusive, development cycle, two things will happen,” says Olopade. “One, you are going to be able to find multiple ways to solve each problem. You need to understand the pros and cons against the real problems you are trying to solve and agree on the best way we should do it together. For each of those, we record the options, the pros and cons, and the reason one was selected. In a public review, those that couldn’t be part of that discussion get to weigh in. We weigh what they are suggesting against why we chose it. In the cases where it is part of what we addressed, and we justified it, we just respond, and we do not make a change. If you’re truly inclusive, you do allow that feedback to cause you to change your mind. We received feedback on about three items that we had debated, where the feedback challenged the decisions and got us to rehash things.”

The big challenge
Still, the creation of a standard is just the first step. Unless a standard is fully adopted, its value becomes diminished. “It’s a commendable objective and a worthy endeavor,” says Schirrmeister. “It will make interoperability easier and eventually allow us, and the whole industry, to reduce the compatibility matrix we maintain to deal with vendor tools individually. It all will depend on adoption by the vendors, though.”

It is off to a good start. “As with any standard, good intentions sometimes get severed by reality,” says Campbell. “There has been significant collaboration and agreements on how the standard is being pushed forward. We did not see self-submarining, or some parties playing nice just to see what’s going on but not really supporting it. This does seem like good collaboration and good decision making across the board.”

Implementation is another hurdle. “Will it actually provide the benefit that it is supposed to provide?” asks Narain. “That will depend upon how completely and how quickly EDA tool vendors provide support for the standard. From our perception, the engineering challenge for implementing this is not that large. When this is standardized, we will provide support for it as soon as we can.”

Even then, adoption isn’t a slam dunk. “There are short- and long-term problems,” warns Campbell. “IP vendors already have to support multiple formats, but now you have to add Accellera on top of that. There’s going to be some pain both for the IP vendors and for EDA vendors. We are going to have to be backward-compatible and some programs go on for decades. There’s a chance that some of these models will be around for a very long time. That’s the short-term pain. But the biggest hurdle to overcome for a third-party IP vendor, and EDA vendor, is quality assurance. The whole point of a hierarchical development methodology is faster CDC closure with no loss in quality. The QA load here is going to be big, because no customer is going to want to take the risk if they’ve got a solution that is already working well.”

Some of those issues and fears are expected to be addressed at the upcoming DVCon conference. “We will be providing a tutorial on CDC,” says Olopade. “The first 30 minutes covers the basics of CDC for those who haven’t been doing this for the last 10 years. The next hour will talk about the Accellera solution. It will concentrate on those topics which were hotly debated, and we need to help people understand, or carry people along with what we recommend. Then it may become more acceptable and more adoptive.”

Related Reading
Design And Verification Methodologies Breaking Down
As chips become more complex, existing tools and methodologies are stretched to the breaking point.

The post Accellera Preps New Standard For Clock-Domain Crossing appeared first on Semiconductor Engineering.

Changing the Mental Health Emergency Response System in Washington County, Oregon

On October 24, 2022, at 2 a.m., 27-year-old Joshua Wesley called a crisis help line from his home in Washington County, Oregon, just west of Portland. He was having suicidal thoughts and knew that he needed professional help. But instead of the mental health provider he specifically requested, he encountered a group of armed police officers at his door. This response not only deprived Wesley of the immediate psychiatric care that he needed, but it also led to him being arrested and seriously injured by the responding officer. He ultimately spent two weeks in the hospital, and six months in jail.

Wesley told us that he felt he needed qualified professionals to console him, talk him down, and give him solutions. But the officers who showed up made the situation worse by simply trying to put him in handcuffs and cart him off.

Joining forces with the ACLU, Disability Rights Oregon, the ACLU of Oregon, and the law firm Sheppard Mullin, Wesley is a plaintiff in a recently filed lawsuit against Washington County and the local 911 dispatch center. The lawsuit asserts that the county’s emergency response system discriminates against people with mental health disabilities and exposes them to risk of serious harm, including injury, arrest, and incarceration. Wesley said that he joined the case because he believes strongly in helping out others facing similar struggles.

A Life-or-Death Situation

Washington County has a history of inappropriately responding to mental health crises. In 2022, police officers were dispatched to 100 percent of the calls coded as “behavioral health incidents” in Washington County. The county does have mobile crisis teams comprised exclusively of mental health clinicians, the sole non-police response available there. But while the mobile crisis teams are intended to be available 24/7, in practice they’re underfunded, not connected with the emergency dispatch system, and often unavailable — especially at night, when many mental health crises occur.

Police response to mental health crises can be dangerous and even deadly. Police officers are not qualified mental health professionals and should not be expected to assess and treat people in crisis. Beyond that, police presence may actually make mental health symptoms worse, triggering anxiety and paranoia. Most alarming of all, it is estimated that people with untreated mental illness are 16 times more likely than others to be killed by the police during an encounter.

Related Reading
911: Reimagining a System that Defaults to Dispatching Police
Emergency response systems must be revamped to equip 911 call-takers to dispatch non-police first responders.

That’s what nearly happened in Wesley’s case. Instead of being provided with the care he was seeking — on-site psychiatric assessment and treatment — he was placed under a “police officer hold,” a form of involuntary detention, and transported to a hospital via ambulance. Wesley was not treated or stabilized during transport, and his symptoms worsened. At the hospital, Wesley was still suicidal, and he attempted to take an officer’s firearm to use on himself. During the incident, the officer stabbed Wesley several times, resulting in serious injuries to his chest, stomach, and head.

The damage to Wesley’s body serves as a constant reminder of the incident. The scars left from it demonstrate that there could have been other ways to deal with the situation, Wesley told us.

Wesley then spent two weeks in the hospital recovering. During this time, his repeated requests for mental health assistance and therapy were denied. He remained handcuffed to his bed and kept under near-constant police surveillance. Wesley felt that the doctors stopped looking at him as a patient who needed help and treatment to heal, and instead saw him as a criminal.

After being released from the hospital, Wesley faced criminal charges arising from the altercation with the officer. He spent six months in jail, missing the birth of his first and only son. He also missed the holidays and time with his family during a period of great strife.

Ultimately, it took months for Wesley to receive the psychiatric help that he first sought in October.

A More Humane Emergency Response

When someone in Washington County experiences a physical health crisis, like a heart attack or a severe allergic reaction, they can call 911 and expect a response from a qualified medical professional, like an EMT or paramedic. The same cannot be said, however, for someone experiencing a mental health crisis.

The lawsuit explains how this discrepancy violates the Americans with Disabilities Act and the Rehabilitation Act. Mental health crises demand a mental health response — not a police response — because they are, at their core, health emergencies.

Experts agree that mental health emergencies should be addressed by mental health professionals, not the police. As part of its recommended best practices, the Substance Abuse and Mental Health Services Administration (SAMHSA) proposes a three-tiered system that includes a crisis call center, mobile crisis teams, and stabilization centers for walk-ins and drop-offs. SAMHSA also notes that responding with police is “unacceptable and unsafe,” a view that the National Alliance on Mental Illness shares.

As a result of Washington County’s inappropriate response to mental health crises, it discriminates against people with mental health disabilities on a daily basis. This lawsuit seeks to improve the county’s mental healthcare system. Possible solutions include fully funding mobile crisis response teams that can bring care and support to the people who need it, when they need it.

Washington County isn’t the only jurisdiction with a system in need of reform. Justice Department investigations have found similar discrimination in Louisville and Minneapolis, stating that relying on police as mental health first responders causes “real harm in the form of trauma, injury, and death to people experiencing behavioral health issues.”

Wesley hopes that this case brings widespread attention to an issue that impacts many lives on a daily basis. People with mental health disabilities are harmed both by failed responses to mental health crises and because many don’t want to call for help out of fear of an armed police response. Wesley sees a nationwide need for a reckoning over how jurisdictions respond to mental health crises. Counties and other locales should be looking at their systems and asking: Is our system for mental health crisis response fair? Is it safe? Is it right?

How jurisdictions answer these questions could have a major impact on the care and support people with mental health disabilities receive while in crisis. We must not allow discriminatory practices that cause real harm and death to go unchecked.