Democratic Platform Attacks Trump for Not Going to War

By Matthew Petti, Reason | August 20, 2024, 03:11
Then-vice president Joe Biden tours the Joint Security Area on the border between North Korea and South Korea on December 7, 2013. | U.S. Navy Photo by Mass Communication Specialist 2nd Class Chris Church

Donald Trump oversaw some scary moments in international politics. The former president seriously escalated tensions with North Korea and Iran, leading to several war scares. But he pulled back from the brink, sometimes against the wishes of his more hawkish advisers. He avoided a direct U.S.-Iranian war and opened a direct line of communication with North Korea.

Democrats seem to wish he'd gone to war instead. The Democratic National Committee's 2024 platform, approved in a symbolic vote on Monday night, tries to outhawk Trump, denouncing his "fecklessness" on Iran and his "love letters" to North Korea. Although the platform condemns Trump for pulling out of diplomacy with Iran, it also attacks his decisions not to bomb Iran at several crucial points.

Ironically, the Democratic platform is not much different from Republicans' own attacks on the Biden administration. Each side accuses the other of weakness, and neither wants to take credit for diplomacy or own the compromises necessary to avoid war.

It's easy to forget now, but in 2017 the Korean peninsula had become a remarkably tense place. North Korea was testing nuclear weapons and intercontinental ballistic missiles capable of hitting U.S. soil. The U.S. military was massing forces in the region, and Trump was issuing threats.

Trump's national security adviser, H.R. McMaster, reportedly called for a military attack aimed at giving North Korea a "bloody nose." McMaster and Sen. Lindsey Graham (R–S.C.) publicly warned that war might be inevitable.

And then, in January 2018, a false alarm drove home the lesson that nuclear war is nothing to play around with. During a disaster preparedness drill, authorities in Hawaii accidentally sent an alert about an incoming ballistic missile. For more than half an hour, Hawaiians and tourists were convinced that they were going to die in a nuclear war.

A few months later, McMaster was out of the White House. Trump accepted an invitation to meet with North Korean leader Kim Jong Un in June 2018. Trump met Kim again in February 2019. Stepping over the North Korean–South Korean border in June 2019, Trump became the first sitting U.S. president to set foot in North Korea.

The meetings failed to secure a permanent agreement—it didn't help that McMaster's replacement, John Bolton, publicly hinted that denuclearization would end in Kim's violent death—but they bought some crucial breathing room.

The Democrats' 2024 platform attacks the very idea of talks with North Korea. Trump's approach, the platform says, was "embarrassing the United States on the world stage including by flattering and legitimizing Kim Jong Un, exchanging 'love letters' with the North Korean dictator."

This isn't a break with past Democratic rhetoric. During the presidential debates in 2019, then-candidate Joe Biden said that Trump gave "North Korea everything they wanted, creating the legitimacy by having a meeting with Kim Jong Un." Another candidate, Kamala Harris, said that there are "no concessions to be made. He has traded a photo op for nothing."

If even talking to North Korea is a "concession," then it's hard to see what alternative Harris would accept, other than continuing to barrel towards nuclear war.

Iran, unlike North Korea, does not have nuclear weapons. In 2018, Trump tore up an international agreement that regulated Iranian nuclear activities, instead betting on a "maximum pressure" campaign designed to overthrow the Iranian government by cutting off its oil exports. Bolton later said in his memoir that "only regime change would ultimately prevent Iran from possessing nuclear weapons," and then–Secretary of State Mike Pompeo was obsessed with killing the Iranian general Qassem Soleimani.

The Iranian government did not react warmly to the maximum pressure campaign. Iranian forces encouraged rocket attacks on U.S. bases in Iraq, and Iran is believed to be behind sabotage attacks on the international oil industry, including a September 2019 drone strike on Saudi oil infrastructure.

The U.S. military massed forces off the coast of Iran during this time. On June 19, 2019, Iran shot down an American surveillance drone. (The two countries disagree on whether the drone was in Iranian airspace.) Trump ordered a bombing raid on Iranian air defense batteries, then pulled back at the last minute, because killing Iranian troops was "not proportionate to shooting down an unmanned drone."

Although the Democratic platform calls maximum pressure a "reckless and short-sighted decision," it also attacks Trump for failing to hit Iran back at each of these points. "Trump's only response" to an Iraqi militia attack on the U.S. consulate in Basra "was to close our diplomatic facility," the Democrats complain, and "Trump failed to respond against Iran or its proxies" for the attack on Saudi oil facilities.

The platform is somewhat ambiguous on whether Trump should have bombed Iran in June 2019. "Trump responded by tweet and then abruptly called off any actual retaliation, causing confusion and concern among his own national security team," it says. Perhaps putting American lives at risk to avenge the honor of a robot would be too far even for the Biden team.

Maximum pressure reached its climax in January 2020, when Trump followed Pompeo's advice and ordered the military to assassinate Soleimani. Iran responded by launching 12 ballistic missiles at a U.S. base in Iraq, which injured Americans but did not kill anyone. Trump called it even, claiming that "Iran appears to be standing down, which is a good thing for all parties concerned."

At the time, Democrats were highly critical of the decision to risk war by killing an Iranian officer. "Trump just tossed a stick of dynamite into a tinderbox," Biden wrote right after Soleimani was assassinated. After the Iranian retaliation, Democrats immediately put forward a war powers resolution making it clear that the president does not have the authority to start a war with Iran.

The current Democratic platform takes a different tone. When "Iran, for the first and only time in its history, directly launched ballistic missiles against U.S. troops," the document declares disapprovingly, Trump "again took no action." The platform criticizes Trump for making light of U.S. troops' brain injuries without mentioning the assassination that prompted the Iranian attacks in the first place.

After all, it would be hard for Biden to criticize Trump for bringing America to the brink of war in the Middle East when he has done the same.

After four short years of a Democratic administration, the mood among Democratic leaders has gotten more hawkish, especially as the defense of Ukraine gives them a "good war" to rally behind. But that's not necessarily how the American people, including Democratic voters, feel. Direct talks with North Korea are still popular, and direct war with Iran is still unpopular. Republicans and independents are less likely to call themselves hawks than in 2014, and even Democratic voters are only one percentage point more likely to consider themselves hawkish than before.

There is a public appetite for diplomacy and deescalation. But party leaders don't seem to want to take the opportunity. They would prefer to fight over who can outhawk whom.

The post Democratic Platform Attacks Trump for Not Going to War appeared first on Reason.com.


Could Advanced Nuclear Reactors Fuel Terrorist Bombs?

By Glenn Zorpette, IEEE Spectrum | June 18, 2024, 21:21


Various scenarios for getting to net zero carbon emissions from power generation by 2050 hinge on the success of some hugely ambitious initiatives in renewable energy, grid enhancements, and other areas. Perhaps none of these is more audacious than an envisioned renaissance of nuclear power, driven by advanced-technology reactors that are smaller than traditional nuclear power reactors.

What many of these reactors have in common is that they would use a kind of fuel called high-assay low-enriched uranium (HALEU). Its composition varies, but for power generation, a typical mix contains slightly less than 20 percent by mass of the highly fissionable isotope uranium-235 (U-235). That’s in contrast to traditional reactor fuels, which range from 3 percent to 5 percent U-235 by mass, and natural uranium, which is just 0.7 percent U-235.
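To get a feel for what those percentages mean in industrial terms, consider the standard separative-work calculation used to cost enrichment campaigns. The sketch below is illustrative only: the value function and mass-balance relations are textbook, but the feed assay (natural uranium, 0.711 percent), the 0.25 percent tails assay, and the 19.75 percent HALEU product assay are assumed round numbers.

```python
import math

def value(x):
    """Separative-work value function: V(x) = (2x - 1) * ln(x / (1 - x))."""
    return (2 * x - 1) * math.log(x / (1 - x))

def swu_per_kg_product(x_p, x_f=0.00711, x_t=0.0025):
    """kg-SWU needed per kg of product at assay x_p.

    x_f: feed assay (natural uranium, ~0.711% U-235)
    x_t: tails assay (assumed 0.25%)
    """
    feed = (x_p - x_t) / (x_f - x_t)   # kg of natural-uranium feed per kg of product
    tails = feed - 1.0                 # kg of depleted tails per kg of product
    return value(x_p) + tails * value(x_t) - feed * value(x_f)

# Compare conventional reactor fuel with a typical HALEU assay of 19.75%.
for label, assay in [("LEU 4.5%", 0.045), ("HALEU 19.75%", 0.1975)]:
    print(f"{label}: {swu_per_kg_product(assay):5.1f} SWU per kg of product")
```

Run as written, this toy calculation shows a kilogram of HALEU taking roughly six times the separative work of a kilogram of conventional 4.5 percent fuel, which is part of why so few companies produce it.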

Now, though, a paper in Science magazine has identified a significant wrinkle in this nuclear option: HALEU fuel can theoretically be used to make a fission bomb—a fact that the paper’s authors use to argue for the tightening of regulations governing access to, and transportation of, the material. Among the five authors of the paper, which is titled “The Weapons Potential of High-Assay Low-Enriched Uranium,” is IEEE Life Fellow Richard L. Garwin. Garwin was the key figure behind the design of the thermonuclear bomb, which was tested in 1952.

The Science paper is not the first to argue for a reevaluation of the nuclear proliferation risks of HALEU fuel. A report published last year by the National Academies, “Merits and Viability of Different Nuclear Fuel Cycles and Technology Options and the Waste Aspects of Advanced Nuclear Reactors,” devoted most of a chapter to the risks of HALEU fuel. It reached similar technical conclusions to those of the Science article, but did not go as far in its recommendations regarding the need to tighten regulations.

Why is HALEU fuel concerning?

Conventional wisdom had it that U-235 concentrations below 20 percent were not usable for a bomb. But “we found this testimony in 1984 from the chief of the theoretical division of Los Alamos, who basically confirmed that, yes, indeed, it is usable down to 10 percent,” says R. Scott Kemp of MIT, another of the paper’s authors. “So you don’t even need centrifuges, and that’s what really is important here.”

Centrifuges arranged very painstakingly into cascades are the standard means of enriching uranium to bomb-grade material, and they require scarce and costly resources, expertise, and materials to operate. In fact, the difficulty of building and operating such cascades on an industrial scale has for decades served as an effective barrier to would-be builders of nuclear weapons. So any route to a nuclear weapon that bypassed enrichment would offer an undoubtedly easier alternative. The question now is, how much easier?

“It’s not a very good bomb, but it could explode and wreak all kinds of havoc.”

Adding urgency to that question is an anticipated gold rush in HALEU, after years of quiet U.S. government support. The U.S. Department of Energy is spending billions to expand production of the fuel, including US $150 million awarded in 2022 to a subsidiary of Centrus Energy Corp., the only private company in the United States enriching uranium to HALEU concentrations. (Outside of the United States, only Russia and China are producing HALEU in substantial quantities.) Government support also extends to the companies building the reactors that will use HALEU. Two of the largest reactor startups, TerraPower (backed in part by Bill Gates) and X-Energy, have designed reactors that run on forms of HALEU fuel, and have received billions in funding under a DOE program called Advanced Reactor Demonstration Projects.

The difficulty of building a bomb based on HALEU is a murky subject, because many of the specific techniques and practices of nuclear weapons design are classified. But basic information about the standard type of fission weapon, known as an implosion device, has long been known publicly. (The first two implosion devices were detonated in 1945, one in the Trinity test and the other over Nagasaki, Japan.) An implosion device is based on a hollow sphere of nuclear material. In a modern weapon this material is typically plutonium-239, but it can also be a mixture of uranium isotopes that includes a percentage of U-235 ranging from 100 percent all the way down to, apparently, around 10 percent. The sphere is surrounded by shaped chemical explosives that are exploded simultaneously, creating a shockwave that physically compresses the sphere, reducing the distance between its atoms and increasing the likelihood that neutrons emitted from their nuclei will encounter other nuclei and split them, releasing more neutrons. As the sphere shrinks it goes from a subcritical state, in which that chain reaction of neutrons splitting nuclei and creating other neutrons cannot sustain itself, to a critical state, in which it can. As the sphere continues to compress it achieves supercriticality, after which an injected flood of neutrons triggers the superfast, runaway chain reaction that is a fission explosion. All this happens in less than a millisecond.
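The phrase "less than a millisecond" can be checked with a deliberately crude model of exponential neutron multiplication. Everything below is a toy illustration, not design data: the multiplication factor and generation time are assumed order-of-magnitude values.

```python
import math

# Toy model of a runaway fission chain reaction. Illustrative values only:
k = 2.0          # assumed neutrons per fission that go on to cause another fission
t_gen = 1e-8     # assumed time per fission generation, ~10 nanoseconds

atoms_per_kg = 1000 / 235 * 6.022e23   # ~2.6e24 U-235 nuclei in 1 kg

# Generations for one stray neutron to multiply enough to fission ~1 kg:
generations = math.log(atoms_per_kg) / math.log(k)
total_time_us = generations * t_gen * 1e6
print(f"~{generations:.0f} generations, ~{total_time_us:.1f} microseconds")
# On the order of 80 generations and about a microsecond: comfortably
# "less than a millisecond," as the text says.
```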

The authors of the Science paper had to walk a fine line between not revealing too many details about weapons design while still clearly indicating the scope of the challenge of building a bomb based on HALEU. They acknowledge that the amount of HALEU material needed for a 15-kiloton bomb—roughly as powerful as the one that destroyed Hiroshima during the Second World War—would be relatively large: in the hundreds of kilograms, but not more than 1,000 kg. For comparison, about 8 kg of Pu-239 is sufficient to build a fission bomb of modest sophistication. Any HALEU bomb would be commensurately larger, but still small enough to be deliverable “using an airplane, a delivery van, or a boat sailed into a city harbor,” the authors wrote.

They also acknowledged a key technical challenge for any would-be weapons makers seeking to use HALEU to make a bomb: preinitiation. The large amount of U-238 in the material would produce many neutrons, which would likely result in a nuclear chain reaction occurring too soon. That would sap energy from the subsequent triggered runaway chain reaction, limiting the explosive yield and producing what’s known in the nuclear bomb business as a “fizzle.” However, “although preinitiation may have a bigger impact on some designs than others, even those that are sensitive to it could still produce devastating explosive power,” the authors conclude.
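A rough estimate shows why the U-238 neutron background matters. In the sketch below, the spontaneous-fission half-life is a commonly cited literature value, while the core's U-238 content and the assembly-time window are pure assumptions chosen only to illustrate the scale of the problem.

```python
import math

SECONDS_PER_YEAR = 3.156e7
SF_HALF_LIFE_Y = 8.2e15     # commonly cited U-238 spontaneous-fission half-life (years)
NEUTRONS_PER_SF = 2.0       # assumed neutrons per spontaneous fission
ATOMS_PER_KG = 1000 / 238 * 6.022e23

# Spontaneous-fission neutron emission rate per kilogram of U-238:
rate_per_kg = (math.log(2) / (SF_HALF_LIFE_Y * SECONDS_PER_YEAR)
               * ATOMS_PER_KG * NEUTRONS_PER_SF)
print(f"~{rate_per_kg:.0f} neutrons/s per kg of U-238")   # roughly 14 n/s/kg

u238_mass_kg = 500       # assumed U-238 content of a several-hundred-kg HALEU core
window_s = 1e-4          # assumed near-critical window during implosion assembly

expected = rate_per_kg * u238_mass_kg * window_s
print(f"Expected stray neutrons in the assembly window: ~{expected:.1f}")
# Anything approaching 1 means a too-early start of the chain reaction --
# the "fizzle" problem -- becomes likely.
```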

In other words, “it’s not a very good bomb, but it could explode and wreak all kinds of havoc,” says John Lee, professor emeritus of nuclear engineering at the University of Michigan. Lee was a contributor to the 2023 National Academies report that also considered risks of HALEU fuel and made policy recommendations similar to those of the Science paper.

Critics of that paper argue that the challenges of building a HALEU bomb, while not insurmountable, would stymie a nonstate group. And a national weapons program, which would likely have the resources to surmount them, would not be interested in such a bomb, because of its limitations and relative unreliability.

“That’s why the IAEA [International Atomic Energy Agency], in their wisdom, said, ‘This is not a direct-use material,’” says Steven Nesbit, a nuclear-engineering consultant and past president of the American Nuclear Society, a professional organization. “It’s just not a realistic pathway to a nuclear weapon.”

The Science authors conclude their paper by recommending that the U.S. Congress direct the DOE’s National Nuclear Security Administration (NNSA) to conduct a “fresh review” of the risks posed by HALEU fuel. In response to an email inquiry from IEEE Spectrum, an NNSA spokesman, Craig Branson, replied: “To meet net-zero emissions goals, the United States has prioritized the design, development, and deployment of advanced nuclear technologies, including advanced and small modular reactors. Many will rely on HALEU to achieve smaller designs, longer operating cycles, and increased efficiencies over current technologies. They will be essential to our efforts to decarbonize while meeting growing energy demand. As these technologies move forward, the Department of Energy and NNSA have programs to work with willing industrial partners to assess the risk and enhance the safety, security, and safeguards of their designs.”

The Science authors also called on the U.S. Nuclear Regulatory Commission (NRC) and the IAEA to change the way they categorize HALEU fuel. Under the NRC’s current categorization, even large quantities of HALEU are now considered category II, which means that security measures focus on the early detection of theft. The authors want weapons-relevant quantities of HALEU reclassified as category I, the same as for quantities of weapons-grade plutonium or highly enriched uranium sufficient to make a bomb. Category I would require much tighter security, focusing on the prevention of theft.

Nesbit scoffs at the proposal, citing the difficulties of heisting perhaps a metric tonne of nuclear material. “Blindly applying all of the baggage that goes with protecting nuclear weapons to something like this is just way overboard,” he says.

But Lee, who performed experiments with HALEU fuel in the 1980s, agrees with his colleagues. “Dick Garwin and Frank von Hippel [and the other authors of the Science paper] have raised some proper questions,” he declares. “They’re saying the NRC should take more precautions. I’m all for that.”


Where To Find Elden Ring: Shadow Of The Erdtree's Star-Lined Sword

By Billy Givens, Kotaku | June 21, 2024, 21:49

The Star-Lined Sword is a Katana that can be found in Elden Ring’s Shadow of the Erdtree expansion. It’s an excellent option for DEX/INT builds and features a cool move set that keeps things interesting in combat, especially if you make good use of its unique Ash of War. If you like to wield katanas and look flashy…



Elden Ring: Shadow Of The Erdtree: Where To Find The Great Katana

By Billy Givens, Kotaku | June 21, 2024, 18:07

The Great Katana is, well, a Great Katana found in Elden Ring’s Shadow of the Erdtree expansion. This heavy weapon is a powerful option for those who enjoy using katanas but want to be able to stance-break more efficiently and dish out more damage at the same time. You know what they say: Go big or go home, right?…



Democrats Surprised To Learn Bombs Are Used To Bomb People

By Matthew Petti, Reason | May 29, 2024, 18:30
U.S. Air Force Staff Sgt. Phan Huy, a weapons team crew chief of the 57th Wing Maintenance Group, loads GBU-39 small diameter bombs onto an A-10C Thunderbolt II, assigned to the 422nd Test and Evaluation Squadron, at Nellis Air Force Base, Nevada, Oct. 24, 2023. This aircraft can hold up to 16 GBU-39 bombs on four designated weapons racks or an assortment of other munitions to broaden mission capabilities. | U.S. Air Force photo by Airman 1st Class Timothy Perish

Bombs kill people. When someone provides bombs to a government at war, those weapons will be used to kill people. It's a simple fact but one that seems to have eluded Democrats.

After voting to send bombs to the Israeli military, Sen. Elizabeth Warren (D–Mass.) condemned the Israeli military for killing Palestinian civilians with an American-made bomb. And after urging the Israeli military to use smaller munitions, the Biden administration found itself scrambling to deal with a mass civilian casualty event caused by one of those smaller weapons.

On Sunday, the Israeli Air Force bombed Tel al-Sultan, a neighborhood of Rafah that Israel had previously designated a safe zone for fleeing civilians. The Israeli government claimed the airstrike successfully killed two senior Hamas commanders. But a fire started by the bomb spread through the densely packed tent city, burning to death at least 45 people, including 12 women, eight children, and three elderly people. Israeli Prime Minister Benjamin Netanyahu stated that the civilian deaths were a "tragic mistake."

British doctor James Smith called the fire "one of the most horrific things that I have seen or heard of in all of the weeks that I've been working in Gaza." CNN found pieces of a GBU-39/B Small Diameter Bomb, a type of 250-pound bomb that the U.S. military had rush-shipped to Israel following the Hamas attacks last October, with serial numbers from a California manufacturer.

"The Israeli bombing of a refugee camp inside a designated safe zone is horrific," Warren stated on social media. "Israel has a duty to protect innocent civilians and Palestinians seeking shelter in Rafah have nowhere safe to go. Netanyahu's assault of Rafah must stop. We need an immediate cease-fire."

Last month, Warren had voted for a $26.38 billion U.S. military aid package to Israel, as Rep. Thomas Massie (R–Ky.) pointed out. "Ma'am, you voted to send those bombs to Israel," he wrote in a response to Warren's statement.

Warren's office did not respond to a request for comment. In a statement last month, Warren noted that she voted for the aid package after the Biden administration agreed to certify that every military receiving U.S. aid "follows international law, protects civilians in war zones and allows for humanitarian aid."

On May 10, the administration ruled that there are "reasonable" accusations that Israel breaks the laws of war but that the Israeli government gave "credible and reliable" assurances about how it plans to use U.S. weapons. President Joe Biden also said that he would not be "supplying the weapons" for an Israeli invasion of Rafah that threatened the civilian population and held up a shipment of Mark 80 series bombs, which were responsible for some of the worst mass-casualty attacks in Gaza.

At a Senate hearing earlier this month, Secretary of Defense Lloyd Austin presented the GBU-39/B Small Diameter Bomb as a safer alternative to the Mark 80 series: "A Small Diameter Bomb, which is a precision weapon, that's very useful in a dense, built-up environment, but maybe not so much a 2,000-pound bomb that could create a lot of collateral damage."

Last October, the Israeli military used two American-made 2,000-pound bombs to assassinate a Hamas commander, killing dozens of civilians in the Jabaliya refugee camp.

Austin is right that 2,000-pound bombs, which can kill everything within 600 feet, are more likely to harm bystanders than lighter alternatives. And as the name suggests, the Small Diameter Bomb has a smaller lethal radius. However, that doesn't make the bombs any less lethal for people inside the radius—or people caught up in secondary fires caused by the weapon.
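Austin's comparison lines up with the cube-root scaling rule commonly used in blast analysis, under which damage radii grow with the cube root of charge weight. A minimal sketch, taking the article's 600-foot figure for a 2,000-pound bomb as the reference point:

```python
# Cube-root (Hopkinson-Cranz) blast scaling: damage radius ~ weight**(1/3).
REF_WEIGHT_LB = 2000   # reference bomb weight
REF_RADIUS_FT = 600    # the article's lethal-radius figure for a 2,000-lb bomb

def scaled_radius(weight_lb):
    """Estimated lethal radius at another bomb weight, by cube-root scaling."""
    return REF_RADIUS_FT * (weight_lb / REF_WEIGHT_LB) ** (1 / 3)

print(f"GBU-39 (250 lb): ~{scaled_radius(250):.0f} ft")   # ~300 ft: smaller, not safe
```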

Much of the Israeli army's "precision" targeting is carried out by artificial intelligence programs. The Israeli publication +972 Magazine has reported that one AI targeting system called "Lavender" is allowed to kill a large number of civilians per Hamas fighter, and is believed to have a 10 percent error rate when identifying fighters in the first place.

Another program revealed by +972, called "Where's Daddy," targets Hamas fighters who have left the battlefield and gone home to their families.

In other words, the type of weapon matters, but how the weapon is used matters more. Despite Biden's earlier threats and assurances over human rights, the Biden administration is keen to defer to Israeli claims.

"As a result of this strike on Sunday, I have no policy changes to speak to," White House spokesman John Kirby said on Tuesday. "It just happened. The Israelis are going to investigate it. We're going to be taking great interest in what they find in that investigation. And we'll see where it goes from there."

The post Democrats Surprised To Learn Bombs Are Used To Bomb People appeared first on Reason.com.


World War III May Already Have Started—in the Shadows

By J.D. Tuccille, Reason | May 17, 2024, 13:00
Russian President Vladimir Putin is seen at a military parade | Kommersant Photo Agency/Kommersant/Newscom

Britain's signals intelligence spy chief raised eyebrows this week with warnings that Russia is coordinating both cyberattacks and physical acts of sabotage against the West. There's evidence to back her claims—and the West may be returning the favor. Coming soon after FBI Director Christopher Wray warned that China is targeting American infrastructure, it looks like the world is not only fracturing once again, but that the hostile blocs are engaged in covert warfare.

Rumors of War

"We are increasingly concerned about growing links between the Russian intelligence services and proxy groups to conduct cyberattacks as well as suspected physical surveillance and sabotage operations," Government Communications Headquarters (GCHQ) Director Anne Keast-Butler told an audience at the United Kingdom government-sponsored CyberUK 2024 conference. "Before, Russia simply created the right environments for these groups to operate, but now they are nurturing and inspiring these non-state cyber actors in some cases seemingly coordinating physical attacks against the West."

Keast-Butler, whose agency is comparable to the U.S. National Security Agency (NSA), also called out China, Iran, and North Korea as cybersecurity dangers. But naming Russian officials as being behind "physical attacks" raises the stakes. Sadly, her claims are well-founded.

Sabotage, Espionage, and Other Mischief

"A 20-year-old British man has been charged with masterminding an arson plot against a Ukrainian-linked target in London for the benefit of the Russian state," CBS News reported last month. That wasn't an isolated incident.

"In April alone a clutch of alleged pro-Russian saboteurs were detained across the continent," The Economist noted May 12 in describing what it called a "shadow war" between East and West. "Germany arrested two German-Russian dual nationals on suspicion of plotting attacks on American military facilities and other targets on behalf of the GRU, Russia's military intelligence agency. Poland arrested a man who was preparing to pass the GRU information on Rzeszow airport, the most important hub for military aid to Ukraine. Britain charged several men over an earlier arson attack in March on a Ukrainian-owned logistics firm in London whose Spanish depot was also targeted."

The GCHQ chief's warnings coupled with reality on the ground are alarming in themselves. Worse, they come after FBI Director Christopher Wray issued similar cautions in April about China.

"The PRC [People's Republic of China] has made it clear that it considers every sector that makes our society run as fair game in its bid to dominate on the world stage, and that its plan is to land low blows against civilian infrastructure to try to induce panic and break America's will to resist," Wray told the Vanderbilt Summit on Modern Conflict and Emerging Threats in Nashville, Tennessee.

Wray clarified that, by "infrastructure," he meant "everything from water treatment facilities and energy grids to transportation and information technology."

If that doesn't make you want to check that your pantry is stocked and that the water filter and generator are in working order, nothing will.

A Game Both Sides Can Play

Of course, in war of any sort, the implication is that both sides are involved in conflict. Western intelligence officials are loud in their warnings about foreign threats, but less open regarding just what their own operatives might be doing in Russia, China, and elsewhere. Still, there's evidence that this is hardly a one-sided war, shadowy though it may be.

In June 2022, The New York Times reported that Ukraine's defensive efforts relied heavily on "a stealthy network of commandos and spies rushing to provide weapons, intelligence and training." In addition to Americans, the story noted, "commandos from other NATO countries, including Britain, France, Canada and Lithuania, also have been working inside Ukraine."

American journalist and combat veteran Jack Murphy goes further, claiming the CIA, working through an allied spy service, "is responsible for many of the unexplained explosions and other mishaps that have befallen the Russian military industrial complex." The targets include "railway bridges, fuel depots and power plants," he adds.

And if you wonder who blew up Nord Stream 1 and 2, well, so do a lot of people. Russia was initially accused, but it didn't make a lot of sense for the country's forces to destroy pipelines that generated revenue and fed western dependence on Russian natural gas. Since then, Denmark and Sweden have closed inconclusive investigations, journalist Seymour Hersh blamed American officials, and a report by Der Spiegel and The Washington Post placed responsibility on a rogue Ukrainian military officer.

The Wider War Is Here

Taken all together, the warnings from Keast-Butler and Wray, as well as acts of sabotage and arrests of foreign agents, suggest that fears of a wider war resulting from Russia's continuing invasion of Ukraine may miss the point; the war could already be here. People looking for tanks and troops are overlooking cyber intrusions, arson, bombings, and other low-level mayhem.

"Russia is definitely at war with the West," Oleksandr Danylyuk of the Royal United Services Institute, a British defense and security think tank, told NBC News earlier this week.

Russian officials seem to embrace that understanding, with Kremlin spokesman Dmitry Peskov commenting in March that the invasion of Ukraine, originally referred to by the euphemism "special military operation," is now more serious. "It has become a war for us as the collective West more and more directly increases its level of involvement in the conflict," he said.

Fortunately, a shadow war of the sort around us is less destructive than open military conflict, especially when the hostilities involve nuclear-armed powers. It's far better that spies hack the email accounts of government officials, as happened in the case of a Russian cyberattack on Germany's ruling Social Democrats, than that cities burn. But civilians still must live with the consequences of combatants attempting to do each other harm—particularly when the harm is to infrastructure on which regular people rely.

So, welcome to the world of global shadow war. Try not to become collateral damage.

The post World War III May Already Have Started—in the Shadows appeared first on Reason.com.


Will Human Soldiers Ever Trust Their Robot Comrades?

By Roberto J. González, IEEE Spectrum | April 27, 2024, 17:00


Editor’s note: This article is adapted from the author’s book War Virtually: The Quest to Automate Conflict, Militarize Data, and Predict the Future (University of California Press, published in paperback April 2024).

The blistering late-afternoon wind ripped across Camp Taji, a sprawling U.S. military base just north of Baghdad. In a desolate corner of the outpost, where the feared Iraqi Republican Guard had once manufactured mustard gas, nerve agents, and other chemical weapons, a group of American soldiers and Marines were solemnly gathered around an open grave, dripping sweat in the 114-degree heat. They were paying their final respects to Boomer, a fallen comrade who had been an indispensable part of their team for years. Just days earlier, he had been blown apart by a roadside bomb.

As a bugle mournfully sounded the last few notes of “Taps,” a soldier raised his rifle and fired a long series of volleys—a 21-gun salute. The troops, which included members of an elite army unit specializing in explosive ordnance disposal (EOD), had decorated Boomer posthumously with a Bronze Star and a Purple Heart. With the help of human operators, the diminutive remote-controlled robot had protected American military personnel from harm by finding and disarming hidden explosives.

Boomer was a Multi-function Agile Remote-Controlled robot, or MARCbot, manufactured by a Silicon Valley company called Exponent. Weighing in at just over 30 pounds, MARCbots look like a cross between a Hollywood camera dolly and an oversized Tonka truck. Despite their toylike appearance, the devices often leave a lasting impression on those who work with them. In an online discussion about EOD support robots, one soldier wrote, “Those little bastards can develop a personality, and they save so many lives.” An infantryman responded by admitting, “We liked those EOD robots. I can’t blame you for giving your guy a proper burial, he helped keep a lot of people safe and did a job that most people wouldn’t want to do.”

A Navy unit used a remote-controlled vehicle with a mounted video camera in 2009 to investigate suspicious areas in southern Afghanistan. | Mass Communication Specialist 2nd Class Patrick W. Mullen III/U.S. Navy

But while some EOD teams established warm emotional bonds with their robots, others loathed the machines, especially when they malfunctioned. Take, for example, this case described by a Marine who served in Iraq:

My team once had a robot that was obnoxious. It would frequently accelerate for no reason, steer whichever way it wanted, stop, etc. This often resulted in this stupid thing driving itself into a ditch right next to a suspected IED. So of course then we had to call EOD [personnel] out and waste their time and ours all because of this stupid little robot. Every time it beached itself next to a bomb, which was at least two or three times a week, we had to do this. Then one day we saw yet another IED. We drove him straight over the pressure plate, and blew the stupid little sh*thead of a robot to pieces. All in all a good day.

Some battle-hardened warriors treat remote-controlled devices like brave, loyal, intelligent pets, while others describe them as clumsy, stubborn clods. Either way, observers have interpreted these accounts as unsettling glimpses of a future in which men and women ascribe personalities to artificially intelligent war machines.

Some battle-hardened warriors treat remote-controlled devices like brave, loyal, intelligent pets, while others describe them as clumsy, stubborn clods.

From this perspective, what makes robot funerals unnerving is the idea of an emotional slippery slope. If soldiers are bonding with clunky pieces of remote-controlled hardware, what are the prospects of humans forming emotional attachments with machines once they’re more autonomous in nature, nuanced in behavior, and anthropoid in form? And a more troubling question arises: On the battlefield, will Homo sapiens be capable of dehumanizing members of its own species (as it has for centuries), even as it simultaneously humanizes the robots sent to kill them?

As I’ll explain, the Pentagon has a vision of a warfighting force in which humans and robots work together in tight collaborative units. But to achieve that vision, it has called in reinforcements: “trust engineers” who are diligently helping the Department of Defense (DOD) find ways of rewiring human attitudes toward machines. You could say that they want more soldiers to play “Taps” for their robot helpers and fewer to delight in blowing them up.

The Pentagon’s Push for Robotics

For the better part of a decade, several influential Pentagon officials have relentlessly promoted robotic technologies, promising a future in which “humans will form integrated teams with nearly fully autonomous unmanned systems, capable of carrying out operations in contested environments.”

Soldiers test a vertical take-off-and-landing drone at Fort Campbell, Ky., in 2020. | U.S. Army

As The New York Times reported in 2016: “Almost unnoticed outside defense circles, the Pentagon has put artificial intelligence at the center of its strategy to maintain the United States’ position as the world’s dominant military power.” The U.S. government is spending staggering sums to advance these technologies: For fiscal year 2019, the U.S. Congress was projected to provide the DOD with US $9.6 billion to fund uncrewed and robotic systems—significantly more than the annual budget of the entire National Science Foundation.

Arguments supporting the expansion of autonomous systems are consistent and predictable: The machines will keep our troops safe because they can perform dull, dirty, dangerous tasks; they will result in fewer civilian casualties, since robots will be able to identify enemies with greater precision than humans can; they will be cost-effective and efficient, allowing more to get done with less; and the devices will allow us to stay ahead of China, which, according to some experts, will soon surpass America’s technological capabilities.

Former U.S. deputy defense secretary Robert O. Work has argued for more automation within the military. | Center for a New American Security

Among the most outspoken advocates of a roboticized military is Robert O. Work, who was nominated by President Barack Obama in 2014 to serve as deputy defense secretary. Speaking at a 2015 defense forum, Work—a barrel-chested retired Marine Corps colonel with the slight hint of a drawl—described a future in which “human-machine collaboration” would win wars using big-data analytics. He used the example of Lockheed Martin’s newest stealth fighter to illustrate his point: “The F-35 is not a fighter plane, it is a flying sensor computer that sucks in an enormous amount of data, correlates it, analyzes it, and displays it to the pilot on his helmet.”

The beginning of Work’s speech was measured and technical, but by the end it was full of swagger. To drive home his point, he described a ground combat scenario. “I’m telling you right now,” Work told the rapt audience, “10 years from now if the first person through a breach isn’t a friggin’ robot, shame on us.”

“The debate within the military is no longer about whether to build autonomous weapons but how much independence to give them,” said a 2016 New York Times article. The rhetoric surrounding robotic and autonomous weapon systems is remarkably similar to that of Silicon Valley, where charismatic CEOs, technology gurus, and sycophantic pundits have relentlessly hyped artificial intelligence.

For example, in 2016, the Defense Science Board—a group of appointed civilian scientists tasked with giving advice to the DOD on technical matters—released a report titled “Summer Study on Autonomy.” Significantly, the report wasn’t written to weigh the pros and cons of autonomous battlefield technologies; instead, the group assumed that such systems will inevitably be deployed. Among other things, the report included “focused recommendations to improve the future adoption and use of autonomous systems [and] example projects intended to demonstrate the range of benefits of autonomy for the warfighter.”

What Exactly Is a Robot Soldier?

The author’s book, War Virtually, is a critical look at how the U.S. military is weaponizing technology and data. | University of California Press

Early in the 20th century, military and intelligence agencies began developing robotic systems, which were mostly devices remotely operated by human controllers. But microchips, portable computers, the Internet, smartphones, and other developments have supercharged the pace of innovation. So, too, has the ready availability of colossal amounts of data from electronic sources and sensors of all kinds. The Financial Times reports: “The advance of artificial intelligence brings with it the prospect of robot-soldiers battling alongside humans—and one day eclipsing them altogether.” These transformations aren’t inevitable, but they may become a self-fulfilling prophecy.

All of this raises the question: What exactly is a “robot-soldier”? Is it a remote-controlled, armor-clad box on wheels, entirely reliant on explicit, continuous human commands for direction? Is it a device that can be activated and left to operate semiautonomously, with a limited degree of human oversight or intervention? Is it a droid capable of selecting targets (using facial-recognition software or other forms of artificial intelligence) and initiating attacks without human involvement? There are hundreds, if not thousands, of possible technological configurations lying between remote control and full autonomy—and these differences affect ideas about who bears responsibility for a robot’s actions.

The U.S. military’s experimental and actual robotic and autonomous systems include a vast array of artifacts that rely on either remote control or artificial intelligence: aerial drones; ground vehicles of all kinds; sleek warships and submarines; automated missiles; and robots of various shapes and sizes—bipedal androids, quadrupedal gadgets that trot like dogs or mules, insectile swarming machines, and streamlined aquatic devices resembling fish, mollusks, or crustaceans, to name a few.

Members of a U.S. Air Force squadron test out an agile and rugged quadruped robot from Ghost Robotics in 2023. | Airman First Class Isaiah Pedrazzini/U.S. Air Force

The transitions projected by military planners suggest that servicemen and servicewomen are in the midst of a three-phase evolutionary process, which begins with remote-controlled robots, in which humans are “in the loop,” then proceeds to semiautonomous and supervised autonomous systems, in which humans are “on the loop,” and then concludes with the adoption of fully autonomous systems, in which humans are “out of the loop.” At the moment, much of the debate in military circles has to do with the degree to which automated systems should allow—or require—human intervention.
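The in-the-loop, on-the-loop, and out-of-the-loop phases map neatly onto a control-flow sketch. The code below is hypothetical, written only to show where the human checkpoint sits in each phase; it is not any real weapon-control API.

```python
from enum import Enum, auto

class Autonomy(Enum):
    IN_THE_LOOP = auto()    # human issues every command (remote control)
    ON_THE_LOOP = auto()    # system acts; a human supervises and can veto
    OUT_OF_LOOP = auto()    # system acts with no human checkpoint

def engage(target, mode, human_approves, human_vetoes):
    """Hypothetical illustration of where the human checkpoint sits in each phase."""
    if mode is Autonomy.IN_THE_LOOP:
        return human_approves(target)     # nothing happens without explicit approval
    if mode is Autonomy.ON_THE_LOOP:
        return not human_vetoes(target)   # proceeds unless a supervisor intervenes
    return True                           # out of the loop: fully autonomous

# Example: an on-the-loop engagement is stopped by a supervisor's veto.
print(engage("test", Autonomy.ON_THE_LOOP,
             human_approves=lambda t: False,
             human_vetoes=lambda t: True))   # False: the veto held
```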

“Ten years from now if the first person through a breach isn’t a friggin’ robot, shame on us.” —Robert O. Work

In recent years, much of the hype has centered around that second stage: semiautonomous and supervised autonomous systems that DOD officials refer to as “human-machine teaming.” This idea suddenly appeared in Pentagon publications and official statements after the summer of 2015. The timing probably wasn’t accidental; it came at a time when global news outlets were focusing attention on a public backlash against lethal autonomous weapon systems. The Campaign to Stop Killer Robots was launched in April 2013 as a coalition of nonprofit and civil society organizations, including the International Committee for Robot Arms Control, Amnesty International, and Human Rights Watch. In July 2015, the campaign released an open letter warning of a robotic arms race and calling for a ban on the technologies. Cosigners included world-renowned physicist Stephen Hawking, Tesla founder Elon Musk, Apple cofounder Steve Wozniak, and thousands more.

In November 2015, Work gave a high-profile speech on the importance of human-machine teaming, perhaps hoping to defuse the growing criticism of “killer robots.” According to one account, Work’s vision was one in which “computers will fly the missiles, aim the lasers, jam the signals, read the sensors, and pull all the data together over a network, putting it into an intuitive interface humans can read, understand, and use to command the mission”—but humans would still be in the mix, “using the machine to make the human make better decisions.” From this point forward, the military branches accelerated their drive toward human-machine teaming.

The Doubt in the Machine

But there was a problem. Military experts loved the idea, touting it as a win-win: Paul Scharre, in his book Army of None: Autonomous Weapons and the Future of War, claimed that “we don’t need to give up the benefits of human judgment to get the advantages of automation, we can have our cake and eat it too.” However, personnel on the ground expressed—and continue to express—deep misgivings about the side effects of the Pentagon’s newest war machines.

The difficulty, it seems, is humans’ lack of trust. The engineering challenges of creating robotic weapon systems are relatively straightforward, but the social and psychological challenges of convincing humans to place their faith in the machines are bewilderingly complex. In high-stakes, high-pressure situations like military combat, human confidence in autonomous systems can quickly vanish. The Pentagon’s Defense Systems Information Analysis Center Journal noted that although the prospects for combined human-machine teams are promising, humans will need assurances:

[T]he battlefield is fluid, dynamic, and dangerous. As a result, warfighter demands become exceedingly complex, especially since the potential costs of failure are unacceptable. The prospect of lethal autonomy adds even greater complexity to the problem [in that] warfighters will have no prior experience with similar systems. Developers will be forced to build trust almost from scratch.

In a 2015 article, U.S. Navy Commander Greg Smith provided a candid assessment of aviators’ distrust in aerial drones. After describing how drones are often intentionally separated from crewed aircraft, Smith noted that operators sometimes lose communication with their drones and may inadvertently bring them perilously close to crewed airplanes, which “raises the hair on the back of an aviator’s neck.” He concluded:

[I]n 2010, one task force commander grounded his manned aircraft at a remote operating location until he was assured that the local control tower and UAV [unmanned aerial vehicle] operators located halfway around the world would improve procedural compliance. Anecdotes like these abound…. After nearly a decade of sharing the skies with UAVs, most naval aviators no longer believe that UAVs are trying to kill them, but one should not confuse this sentiment with trusting the platform, technology, or [drone] operators.

U.S. Marines [top] prepare to launch and operate an MQ-9A Reaper drone in 2021. The Reaper [bottom] is designed for both high-altitude surveillance and destroying targets. | Top: Lance Cpl. Gabrielle Sanders/U.S. Marine Corps; Bottom: 1st Lt. John Coppola/U.S. Marine Corps

Yet Pentagon leaders place an almost superstitious trust in those systems, and seem firmly convinced that a lack of human confidence in autonomous systems can be overcome with engineered solutions. In a commentary, Courtney Soboleski, a data scientist employed by the military contractor Booz Allen Hamilton, makes the case for mobilizing social science as a tool for overcoming soldiers’ lack of trust in robotic systems.

The problem with adding a machine into military teaming arrangements is not doctrinal or numeric…it is psychological. It is rethinking the instinctual threshold required for trust to exist between the soldier and machine.… The real hurdle lies in surpassing the individual psychological and sociological barriers to assumption of risk presented by algorithmic warfare. To do so requires a rewiring of military culture across several mental and emotional domains.… AI [artificial intelligence] trainers should partner with traditional military subject matter experts to develop the psychological feelings of safety not inherently tangible in new technology. Through this exchange, soldiers will develop the same instinctual trust natural to the human-human war-fighting paradigm with machines.

The Military’s Trust Engineers Go to Work

Soon, the wary warfighter will likely be subjected to new forms of training that focus on building trust between robots and humans. Already, robots are being programmed to communicate in more human ways with their users for the explicit purpose of increasing trust. And projects are currently underway to help military robots report their deficiencies to humans in given situations, and to alter their functionality according to the machine’s perception of the user’s emotional state.

At the DEVCOM Army Research Laboratory, military psychologists have spent more than a decade on human experiments related to trust in machines. Among the most prolific is Jessie Chen, who joined the lab in 2003. Chen lives and breathes robotics—specifically “agent teaming” research, a field that examines how robots can be integrated into groups with humans. Her experiments test how humans’ lack of trust in robotic and autonomous systems can be overcome—or at least minimized.

For example, in one set of tests, Chen and her colleagues deployed a small ground robot called an Autonomous Squad Member that interacted and communicated with infantrymen. The researchers varied “situation-awareness-based agent transparency”—that is, the robot’s self-reported information about its plans, motivations, and predicted outcomes—and found that human trust in the robot increased when the autonomous “agent” was more transparent or honest about its intentions.
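Chen's manipulation of situation-awareness-based agent transparency can be pictured as a layered status report, where higher transparency levels expose more of the agent's plans, reasoning, and predictions. The sketch below is hypothetical: the class, field names, and message format are an illustration loosely following that idea, not the lab's actual software.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TransparencyReport:
    """What an autonomous 'squad member' discloses to its human teammates."""
    action: str                       # level 1: what the agent is doing and plans to do
    reasoning: Optional[str] = None   # level 2: why -- its motivations and constraints
    prediction: Optional[str] = None  # level 3: projected outcome and its uncertainty

    def render(self):
        parts = [f"PLAN: {self.action}"]
        if self.reasoning:
            parts.append(f"BECAUSE: {self.reasoning}")
        if self.prediction:
            parts.append(f"EXPECT: {self.prediction}")
        return " | ".join(parts)

# The more of these layers the robot fills in, the more "transparent" it is;
# in Chen's experiments, fuller disclosure tended to increase human trust.
print(TransparencyReport(
    action="Reroute around obstacle to rally point B",
    reasoning="Primary route blocked; fuel margin low",
    prediction="Arrival in 6 min, 80% confidence",
).render())
```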

The Army isn’t the only branch of the armed services researching human trust in robots. The U.S. Air Force Research Laboratory recently had an entire group dedicated to the subject: the Human Trust and Interaction Branch, part of the lab’s 711th Human Performance Wing, located at Wright-Patterson Air Force Base, in Ohio.

In 2015, the Air Force began soliciting proposals for “research on how to harness the socio-emotional elements of interpersonal team/trust dynamics and inject them into human-robot teams.” Mark Draper, a principal engineering research psychologist at the Air Force lab, is optimistic about the prospects of human-machine teaming: “As autonomy becomes more trusted, as it becomes more capable, then the Airmen can start off-loading more decision-making capability on the autonomy, and autonomy can exercise increasingly important levels of decision-making.”

Air Force researchers are attempting to dissect the determinants of human trust. In one project, they examined the relationship between a person’s personality profile (measured using the so-called Big Five personality traits: openness, conscientiousness, extraversion, agreeableness, neuroticism) and his or her tendency to trust. In another experiment, entitled “Trusting Robocop: Gender-Based Effects on Trust of an Autonomous Robot,” Air Force scientists compared male and female research subjects’ levels of trust by showing them a video depicting a guard robot. The robot was armed with a Taser, interacted with people, and eventually used the Taser on one. Researchers designed the scenario to create uncertainty about whether the robot or the humans were to blame. By surveying research subjects, the scientists found that women reported higher levels of trust in “Robocop” than men.

The issue of trust in autonomous systems has even led the Air Force’s chief scientist to suggest ideas for increasing human confidence in the machines, ranging from better android manners to robots that look more like people, under the principle that

good HFE [human factors engineering] design should help support ease of interaction between humans and AS [autonomous systems]. For example, better “etiquette” often equates to better performance, causing a more seamless interaction. This occurs, for example, when an AS avoids interrupting its human teammate during a high workload situation or cues the human that it is about to interrupt—activities that, surprisingly, can improve performance independent of the actual reliability of the system. To an extent, anthropomorphism can also improve human-AS interaction, since people often trust agents endowed with more humanlike features…[but] anthropomorphism can also induce overtrust.

It’s impossible to know the degree to which the trust engineers will succeed in achieving their objectives. For decades, military trainers have trained and prepared newly enlisted men and women to kill other people. If specialists have developed simple psychological techniques to overcome the soldier’s deeply ingrained aversion to destroying human life, is it possible that someday, the warfighter might also be persuaded to unquestioningly place his or her trust in robots?
