Harris Joins the FTC's Food Fight Against Kroger-Albertsons Merger

17 August 2024 at 13:00
(Photo: © Ken Wolter | Dreamstime.com)

Amid all the competing headlines of the 2024 election, there may be no more bread-and-butter issue—literally—than how much Americans are paying to put food on their tables. The GOP is gearing up to attack the Biden-Harris administration for escalating grocery store bills, while presumptive Democratic nominee Kamala Harris has now responded with her own plan to fight higher food prices. 

One of the hottest items in this political food fight is unquestionably the ongoing litigation from the Federal Trade Commission (FTC) attempting to block the Kroger-Albertsons grocery store merger. A host of Democratic lawmakers recently joined the legal fight, arguing that any potential merger would raise prices, increase food deserts, and disproportionately hurt unionized labor. As part of her new food price plan, Harris included a call for aggressive antitrust crackdowns in the food and grocery industry, mentioning the Kroger-Albertsons merger by name in her speech this week.

None of the arguments against the merger make much sense on the merits, but the FTC—and the Democratic Party writ large—are stacking the legal deck to achieve a predetermined outcome that conveniently aligns with their policy priorities.

The saga started back in October 2022, when The Kroger Company and Albertsons Companies Inc. (the parent company for popular grocery chains like Safeway and Acme, among others) announced their plans for a $24.6 billion merger. The FTC promptly launched a 16-month investigation, culminating in a lawsuit in federal court to block the proposed merger.

Kroger is the fourth-largest grocery store chain in America—behind Walmart, Amazon, and Costco—and Albertsons is the fifth-largest. Once merged, the combined company would rise to third on the list. On the surface, this may seem to provide some support for the FTC's position, but American shoppers would be wise to read the fine print.  

In truth, if the deal were to proceed, a merged version of Kroger and Albertsons would still only make up 9 percent of overall grocery sales. To put this in further perspective, consider that Walmart—the nation's largest grocery provider—would continue to operate more stores (including its Sam's Club outlets) than a Kroger-Albertsons combo and maintain grocery revenue that is more than twice that of the merged company. 

One could easily argue, in other words, that far from being a monopoly, a Kroger-Albertsons joint venture would be the best hedge against potential monopolies forming among the even-more-dominant firms above it on the grocery store food chain. But incredibly, the FTC pretends that two of those larger companies don't exist in the marketplace at all, simply by crafting its own market definitions.

The FTC contends that only local brick-and-mortar supermarkets (what one might think of as a "traditional" grocery store) and hypermarkets (such as Walmart or Target, which sell groceries alongside other goods) count in the market for groceries. This narrow definition completely excludes wholesale-club stores (such as Costco) and e-commerce companies that sell groceries (such as Amazon). 

Given that Amazon and Costco just happen to be the second- and third-largest grocery retailers in the United States, the agency is blatantly gerrymandering the definition of the marketplace. The agency's longstanding position is that the only relevant market is stores where consumers can buy all or nearly all of their weekly groceries, which raises the question: Has anyone at the FTC set foot inside a Costco recently? Many Americans use club stores like Costco and BJ's Wholesale Club as their primary grocery stores, with around 15 percent of Americans ages 18–34 reporting that they do most of their grocery shopping at Costco.  

Pretending that the internet doesn't exist makes even less sense. As the International Center for Law and Economics notes, 25 years ago a mere 10,000 households took part in online shopping, whereas today 12.5 percent of households (over 16 million) purchase their groceries "mostly or exclusively" online. Amazon is also preparing to make a big push into brick-and-mortar grocery retailing, with CEO Andy Jassy saying last year that the company must "find a mass grocery format that we believe is worth expanding broadly."

Beyond the FTC's tortured marketplace definitions, its arguments for the alleged harms of a conjoined Kroger-Albertsons are equal parts unconvincing and outdated. In its complaint, the agency points to escalating grocery prices in recent years, and Harris echoed this by stating that she would enact a "ban on price gouging on food and groceries" by directing the FTC to impose "harsh penalties" on grocers. She also pledged to continue aggressive antitrust enforcement in the food sector, going so far as to highlight the Kroger-Albertsons merger as an example of the type of deal that could increase prices. However, as many commentators have pointed out, food price increases likely have more to do with inflation than any lack of competition in grocery markets.

In addition to the consumer price harms it alleges, over half of the FTC's legal complaint focuses on the harm the proposed merger would supposedly cause to the unionized workers at Kroger and Albertsons. Both companies are heavily unionized—in contrast to Walmart and Amazon—and the agency claims that a combined company would have more leverage over unions, given that the unions would no longer be able to play one company off against the other as a negotiating tactic. This glosses over the fact that competition for labor is particularly intense in the retail sector broadly, and workers could easily jump ship to a different employer in the face of any exploitative terms pushed by the merged firm.

A final concern highlighted by some Democratic lawmakers is that a merged company could close more stores, creating food deserts—areas with few or no grocery options. Once again, this ignores the rise of club stores like Costco and of online and home-delivery grocery options. These alternatives shrink the areas where such food deserts can plausibly take hold, showing once again a poor understanding of the modern grocery marketplace.

Despite the many dubious underpinnings of the FTC's challenge, it fits with the Biden administration's aggressive antitrust emphasis over the past four years. While some observers were holding out hope that a Harris administration might curtail overzealous antitrust enforcement, her new food price agenda has poured cold water all over that (already wishful) thinking.

The post Harris Joins the FTC's Food Fight Against Kroger-Albertsons Merger appeared first on Reason.com.

A federal judge rules that Google is in violation of antitrust laws

5 August 2024 at 23:34

Did you know that Google has been embroiled in a legal battle since 2020? You'd be forgiven if you didn't, as it hasn't been at the forefront of the news for a while now. However, the case has finally wrapped up with a pretty big result; a federal judge has found that Google violated antitrust laws in the suit brought by the US Department of Justice.


Jim Jordan Demands Advertisers Explain Why They Don’t Advertise On MAGA Media Sites

2 August 2024 at 18:20

Remember last month when ExTwitter excitedly “rejoined GARM” (the Global Alliance for Responsible Media, an advertising consortium focused on brand safety)? And then, a week later, after Rep. Jim Jordan released a misleading report about GARM, Elon Musk said he was going to sue GARM and hoped criminal investigations would be opened?

Unsurprisingly, Jordan has now ratcheted things up a notch by sending investigative demands to a long list of top advertisers associated with GARM. The letter effectively accuses these advertisers of antitrust violations for choosing not to advertise on conservative media sites, based on GARM’s recommendations on how to best protect brand safety.

The link there shows all the letters, but we’ll just stick with the first one, to Adidas. The letter doesn’t make any demands specifically about ExTwitter, but does name the GOP’s favorite media sites, and demands to know whether any of these advertisers agreed not to advertise on those properties. In short, this is an elected official demanding to know why a private company chose not to give money to media sites that support that elected official:

Was Adidas Group aware of the coordinated actions taken by GARM toward news outlets and podcasts such as The Joe Rogan Experience, The Daily Wire, Breitbart News, or Fox News, or other conservative media? Does Adidas Group support GARM’s coordinated actions toward these news outlets and podcasts?

Jordan is also demanding all sorts of documents and answers to questions. He is suggesting strongly that GARM’s actions (presenting ways that advertisers might avoid, say, having their brands show up next to neo-Nazi content) were a violation of antitrust law.

This is all nonsense. First of all, choosing not to advertise somewhere is protected by the First Amendment. And there are good fucking reasons not to advertise on media properties most closely associated with nonsense peddling, extremist culture wars, and just general stupidity.

Even more ridiculous is that the letter cites NAACP v. Claiborne Hardware, which is literally the Supreme Court case that establishes that group boycotts are protected speech. It’s the case that says not supporting a business for the purpose of protest, while economic activity, is still protected speech and can’t be regulated by the government (and it’s arguable whether what GARM does is even a boycott at all).

As the Court noted, in holding that organizing a boycott was protected by the First Amendment:

The First Amendment similarly restricts the ability of the State to impose liability on an individual solely because of his association with another.

But, of course, one person who is quite excited is Elon Musk. He quote tweeted (they’re still tweets, right?) the House Judiciary’s announcement of the demands with a popcorn emoji.


So, yeah. Mr. “Free Speech Absolutist,” who claims the Twitter Files show unfair attempts by governments to influence speech, now supports the government trying to pressure brands into advertising on certain media properties. It’s funny how the “free speech absolutist” keeps throwing the basic, fundamental principles of free speech out the window the second he doesn’t like the results.

That’s not supporting free speech at all. But, then again, for Elon to support free speech, he’d first have to learn what it means, and he’s shown no inclination to ever do that.


Fyrox 0.34 Rust Powered Game Engine

By: Mike
24 May 2024 at 16:18

The Fyrox open source game engine, written in the Rust programming language, just released Fyrox 0.34. This release is pretty jam-packed with new features.

Key links:

  • Fyrox Homepage
  • Fyrox 0.34 Release Notes
  • Station Iapetus GitHub Project

You can learn more about Fyrox and the Fyrox 0.34 release […]

The post Fyrox 0.34 Rust Powered Game Engine appeared first on GameFromScratch.com.

Guy Who Didn't Make Hockey Playoffs Crashes Ninja's Stream To Talk Smack

30 May 2024 at 17:10

Can you hear that sound—sort of like a brushing, friction-y noise? That’s the sound of me rubbing my hands together with devilish glee as I realize that I can finally write about hockey on Kotaku dot com thanks to Ninja and his latest Twitch stream.



Pan orthographic non-axis-aligned camera

I'm trying to create a panning control for a camera in bevy, but I can't seem to get the panning logic right when the camera is rotated. It works fine if the camera transform is directly facing the XY plane. This works with any axis as the "up" direction:

Transform::from_xyz(0., 0., 1.).looking_at(Vec3::ZERO, Vec3::X)

But, it doesn't move correctly when the camera isn't aligned to the plane, like so:

Transform::from_xyz(1., -2., 2.).looking_at(Vec3::ZERO, Vec3::Z)

I want the camera to pan over the XY plane at a fixed Z height. How can I convert this 2D implementation to a proper 3D implementation?

Here's the pseudocode for my current logic, for those not familiar with bevy:

Vec2 delta_mouse = get_mouse_movement_since_last_frame();
// NOTE: this is the angle about an arbitrary rotation axis; it only equals
// the camera's roll when the camera looks straight down the plane normal.
Angle camera_rotation = camera.get_rotation_axis_and_angle().angle;
// Rotate the 2D screen-space delta by that angle (standard 2D rotation).
Vec2 rotation_mat = { x: cos(camera_rotation), y: sin(camera_rotation) };
Vec2 rotated_delta_mouse = {
    x: delta_mouse.x * rotation_mat.x - delta_mouse.y * rotation_mat.y,
    y: delta_mouse.y * rotation_mat.x + delta_mouse.x * rotation_mat.y
};
camera.translation += rotated_delta_mouse;

And the full rust code I am using now:

fn drag_camera(
    input: Res<ButtonInput<MouseButton>>,
    mut camera_query: Query<&mut Transform, With<Camera>>,
    window_query: Query<&Window>,
    mut ev_motion: EventReader<MouseMotion>,
    camera_info: Res<CameraSettings>,
) {
    let pan_button = MouseButton::Left;
    if !input.pressed(pan_button) {
        return;
    }
    // Accumulate this frame's mouse motion while the pan button is held.
    let mut pan = Vec2::ZERO;
    for ev in ev_motion.read() {
        pan += ev.delta;
    }
    // Convert pixels of mouse motion into world units at the current zoom.
    let mut scale_factor = camera_info.zoom_scale * 2.;
    let window = window_query.single();
    scale_factor /= window.resolution.physical_height() as f32;

    let mut transform = camera_query.single_mut();
    // Angle about an arbitrary axis, not necessarily the view axis: this is
    // where the panning breaks down for tilted cameras.
    let rotation_angle = transform.rotation.to_axis_angle().1;
    let pan_rotated = pan.rotate(Vec2::from_angle(rotation_angle));

    transform.translation.x -= pan_rotated.x * scale_factor;
    transform.translation.y += pan_rotated.y * scale_factor;
}
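
One way to make the panning orientation-independent (an editor's sketch, not part of the original question) is to translate along the camera's local right and up axes after projecting them onto the XY plane, so the Z height never changes. The sketch uses only glam math already available through bevy::prelude, and the helper name pan_on_xy_plane is invented for illustration:

// Pan along the camera's right/up axes flattened onto the XY plane,
// leaving translation.z untouched regardless of camera orientation.
fn pan_on_xy_plane(transform: &mut Transform, pan: Vec2, scale_factor: f32) {
    // Camera-local axes expressed in world space.
    let right = transform.rotation * Vec3::X;
    let up = transform.rotation * Vec3::Y;
    // Project each axis onto the XY plane; normalize_or_zero guards against
    // an axis that points straight along Z.
    let flatten = |v: Vec3| Vec3::new(v.x, v.y, 0.0).normalize_or_zero();
    // Same sign convention as the code above: x subtracted, y added.
    transform.translation += (flatten(right) * -pan.x + flatten(up) * pan.y) * scale_factor;
}

With a helper like this, the rotation and translation lines at the end of drag_camera would collapse to a single call such as pan_on_xy_plane(&mut transform, pan, scale_factor).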

Securing AI In The Data Center

9 May 2024 at 09:07

AI has permeated virtually every aspect of our digital lives, from personalized recommendations on streaming platforms to advanced medical diagnostics. Behind the scenes of this AI revolution lies the data center, which houses the hardware, software, and networking infrastructure necessary for training and deploying AI models. Securing AI in the data center relies on data confidentiality, integrity, and authenticity throughout the AI lifecycle, from data preprocessing to model training and inference deployment.

High-value datasets containing sensitive information, such as personal health records or financial transactions, must be shielded from unauthorized access. Robust encryption mechanisms, such as Advanced Encryption Standard (AES), coupled with secure key management practices, form the foundation of data confidentiality in the data center. The encryption key used must be unique and used only within a secure environment. Encryption and decryption operations occur constantly and must be performed in a way that does not leak the key. Should a key be compromised, it should be possible to renew it securely and re-encrypt the data with the new key.

The encryption key used must also be securely stored in a location that unauthorized processes or individuals cannot access. The keys used must be protected from attempts to read them from the device or attempts to steal them using side-channel techniques such as SCA (Side-Channel Attacks) or FIA (Fault Injection Attacks). The multitenancy aspect of modern data centers calls for robust SCA protection of key data.

Hardware-level security plays a pivotal role in safeguarding AI within the data center, offering built-in protections against a wide range of threats. Trusted Platform Modules (TPMs), secure enclaves, and Hardware Security Modules (HSMs) provide secure storage and processing environments for sensitive data and cryptographic keys, shielding them from unauthorized access or tampering. By leveraging hardware-based security features, organizations can enhance the resilience of their AI infrastructure and mitigate the risk of attacks targeting software vulnerabilities.

Ideally, secure cryptographic processing is handled by a Root of Trust core. The AI service provider manages the Root of Trust firmware, but it can also load secure applications that customers can write to implement their own cryptographic key management and storage applications. The Root of Trust can be integrated in the host CPU that orchestrates the AI operations, decrypting the AI model and its specific parameters before those are fed to AI or network accelerators (GPUs or NPUs). It can also be directly integrated with the GPUs and NPUs to perform encryption/decryption at that level. These GPUs and NPUs may also opt to store AI workloads and inference models in encrypted form in their local memory banks and decrypt the data on the fly when access is required. Dedicated low-latency, on-the-fly inline memory decryption engines based on the AES-XTS algorithm can keep up with the memory bandwidth, ensuring that the process is not slowed down.
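
To make the tweak idea concrete, here is a minimal single-block AES-XTS sketch (an editor's illustration built on the RustCrypto aes crate, not Rambus code): the tweak derived from the memory address is what binds each ciphertext to its location. Real engines also apply the alpha^j tweak update across multi-block data units, which is omitted here.

use aes::Aes128;
use aes::cipher::{generic_array::GenericArray, BlockEncrypt, KeyInit};

// Encrypt one 16-byte block at a given address: C = E_k1(P xor T) xor T,
// where the tweak T = E_k2(address) differs for every memory location.
fn xts_encrypt_block(key1: &[u8; 16], key2: &[u8; 16], addr: u64, plain: &[u8; 16]) -> [u8; 16] {
    let data_cipher = Aes128::new(GenericArray::from_slice(key1));
    let tweak_cipher = Aes128::new(GenericArray::from_slice(key2));

    // T = E_k2(address), with the address encoded little-endian per IEEE 1619.
    let mut tweak = [0u8; 16];
    tweak[..8].copy_from_slice(&addr.to_le_bytes());
    let mut t = GenericArray::clone_from_slice(&tweak);
    tweak_cipher.encrypt_block(&mut t);

    // Pre-whiten with T, encrypt, post-whiten with T.
    let mut block = GenericArray::clone_from_slice(plain);
    for i in 0..16 {
        block[i] ^= t[i];
    }
    data_cipher.encrypt_block(&mut block);
    let mut out = [0u8; 16];
    for i in 0..16 {
        out[i] = block[i] ^ t[i];
    }
    out
}

Because identical plaintext encrypted at two different addresses yields unrelated ciphertexts, encrypted blocks cannot simply be copied or replayed at other memory locations.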

AI training workloads are often distributed among dozens of devices connected via PCIe or high-speed networking technology such as 800G Ethernet. An efficient confidentiality and integrity protocol such as MACsec using the AES-GCM algorithm can protect the data in motion over high-speed Ethernet links. AES-GCM engines integrated with the server SoC and the PCIe acceleration boards ensure that traffic is authenticated and optionally encrypted.
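
The primitive underneath is authenticated encryption with associated data. A minimal sketch using the RustCrypto aes-gcm crate (an editor's example of the AES-GCM pattern, not a MACsec implementation) shows the split MACsec relies on: the frame header travels in the clear but is authenticated as associated data, while the payload is both encrypted and authenticated.

use aes_gcm::aead::{Aead, KeyInit, Payload};
use aes_gcm::{Aes256Gcm, Key, Nonce};

fn main() {
    // 256-bit session key; in MACsec this role is played by the SAK
    // negotiated via the MACsec Key Agreement protocol.
    let key = Key::<Aes256Gcm>::from_slice(&[0x42; 32]);
    let cipher = Aes256Gcm::new(key);
    // 96-bit nonce: must never repeat under the same key.
    let nonce = Nonce::from_slice(&[0x07; 12]);

    let header: &[u8] = b"MAC addresses + SecTAG (sent in clear)";
    let payload: &[u8] = b"frame payload";

    // Header is authenticated only; payload is encrypted and authenticated.
    let ct = cipher
        .encrypt(nonce, Payload { msg: payload, aad: header })
        .expect("encrypt");
    // Decryption fails if either the ciphertext or the header was tampered with.
    let pt = cipher
        .decrypt(nonce, Payload { msg: &ct[..], aad: header })
        .expect("authentication tag verified");
    assert_eq!(pt, payload);
}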

Rambus offers a broad portfolio of security IP covering the key security elements needed to protect AI in the data center. Rambus Root of Trust IP cores implement a secure boot protocol that protects the integrity of device firmware. This can be combined with Rambus inline memory encryption engines, as well as dedicated solutions for MACsec up to 800G.


The post Securing AI In The Data Center appeared first on Semiconductor Engineering.


Earning Digital Trust

9 May 2024 at 09:04

The internet of things (IoT) has been growing at a fast pace. In 2023, there were already twice as many internet-connected devices – 16 billion – as people on the planet. However, many of these devices are not properly secured. The high volume of insecure devices being deployed is presenting hackers with more opportunities than ever before. Governments around the world are realizing that additional security standards for IoT devices are needed to address the growing and important role of the billions of connected devices we rely on every day. The EU Cyber Resilience Act and the IoT Cybersecurity Improvement Act in the United States are driving improved security practices as well as an increased sense of urgency.

Digital trust is critical for the continued success of the IoT. This means that security, privacy, and reliability are becoming top concerns. IoT devices are always connected and can be deployed in any environment, which means that they can be attacked via the internet as well as physically in the field. Whether it is a remote attacker getting access to a baby monitor or camera inside your house, or someone physically tampering with sensors that are part of a critical infrastructure, IoT devices need to have proper security in place.

This is even more salient when one considers that each IoT device is part of a multi-party supply chain and is used in systems that contain many other devices. All these devices need to be trusted and communicate in a secure way to maintain the privacy of their data. It is critical to ensure that there are no backdoors left open by any link in the supply chain, or when devices are updated in the field. Any weak link exposes more than just the device in question to security breaches; it exposes its entire system – and the IoT itself – to attacks.

A foundation of trust starts in the hardware

To secure the IoT, each piece of silicon in the supply chain needs to be trusted. The best way to achieve this is by using a hardware-based root of trust (RoT) for every device. An RoT is typically defined as “the set of implicitly trusted functions that the rest of the system or device can use to ensure security.” The core of an RoT consists of an identity and cryptographic keys rooted in the hardware of a device. This establishes a unique, immutable, and unclonable identity to authenticate a device in the IoT network. It establishes the anchor point for the chain of trust, and powers critical system security use cases over the entire lifecycle of a device.

Protecting every device on the IoT with a hardware-based RoT can appear to be an unreachable goal. There are so many types of systems and devices and so many different semiconductor and device manufacturers, each with their own complex supply chain. Many of these chips and devices are high-volume/low-cost and therefore have strict constraints on additional manufacturing or supply chain costs for security. The PSA Certified 2023 Security Report indicates that 72% of tech decision makers are interested in the development of an industry-led set of guidelines to make reaching the goal of a secure IoT more attainable.

Security frameworks and certifications speed up the process and build confidence

One important industry-led effort in standardizing IoT security that has been widely adopted is PSA Certified. PSA stands for Platform Security Architecture, and PSA Certified is a global partnership addressing security challenges and uniting the technology ecosystem under a common security baseline, providing an easy-to-consume and comprehensive methodology for the lab-validated assurance of device security. PSA Certified has been adopted across the full supply chain, from silicon providers, software vendors, original equipment manufacturers (OEMs), IP providers, governments, content service providers (CSPs), and insurance vendors to other third-party schemes. PSA Certified was the winner of the IoT Global Awards “Ecosystem of the year” in 2021.

PSA Certified lab-based evaluations (PSA Certified Level 2 and above) have a choice of evaluation methodologies, including the rigorous SESIP-based methodology (Security Evaluation Standard for IoT Platforms from GlobalPlatform), an optimized security evaluation methodology designed for connected devices. PSA Certified recognizes that a myriad of different regulations and certification frameworks create an added layer of complexity for the silicon providers, OEMs, software vendors, developers, and service providers tasked with demonstrating the security capability of their products. The goal of the program is to provide the flexible and efficient security evaluation method needed to address the unique complexities and challenges of the evolving digital ecosystem and to drive consistency across device certification schemes to bring greater trust.

The PSA Certified framework recognizes the importance of a hardware RoT for every connected device. It currently provides incremental levels of certified assurance, ranging from a baseline Level 1 (application of best-practice security principles) to a more advanced Level 3 (validated protection against substantial hardware and software attacks).

PSA Certified RoT component

Among the certifications available, PSA Certified offers a PSA Certified RoT Component certification program, which targets separate RoT IP components, such as physical unclonable functions (PUFs), which use unclonable properties of silicon to create a robust trust (or security) anchor. As shown in figure 1, the PSA-RoT Certification includes three levels of security testing. These component-level certifications from PSA Certified validate specific security functional requirements (SFRs) provided by an RoT component and enable their reuse in a fast-track evaluation of a system integration using this component.

Fig. 1: PSA Certified establishes a chain of trust that begins with a PSA-RoT.

A proven RoT IP solution, now PSA Certified

Synopsys PUF IP is a secure key generation and storage solution that enables device manufacturers and designers to secure their products with internally generated unclonable identities and device-unique cryptographic keys. It uses the inherently random start-up values of SRAM as a physical unclonable function (PUF), which generates the entropy required for a strong hardware root of trust.

This root key created by Synopsys PUF IP is never stored, but rather recreated from the PUF upon each use, so there is never a key to be discovered by attackers. The root key is the basis for key management capabilities that enable each member of the supply chain to create its own secret keys, bound to the specific device, to protect their IP/communications without revealing these keys to any other member of the supply chain.
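
To make the idea concrete, here is a toy sketch of the code-offset “fuzzy extractor” construction that SRAM-PUF key storage schemes generally build on (an editor's illustration with a 3x repetition code, not the Synopsys implementation): enrollment stores only non-sensitive helper data, and the key is re-derived on demand from the noisy PUF response plus error correction.

fn enroll(puf_response: &[u8], key_bits: &[u8]) -> Vec<u8> {
    // Helper data = (key encoded with a 3x repetition code) XOR PUF response.
    // Without the device-unique response, it reveals nothing about the key.
    let encoded: Vec<u8> = key_bits.iter().flat_map(|&b| [b, b, b]).collect();
    encoded.iter().zip(puf_response).map(|(c, p)| c ^ p).collect()
}

fn reconstruct(noisy_response: &[u8], helper: &[u8]) -> Vec<u8> {
    // XOR the helper data with the (noisy) response, then majority-vote each
    // 3-bit group to correct single bit flips from SRAM start-up noise.
    let noisy_code: Vec<u8> =
        helper.iter().zip(noisy_response).map(|(h, p)| h ^ p).collect();
    noisy_code
        .chunks(3)
        .map(|c| if c.iter().map(|&b| u32::from(b)).sum::<u32>() >= 2 { 1 } else { 0 })
        .collect()
}

fn main() {
    let puf = [1u8, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0]; // device-unique start-up bits
    let key = [1u8, 0, 1, 1];                          // 4 key bits -> 12 code bits
    let helper = enroll(&puf, &key);                   // stored in plain NVM
    let mut noisy = puf;
    noisy[4] ^= 1;                                     // one bit flips at power-up
    assert_eq!(reconstruct(&noisy, &helper), key);     // exact key recovered
}

Production designs use far stronger error-correcting codes and derive the final key through a cryptographic key derivation function, but the structure is the same: nothing secret is ever written to memory.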

Synopsys PUF IP offers robust PUF-based physical security, with the following properties:

  • No secrets/keys at rest (no secrets stored in any memory)
    • prevents any attack on an unpowered device
    • keys are only present when used, limiting the window of opportunity for attacks
  • Hardware entropy source/root of trust
    • no dependence on third parties (no key injection from outside)
    • no dependence on security of external components or other internal modules
    • no dependence on software-based security
  • Technology-independent, fully digital standard-logic CMOS IP
    • all fabs and technology nodes
    • small footprint
    • re-use in new platforms/deployments
  • Built-in error resilience due to advanced error-correction

The Synopsys PUF technology has been field-proven over more than a decade of deployment on over 750 million chips. And now, the Synopsys PUF has achieved the milestone of becoming the world’s first IP solution to be awarded “PSA Certified Level 3 RoT Component.” This certifies that the IP includes substantial protection against both software and hardware attacks (including side-channel and fault injection attacks) and is qualified as a trusted component in a system that requires PSA Level 3 certification.

Fault detection and other countermeasures

In addition to its PUF-related protection against physical attacks, all Synopsys PUF IP products have several built-in physical countermeasures. These include both systemic security features (such as data format validation, data authentication, key use restrictions, built-in self-tests (BIST), and health checks) as well as more specific countermeasures (such as data masking and dummy cycles) that protect against specific attacks.

The PSA Certified Synopsys PUF IP goes one step further. It validates all inputs through integrity checks and error detection. It continuously asserts that everything runs as intended, flags any observed faults, and ensures security. Additionally, the PSA Certified Synopsys PUF IP provides hardware and software handholds that assist the user in checking that all data is correctly transferred into and out of the PUF IP. The Synopsys PUF IP driver also supports fault detection and reporting.

Advantages of PUFs over traditional key injection and storage methods

For end-product developers, PUF IP has many advantages over traditional approaches for key management. These traditional approaches typically require key injection (provisioning secret keys into a device) and some form of non-volatile memory (NVM), such as embedded Flash memory or one-time programmable storage (OTP), where the programmed key is stored and where it needs to be protected from being extracted, overwritten, or changed. Unlike these traditional key injection solutions, Synopsys PUF IP does not require sensitive key handling by third parties, since PUF-based keys are created within the device itself. In addition, Synopsys PUF IP offers more flexibility than traditional solutions, as a virtually unlimited number of PUF-based keys can be created. And keys protected by the PUF can be added at any time in the lifecycle rather than only during manufacturing.

In terms of key storage, Synopsys PUF IP offers higher protection against physical attacks than storing keys in some form of NVM. PUF-based root keys are not stored on the device, but they are reconstructed upon each use, so there is nothing for attackers to find on the chip. Instead of storing keys in NVM, Synopsys PUF IP stores only (non-sensitive) helper data and encrypted keys in NVM on- or off-chip. The traditional approach of storing keys on the device in NVM is more vulnerable to physical attacks.

Finally, Synopsys PUF IP provides more portability. Since the Synopsys PUF IP is based on standard SRAM memory cells, it offers a process- and fab-agnostic solution for key storage that scales to the most advanced technology nodes.

Conclusion

The large and steady increase in devices connected to the IoT also increases the need for digital trust and privacy. This requires flexible and efficient IoT security solutions that are standardized to streamline implementation and certification across the multiple players involved in the creation and deployment of IoT devices. The PSA Certified framework offers an easy-to-consume and comprehensive methodology for the lab-validated assurance of device security.

Synopsys PUF IP, which has been deployed in over 750 million chips, is the first-ever IP solution to be awarded “PSA Certified Level 3 RoT Component.” This certifies that the IP includes substantial protection against hardware and software attacks. Synopsys PUF IP offers IoT device makers a robust PUF-based security anchor with trusted industry-standard certification and offers the perfect balance between strong security, high flexibility, and low cost.

The post Earning Digital Trust appeared first on Semiconductor Engineering.



Will Human Soldiers Ever Trust Their Robot Comrades?

27 April 2024 at 17:00


Editor’s note: This article is adapted from the author’s book War Virtually: The Quest to Automate Conflict, Militarize Data, and Predict the Future (University of California Press, published in paperback April 2024).

The blistering late-afternoon wind ripped across Camp Taji, a sprawling U.S. military base just north of Baghdad. In a desolate corner of the outpost, where the feared Iraqi Republican Guard had once manufactured mustard gas, nerve agents, and other chemical weapons, a group of American soldiers and Marines were solemnly gathered around an open grave, dripping sweat in the 114-degree heat. They were paying their final respects to Boomer, a fallen comrade who had been an indispensable part of their team for years. Just days earlier, he had been blown apart by a roadside bomb.

As a bugle mournfully sounded the last few notes of “Taps,” a soldier raised his rifle and fired a long series of volleys—a 21-gun salute. The troops, who included members of an elite army unit specializing in explosive ordnance disposal (EOD), had decorated Boomer posthumously with a Bronze Star and a Purple Heart. With the help of human operators, the diminutive remote-controlled robot had protected American military personnel from harm by finding and disarming hidden explosives.

Boomer was a Multi-function Agile Remote-Controlled robot, or MARCbot, manufactured by a Silicon Valley company called Exponent. Weighing in at just over 30 pounds, MARCbots look like a cross between a Hollywood camera dolly and an oversized Tonka truck. Despite their toylike appearance, the devices often leave a lasting impression on those who work with them. In an online discussion about EOD support robots, one soldier wrote, “Those little bastards can develop a personality, and they save so many lives.” An infantryman responded by admitting, “We liked those EOD robots. I can’t blame you for giving your guy a proper burial, he helped keep a lot of people safe and did a job that most people wouldn’t want to do.”

A Navy unit used a remote-controlled vehicle with a mounted video camera in 2009 to investigate suspicious areas in southern Afghanistan. (Photo: Mass Communication Specialist 2nd Class Patrick W. Mullen III/U.S. Navy)

But while some EOD teams established warm emotional bonds with their robots, others loathed the machines, especially when they malfunctioned. Take, for example, this case described by a Marine who served in Iraq:

My team once had a robot that was obnoxious. It would frequently accelerate for no reason, steer whichever way it wanted, stop, etc. This often resulted in this stupid thing driving itself into a ditch right next to a suspected IED. So of course then we had to call EOD [personnel] out and waste their time and ours all because of this stupid little robot. Every time it beached itself next to a bomb, which was at least two or three times a week, we had to do this. Then one day we saw yet another IED. We drove him straight over the pressure plate, and blew the stupid little sh*thead of a robot to pieces. All in all a good day.

Some battle-hardened warriors treat remote-controlled devices like brave, loyal, intelligent pets, while others describe them as clumsy, stubborn clods. Either way, observers have interpreted these accounts as unsettling glimpses of a future in which men and women ascribe personalities to artificially intelligent war machines.


From this perspective, what makes robot funerals unnerving is the idea of an emotional slippery slope. If soldiers are bonding with clunky pieces of remote-controlled hardware, what are the prospects of humans forming emotional attachments with machines once they’re more autonomous in nature, nuanced in behavior, and anthropoid in form? And a more troubling question arises: On the battlefield, will Homo sapiens be capable of dehumanizing members of its own species (as it has for centuries), even as it simultaneously humanizes the robots sent to kill them?

As I’ll explain, the Pentagon has a vision of a warfighting force in which humans and robots work together in tight collaborative units. But to achieve that vision, it has called in reinforcements: “trust engineers” who are diligently helping the Department of Defense (DOD) find ways of rewiring human attitudes toward machines. You could say that they want more soldiers to play “Taps” for their robot helpers and fewer to delight in blowing them up.

The Pentagon’s Push for Robotics

For the better part of a decade, several influential Pentagon officials have relentlessly promoted robotic technologies, promising a future in which “humans will form integrated teams with nearly fully autonomous unmanned systems, capable of carrying out operations in contested environments.”

Soldiers test a vertical take-off-and-landing drone at Fort Campbell, Ky., in 2020. (Photo: U.S. Army)

As The New York Times reported in 2016: “Almost unnoticed outside defense circles, the Pentagon has put artificial intelligence at the center of its strategy to maintain the United States’ position as the world’s dominant military power.” The U.S. government is spending staggering sums to advance these technologies: For fiscal year 2019, the U.S. Congress was projected to provide the DOD with US $9.6 billion to fund uncrewed and robotic systems—significantly more than the annual budget of the entire National Science Foundation.

Arguments supporting the expansion of autonomous systems are consistent and predictable: The machines will keep our troops safe because they can perform dull, dirty, dangerous tasks; they will result in fewer civilian casualties, since robots will be able to identify enemies with greater precision than humans can; they will be cost-effective and efficient, allowing more to get done with less; and the devices will allow us to stay ahead of China, which, according to some experts, will soon surpass America’s technological capabilities.

Former U.S. deputy defense secretary Robert O. Work has argued for more automation within the military. [Center for a New American Security]

Among the most outspoken advocates of a roboticized military is Robert O. Work, who was nominated by President Barack Obama in 2014 to serve as deputy defense secretary. Speaking at a 2015 defense forum, Work—a barrel-chested retired Marine Corps colonel with the slight hint of a drawl—described a future in which “human-machine collaboration” would win wars using big-data analytics. He used the example of Lockheed Martin’s newest stealth fighter to illustrate his point: “The F-35 is not a fighter plane, it is a flying sensor computer that sucks in an enormous amount of data, correlates it, analyzes it, and displays it to the pilot on his helmet.”

The beginning of Work’s speech was measured and technical, but by the end it was full of swagger. To drive home his point, he described a ground combat scenario. “I’m telling you right now,” Work told the rapt audience, “10 years from now if the first person through a breach isn’t a friggin’ robot, shame on us.”

“The debate within the military is no longer about whether to build autonomous weapons but how much independence to give them,” said a 2016 New York Times article. The rhetoric surrounding robotic and autonomous weapon systems is remarkably similar to that of Silicon Valley, where charismatic CEOs, technology gurus, and sycophantic pundits have relentlessly hyped artificial intelligence.

For example, in 2016, the Defense Science Board—a group of appointed civilian scientists tasked with giving advice to the DOD on technical matters—released a report titled “Summer Study on Autonomy.” Significantly, the report wasn’t written to weigh the pros and cons of autonomous battlefield technologies; instead, the group assumed that such systems would inevitably be deployed. Among other things, the report included “focused recommendations to improve the future adoption and use of autonomous systems [and] example projects intended to demonstrate the range of benefits of autonomy for the warfighter.”

What Exactly Is a Robot Soldier?

The author’s book, War Virtually, is a critical look at how the U.S. military is weaponizing technology and data. [University of California Press]

Early in the 20th century, military and intelligence agencies began developing robotic systems, which were mostly devices remotely operated by human controllers. But microchips, portable computers, the Internet, smartphones, and other developments have supercharged the pace of innovation. So, too, has the ready availability of colossal amounts of data from electronic sources and sensors of all kinds. The Financial Times reports: “The advance of artificial intelligence brings with it the prospect of robot-soldiers battling alongside humans—and one day eclipsing them altogether.” These transformations aren’t inevitable, but they may become a self-fulfilling prophecy.

All of this raises the question: What exactly is a “robot-soldier”? Is it a remote-controlled, armor-clad box on wheels, entirely reliant on explicit, continuous human commands for direction? Is it a device that can be activated and left to operate semiautonomously, with a limited degree of human oversight or intervention? Is it a droid capable of selecting targets (using facial-recognition software or other forms of artificial intelligence) and initiating attacks without human involvement? There are hundreds, if not thousands, of possible technological configurations lying between remote control and full autonomy—and these differences affect ideas about who bears responsibility for a robot’s actions.

The U.S. military’s experimental and actual robotic and autonomous systems include a vast array of artifacts that rely on either remote control or artificial intelligence: aerial drones; ground vehicles of all kinds; sleek warships and submarines; automated missiles; and robots of various shapes and sizes—bipedal androids, quadrupedal gadgets that trot like dogs or mules, insectile swarming machines, and streamlined aquatic devices resembling fish, mollusks, or crustaceans, to name a few.

Members of a U.S. Air Force squadron test out an agile and rugged quadruped robot from Ghost Robotics in 2023. [Airman First Class Isaiah Pedrazzini/U.S. Air Force]

The transitions projected by military planners suggest that servicemen and servicewomen are in the midst of a three-phase evolutionary process, which begins with remote-controlled robots, in which humans are “in the loop,” then proceeds to semiautonomous and supervised autonomous systems, in which humans are “on the loop,” and then concludes with the adoption of fully autonomous systems, in which humans are “out of the loop.” At the moment, much of the debate in military circles has to do with the degree to which automated systems should allow—or require—human intervention.
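To make those three phases concrete, here is a minimal sketch, in Python, of how human intervention might gate a machine’s decision to act in each mode. The mode names, the veto timeout, and the gate logic are illustrative assumptions for this article, not a description of any actual DOD system.

    from enum import Enum

    class ControlMode(Enum):
        IN_THE_LOOP = "remote control"       # a human issues every command
        ON_THE_LOOP = "supervised autonomy"  # the machine acts; a human may veto
        OUT_OF_LOOP = "full autonomy"        # the machine acts with no human gate

    def may_act(mode, human_approves, human_vetoes):
        """Illustrative gate deciding whether the machine may take an action."""
        if mode is ControlMode.IN_THE_LOOP:
            # Nothing happens unless a human explicitly commands it.
            return human_approves()
        if mode is ControlMode.ON_THE_LOOP:
            # The machine proceeds unless a supervising human intervenes in time.
            return not human_vetoes(timeout_s=5.0)
        # OUT_OF_LOOP: the machine decides alone, and accountability blurs.
        return True

    # Example: supervised autonomy in which the human never intervenes.
    acted = may_act(ControlMode.ON_THE_LOOP,
                    human_approves=lambda: False,
                    human_vetoes=lambda timeout_s: False)

Seen this way, the debate over “how much independence” is really a debate over which branch of that gate a given system is permitted to take.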

In recent years, much of the hype has centered around that second stage: semiautonomous and supervised autonomous systems that DOD officials refer to as “human-machine teaming.” This idea suddenly appeared in Pentagon publications and official statements after the summer of 2015. The timing probably wasn’t accidental; global news outlets were then focusing attention on a public backlash against lethal autonomous weapon systems. The Campaign to Stop Killer Robots was launched in April 2013 as a coalition of nonprofit and civil society organizations, including the International Committee for Robot Arms Control, Amnesty International, and Human Rights Watch. In July 2015, the campaign released an open letter warning of a robotic arms race and calling for a ban on the technologies. Cosigners included world-renowned physicist Stephen Hawking, Tesla founder Elon Musk, Apple cofounder Steve Wozniak, and thousands more.

In November 2015, Work gave a high-profile speech on the importance of human-machine teaming, perhaps hoping to defuse the growing criticism of “killer robots.” According to one account, Work’s vision was one in which “computers will fly the missiles, aim the lasers, jam the signals, read the sensors, and pull all the data together over a network, putting it into an intuitive interface humans can read, understand, and use to command the mission”—but humans would still be in the mix, “using the machine to make the human make better decisions.” From this point forward, the military branches accelerated their drive toward human-machine teaming.

The Doubt in the Machine

But there was a problem. Military experts loved the idea, touting it as a win-win: Paul Scharre, in his book Army of None: Autonomous Weapons and the Future of War, claimed that “we don’t need to give up the benefits of human judgment to get the advantages of automation, we can have our cake and eat it too.” However, personnel on the ground expressed—and continue to express—deep misgivings about the side effects of the Pentagon’s newest war machines.

The difficulty, it seems, is humans’ lack of trust. The engineering challenges of creating robotic weapon systems are relatively straightforward, but the social and psychological challenges of convincing humans to place their faith in the machines are bewilderingly complex. In high-stakes, high-pressure situations like military combat, human confidence in autonomous systems can quickly vanish. The Pentagon’s Defense Systems Information Analysis Center Journal noted that although the prospects for combined human-machine teams are promising, humans will need assurances:

[T]he battlefield is fluid, dynamic, and dangerous. As a result, warfighter demands become exceedingly complex, especially since the potential costs of failure are unacceptable. The prospect of lethal autonomy adds even greater complexity to the problem [in that] warfighters will have no prior experience with similar systems. Developers will be forced to build trust almost from scratch.

In a 2015 article, U.S. Navy Commander Greg Smith provided a candid assessment of aviators’ distrust of aerial drones. After describing how drones are often intentionally separated from crewed aircraft, Smith noted that operators sometimes lose communication with their drones and may inadvertently bring them perilously close to crewed airplanes, which “raises the hair on the back of an aviator’s neck.” He concluded:

[I]n 2010, one task force commander grounded his manned aircraft at a remote operating location until he was assured that the local control tower and UAV [unmanned aerial vehicle] operators located halfway around the world would improve procedural compliance. Anecdotes like these abound…. After nearly a decade of sharing the skies with UAVs, most naval aviators no longer believe that UAVs are trying to kill them, but one should not confuse this sentiment with trusting the platform, technology, or [drone] operators.

U.S. Marines [top] prepare to launch and operate an MQ-9A Reaper drone in 2021. The Reaper [bottom] is designed for both high-altitude surveillance and destroying targets. [Top: Lance Cpl. Gabrielle Sanders/U.S. Marine Corps; Bottom: 1st Lt. John Coppola/U.S. Marine Corps]

Yet Pentagon leaders place an almost superstitious trust in those systems, and seem firmly convinced that a lack of human confidence in autonomous systems can be overcome with engineered solutions. In a commentary, Courtney Soboleski, a data scientist employed by the military contractor Booz Allen Hamilton, makes the case for mobilizing social science as a tool for overcoming soldiers’ lack of trust in robotic systems.

The problem with adding a machine into military teaming arrangements is not doctrinal or numeric…it is psychological. It is rethinking the instinctual threshold required for trust to exist between the soldier and machine.… The real hurdle lies in surpassing the individual psychological and sociological barriers to assumption of risk presented by algorithmic warfare. To do so requires a rewiring of military culture across several mental and emotional domains.… AI [artificial intelligence] trainers should partner with traditional military subject matter experts to develop the psychological feelings of safety not inherently tangible in new technology. Through this exchange, soldiers will develop the same instinctual trust natural to the human-human war-fighting paradigm with machines.

The Military’s Trust Engineers Go to Work

Soon, the wary warfighter will likely be subjected to new forms of training that focus on building trust between robots and humans. Already, robots are being programmed to communicate in more human ways with their users for the explicit purpose of increasing trust. And projects are currently underway to help military robots report their deficiencies to humans in given situations, and to alter their functionality according to the machine’s perception of the user’s emotional state.

At the DEVCOM Army Research Laboratory, military psychologists have spent more than a decade on human experiments related to trust in machines. Among the most prolific is Jessie Chen, who joined the lab in 2003. Chen lives and breathes robotics—specifically “agent teaming” research, a field that examines how robots can be integrated into groups with humans. Her experiments test how humans’ lack of trust in robotic and autonomous systems can be overcome—or at least minimized.

For example, in one set of tests, Chen and her colleagues deployed a small ground robot called an Autonomous Squad Member that interacted and communicated with infantrymen. The researchers varied “situation-awareness-based agent transparency”—that is, the robot’s self-reported information about its plans, motivations, and predicted outcomes—and found that human trust in the robot increased when the autonomous “agent” was more transparent or honest about its intentions.
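For a rough sense of what “varying transparency” could look like in software, consider the sketch below. The three-level scheme and the field names are illustrative assumptions modeled on that description of plans, motivations, and predicted outcomes; they are not the laboratory’s actual interface.

    def status_report(transparency_level):
        """Hypothetical squad-robot report; higher levels disclose more state."""
        report = {"plan": "move to rally point B via the north trail"}
        if transparency_level >= 2:
            # Level 2 adds the why: the reasoning behind the plan.
            report["motivation"] = "north trail minimizes exposure to observation"
        if transparency_level >= 3:
            # Level 3 adds projected outcomes and the robot's confidence in them.
            report["prediction"] = {"eta_minutes": 12, "confidence": 0.7}
        return report

    print(status_report(1))  # plan only
    print(status_report(3))  # plan, motivation, and predicted outcome

In Chen’s experiments, the fuller reports, those exposing reasoning and projected outcomes, were the ones associated with higher human trust.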

The Army isn’t the only branch of the armed services researching human trust in robots. The U.S. Air Force Research Laboratory recently had an entire group dedicated to the subject: the Human Trust and Interaction Branch, part of the lab’s 711th Human Performance Wing, located at Wright-Patterson Air Force Base, in Ohio.

In 2015, the Air Force began soliciting proposals for “research on how to harness the socio-emotional elements of interpersonal team/trust dynamics and inject them into human-robot teams.” Mark Draper, a principal engineering research psychologist at the Air Force lab, is optimistic about the prospects of human-machine teaming: “As autonomy becomes more trusted, as it becomes more capable, then the Airmen can start off-loading more decision-making capability on the autonomy, and autonomy can exercise increasingly important levels of decision-making.”

Air Force researchers are attempting to dissect the determinants of human trust. In one project, they examined the relationship between a person’s personality profile (measured using the so-called Big Five personality traits: openness, conscientiousness, extraversion, agreeableness, neuroticism) and his or her tendency to trust. In another experiment, entitled “Trusting Robocop: Gender-Based Effects on Trust of an Autonomous Robot,” Air Force scientists compared male and female research subjects’ levels of trust by showing them a video depicting a guard robot. The robot was armed with a Taser, interacted with people, and eventually used the Taser on one. Researchers designed the scenario to create uncertainty about whether the robot or the humans were to blame. By surveying research subjects, the scientists found that women reported higher levels of trust in “Robocop” than men.
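For readers curious how such relationships are typically quantified, the sketch below computes a Pearson correlation between one Big Five trait score and a self-reported trust rating. The numbers are toy values included purely for illustration; they are not the Air Force lab’s data.

    # Toy illustration: correlating an "openness" score (1-5 survey scale)
    # with a self-reported trust rating across six hypothetical subjects.
    from statistics import correlation  # available in Python 3.10+

    openness = [3.2, 4.1, 2.8, 4.6, 3.9, 2.5]
    trust    = [2.9, 4.3, 3.1, 4.4, 3.6, 2.7]

    r = correlation(openness, trust)  # Pearson's r, between -1 and 1
    print(f"Pearson r = {r:.2f}")     # sign and size hint at the association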

The issue of trust in autonomous systems has even led the Air Force’s chief scientist to suggest ideas for increasing human confidence in the machines, ranging from better android manners to robots that look more like people, under the principle that

good HFE [human factors engineering] design should help support ease of interaction between humans and AS [autonomous systems]. For example, better “etiquette” often equates to better performance, causing a more seamless interaction. This occurs, for example, when an AS avoids interrupting its human teammate during a high workload situation or cues the human that it is about to interrupt—activities that, surprisingly, can improve performance independent of the actual reliability of the system. To an extent, anthropomorphism can also improve human-AS interaction, since people often trust agents endowed with more humanlike features…[but] anthropomorphism can also induce overtrust.
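As a toy rendering of that “etiquette” idea, an autonomous teammate might gate its own interruptions as sketched below; the urgency threshold and workload labels are invented for illustration.

    MESSAGE_QUEUE = []

    def deliver_message(text, urgency, operator_workload):
        """Hypothetical etiquette rule for an autonomous teammate."""
        if operator_workload == "high" and urgency < 0.8:
            # Don't interrupt a busy human with low-urgency chatter.
            MESSAGE_QUEUE.append(text)
            return "deferred"
        # Cue the human that an interruption is coming, then deliver it.
        print("[cue] incoming message")
        print(text)
        return "delivered"

    deliver_message("route recalculated", urgency=0.3, operator_workload="high")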

It’s impossible to know the degree to which the trust engineers will succeed in achieving their objectives. For decades, military trainers have conditioned newly enlisted men and women to kill other people. If specialists have developed simple psychological techniques to overcome the soldier’s deeply ingrained aversion to destroying human life, is it possible that someday the warfighter might also be persuaded to unquestioningly place his or her trust in robots?

  • ✇Techdirt

Dish Network, The Trump Era ‘Fix’ For The Sprint T-Mobile Merger, Heads Into Its Final Death Spiral

By: Karl Bode
March 8, 2024, 14:23

Aging satellite TV provider Dish Network is supposed to be undergoing a major transformation from tired old satellite TV provider to streaming and wireless juggernaut. It was a cornerstone of a Trump administration FCC and DOJ plan to cobble together a new wireless carrier out of twine and vibes as a counter-balance to the competition-eroding T-Mobile and Sprint merger.

It’s… not going well. All of the problems critics of the T-Mobile and Sprint merger predicted (layoffs, price hikes, less robust competition) have come true. Meanwhile, Dish has been bleeding satellite TV, wireless, and streaming TV subscribers for a while (last quarter the company lost another 314,000 TV subscribers, including 249,000 satellite TV subs and 65,000 Sling TV customers).

Dish’s new 5G network has also generally been received as a sort of half-hearted joke. Dish also lost 123,000 prepaid wireless subscribers last quarter; it can’t pay its debt obligations; it can’t afford to buy the spectrum it was supposed to acquire as part of the Sprint/T-Mobile merger arrangement; and expanding its half-cooked 5G network looks tenuous at best.

Last year Dish proposed merging with EchoStar in a bid to distract everybody from the company’s ongoing mess. It has also tried to goose stock valuations by hinting at an equally doomed merger with DirecTV. But those distractions didn’t help either, and there are increasing worries among belatedly aware analysts that this all ends with bankruptcy and a pile of rubble:

“MoffettNathanson analyst Craig Moffett offered a blunt assessment of the company’s future based on Dish’s deteriorating pay-TV and mobile subscriber customer base: “Dish’s business is spiraling towards bankruptcy. Gradually, then all at once, the declines are gathering speed,” he wrote in a research note.”

From 2019 or so I noted that this whole mess was likely a doomed effort, primarily designed to provide cover for an anti-competitive, job-killing wireless merger. It always seemed likely to me that Dish (which had never built a wireless network) would string FCC regulators along for a few years before selling off its valuable spectrum assets and whatever half-assed 5G network it had managed to construct.

Despite this, trade magazines that cover the telecom industry tried desperately to pretend this was all a very serious adult venture, despite zero indication anyone involved had any idea what they were doing. And the deal rubber stamping and circular logic used to justify it ran in very stark contrast to the ongoing pretense that we supposedly care about “antitrust reform.”

Ultimately, Dish will make a killing on spectrum, the FCC will fine it a relative pittance for failing to meet the flimsy build requirements affixed to the merger conditions, and Dish CEO Charlie Ergen will trot off into the sunset on a giant pile of money. Some giant player like Verizon will then swoop in to gobble up what’s left of the wreckage, and the industry will consolidate further (the whole point).

The regulatory impact of approving Sprint/T-Mobile, which consolidated the U.S. wireless market from four to three major providers (jacking up prices and killing off thousands of jobs), will be forgotten, and the regulators and officials behind the entire mess will have long ago moved on to other terrible, short-sighted ideas.

  • ✇Destructoid

New Rust update adds a tutorial island for new players

By: Andrew Heaton
March 8, 2024, 16:26

Since its release into early access over ten years ago, Rust has become one of the most famous online survival games of the current era. I say "famous" when I really mean "infamous" because, as anyone who's played the game knows, it's hard as frick. What we really need is a tutorial.

Ta-da! As of now, the latest patch for Rust has started rolling out which, lo and behold, implements a Tutorial Island, perfect for getting to grips with the fundamentals. According to a post over on Steam, the island offers a way of adding a "safe environment to learn the basic controls and mechanics of Rust before being set loose in the main game with other players."

https://twitter.com/playrust/status/1765814741530824941

It also says that all players will be prompted to run through the tutorial "when they first spawn this month," but you can decline the offer. Here's what you'll learn:

  • Basic movement
  • Crafting
  • Building bases
  • Upgrading bases
  • Respawning
  • Basic combat
  • Resources
  • Looting containers
  • Cooking
  • Using Furnaces
  • Workbenches
  • Using a vehicle

No mention of how to cope with being screamed at by aggressive teens

That's probably one of the biggest things to come out of this update. But there's also another huge improvement that's been made to Rust, in the form of better night lighting.

The update notes go on to say that to combat "gamma hacking" from players, the studio has added a "Nightlight feature." Essentially, this is a shader that illuminates the immediate area in front of you at nighttime, "mimicking the effect of natural moon light."

In short: you should now be able to see much better in the dark. It's still going to be pretty dark, though, so don't expect to be able to confidently run through the inky blackness with nary a care in the world. This is Rust, after all.

  • ✇Kotaku

All Of Our Best Palworld Tips, In One Handy Place

February 16, 2024, 22:30

We’re just six weeks into 2024, and we’ve already seen the release of some absolute bangers, including Like A Dragon: Infinite Wealth and Persona 3 Reload. But the year’s biggest surprise so far has been a game most observers wrote off as a Pokémon knockoff—until millions of people realized there was a lot more…

  • ✇Rock Paper Shotgun Latest Articles Feed

This full-size mechanical keyboard is reduced to just £5
By: Will Judd

Mechanical keyboards can be pretty cheap these days, but I've never seen one on sale for as little as £4.99 - especially not a full-size RGB model from a brand I've actually heard of before! That is indeed the case at GAME though, who are selling the Trust Gaming GXT 865 Asta for £4.99 plus another £4.99 in shipping - that's £20 less than this keyboard normally costs!
