FreshRSS

The day before yesterday · Main feed
  • ✇GamesIndustry.biz Latest Articles Feed
  • Roblox banned in Türkiye due to child safety concerns, by Sophie McEvoy

Roblox banned in Türkiye due to child safety concerns

Roblox has been banned in Türkiye following a decision made by the government yesterday.

As reported by our sister site Eurogamer, the country's Minister of Justice Yılmaz Tunç announced on social media that the platform had been banned "due to content that could lead to the exploitation of children."

In response, Roblox said: "We respect the laws and regulations in countries where we operate and share local lawmakers' commitment to children. We look forward to working together to ensure Roblox is back online in Türkiye as soon as possible."


  • ✇Latest
  • The Best of Reason: The Fragile Generation, by Lenore Skenazy and Jonathan Haidt

The Best of Reason: The Fragile Generation

21 August 2024, 00:21
The Best of Reason Magazine logo | Joanna Andreasson

This week's featured article is "The Fragile Generation" by Lenore Skenazy and Jonathan Haidt, originally published in the December 2017 print issue.

This audio was generated using AI trained on the voice of Katherine Mangu-Ward.

Music credits: "Deep in Thought" by CTRL and "Sunsettling" by Man with Roses

The post The Best of Reason: The Fragile Generation appeared first on Reason.com.


  • ✇GamesIndustry.biz Latest Articles Feed
  • Twitch updates policies on sexual harassment, by Vikki Blake

Twitch updates policies on sexual harassment

Streaming giant Twitch has updated its policies on sexual harassment.

Twitch said it was "making some clarifications to our sexual harassment policy and sharing more about a new AutoMod category designed to flag chat messages that may contain sexual harassment."

"While our policy remains largely unchanged, these updates are designed to make the policy easier to understand," the team explained.


Roblox reported over 13,000 incidents to the National Center for Missing and Exploited Children in 2023

Roblox reported over 13,000 incidents of child exploitation to the National Center for Missing and Exploited Children in 2023, with around 24 predators arrested for grooming and abusing victims on the hugely popular social game platform in the US.

That's up from 3,000 the year before.

Roblox serves around 77 million players every day, 40% of whom are under the age of 13. And as it's available on PlayStation, PC, and a host of mobile devices, it's extraordinarily accessible for children, too.


  • ✇IEEE Spectrum
  • A New Type of Neural Network Is More Interpretable, by Matthew Hutson

A New Type of Neural Network Is More Interpretable

5 August 2024, 17:00


Artificial neural networks—algorithms inspired by biological brains—are at the center of modern artificial intelligence, behind both chatbots and image generators. But with their many neurons, they can be black boxes, their inner workings uninterpretable to users.

Researchers have now created a fundamentally new way to make neural networks that in some ways surpasses traditional systems. These new networks are more interpretable and also more accurate, proponents say, even when they’re smaller. Their developers say the way they learn to represent physics data concisely could help scientists uncover new laws of nature.

“It’s great to see that there is a new architecture on the table.” —Brice Ménard, Johns Hopkins University

For the past decade or more, engineers have mostly tweaked neural-network designs through trial and error, says Brice Ménard, a physicist at Johns Hopkins University who studies how neural networks operate but was not involved in the new work, which was posted on arXiv in April. “It’s great to see that there is a new architecture on the table,” he says, especially one designed from first principles.

One way to think of neural networks is by analogy with neurons, or nodes, and synapses, or connections between those nodes. In traditional neural networks, called multi-layer perceptrons (MLPs), each synapse learns a weight—a number that determines how strong the connection is between those two neurons. The neurons are arranged in layers, such that a neuron from one layer takes input signals from the neurons in the previous layer, weighted by the strength of their synaptic connection. Each neuron then applies a simple function to the sum total of its inputs, called an activation function.
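To make that distinction concrete, here is a minimal illustrative sketch of one MLP layer in plain NumPy (a sketch for exposition, not code from the paper): every synapse is a single learned number, and every neuron applies the same fixed activation function to the weighted sum of its inputs.

    import numpy as np

    def mlp_layer(x, W, b):
        """One multi-layer-perceptron layer: a fixed activation of a weighted sum.
        x: inputs, shape (n_in,); W: learned weights, shape (n_out, n_in); b: biases, shape (n_out,).
        """
        z = W @ x + b                  # each neuron sums its weighted inputs
        return np.maximum(z, 0.0)      # fixed activation (ReLU here), identical for every neuron

    # Example: 3 inputs feeding 2 neurons
    rng = np.random.default_rng(0)
    x = rng.normal(size=3)
    W = rng.normal(size=(2, 3))
    b = np.zeros(2)
    print(mlp_layer(x, W, b))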

In traditional neural networks, sometimes called multi-layer perceptrons [left], each synapse learns a number called a weight, and each neuron applies a simple function to the sum of its inputs. In the new Kolmogorov-Arnold architecture [right], each synapse learns a function, and the neurons sum the outputs of those functions. (Diagram: The NSF Institute for Artificial Intelligence and Fundamental Interactions)

In the new architecture, the synapses play a more complex role. Instead of simply learning how strong the connection between two neurons is, they learn the full nature of that connection—the function that maps input to output. Unlike the activation function used by neurons in the traditional architecture, this function could be more complex—in fact a “spline” or combination of several functions—and is different in each instance. Neurons, on the other hand, become simpler—they just sum the outputs of all their preceding synapses. The new networks are called Kolmogorov-Arnold Networks (KANs), after two mathematicians who studied how functions could be combined. The idea is that KANs would provide greater flexibility when learning to represent data, while using fewer learned parameters.
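By contrast, a rough sketch of a Kolmogorov-Arnold-style layer might look like the following (again purely illustrative, not the authors' implementation; the Gaussian basis here is a stand-in for the splines they use): each edge carries its own learnable one-dimensional function, parameterized by coefficients over a small fixed basis, and each neuron simply sums its incoming edge outputs.

    import numpy as np

    def edge_basis(t, centers, width=0.5):
        """Fixed 1-D basis functions (Gaussian bumps) used to parameterize each edge's function."""
        return np.exp(-((t - centers) / width) ** 2)        # shape (n_basis,)

    def kan_layer(x, C, centers):
        """One KAN-style layer: a learnable function per edge, a plain sum per neuron.
        x: inputs, shape (n_in,); C: learned coefficients, shape (n_out, n_in, n_basis)."""
        n_out, n_in, _ = C.shape
        out = np.zeros(n_out)
        for j in range(n_out):
            for i in range(n_in):
                phi = edge_basis(x[i], centers)             # evaluate the basis at this input
                out[j] += C[j, i] @ phi                     # the edge's learned function of x[i]
        return out                                          # neurons just add up edge outputs

    rng = np.random.default_rng(0)
    x = rng.normal(size=3)
    centers = np.linspace(-2.0, 2.0, 5)
    C = rng.normal(size=(2, 3, 5)) * 0.1
    print(kan_layer(x, C, centers))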

“It’s like an alien life that looks at things from a different perspective but is also kind of understandable to humans.” —Ziming Liu, Massachusetts Institute of Technology

The researchers tested their KANs on relatively simple scientific tasks. In some experiments, they took simple physical laws, such as the velocity with which two relativistic-speed objects pass each other. They used these equations to generate input-output data points, then, for each physics function, trained a network on some of the data and tested it on the rest. They found that increasing the size of KANs improves their performance at a faster rate than increasing the size of MLPs did. When solving partial differential equations, a KAN was 100 times as accurate as an MLP that had 100 times as many parameters.

In another experiment, they trained networks to predict one attribute of topological knots, called their signature, based on other attributes of the knots. An MLP achieved 78 percent test accuracy using about 300,000 parameters, while a KAN achieved 81.6 percent test accuracy using only about 200 parameters.

What’s more, the researchers could visually map out the KANs and look at the shapes of the activation functions, as well as the importance of each connection. Either manually or automatically they could prune weak connections and replace some activation functions with simpler ones, like sine or exponential functions. Then they could summarize the entire KAN in an intuitive one-line function (including all the component activation functions), in some cases perfectly reconstructing the physics function that created the dataset.

“In the future, we hope that it can be a useful tool for everyday scientific research,” says Ziming Liu, a computer scientist at the Massachusetts Institute of Technology and the paper’s first author. “Given a dataset we don’t know how to interpret, we just throw it to a KAN, and it can generate some hypothesis for you. You just stare at the brain [the KAN diagram] and you can even perform surgery on that if you want.” You might get a tidy function. “It’s like an alien life that looks at things from a different perspective but is also kind of understandable to humans.”

Dozens of papers have already cited the KAN preprint. “It seemed very exciting the moment that I saw it,” says Alexander Bodner, an undergraduate student of computer science at the University of San Andrés, in Argentina. Within a week, he and three classmates had combined KANs with convolutional neural networks, or CNNs, a popular architecture for processing images. They tested their Convolutional KANs on their ability to categorize handwritten digits or pieces of clothing. The best one approximately matched the performance of a traditional CNN (99 percent accuracy for both networks on digits, 90 percent for both on clothing) but using about 60 percent fewer parameters. The datasets were simple, but Bodner says other teams with more computing power have begun scaling up the networks. Other people are combining KANs with transformers, an architecture popular in large language models.

One downside of KANs is that they take longer per parameter to train—in part because they can’t take advantage of GPUs. But they need fewer parameters. Liu notes that even if KANs don’t replace giant CNNs and transformers for processing images and language, training time won’t be an issue at the smaller scale of many physics problems. He’s looking at ways for experts to insert their prior knowledge into KANs—by manually choosing activation functions, say—and to easily extract knowledge from them using a simple interface. Someday, he says, KANs could help physicists discover high-temperature superconductors or ways to control nuclear fusion.

  • ✇Ars Technica - All content
  • Elon Musk sues OpenAI, Sam Altman for making a “fool” out of him, by Ashley Belanger

Elon Musk sues OpenAI, Sam Altman for making a “fool” out of him

5 August 2024, 19:49
Elon Musk and Sam Altman share the stage in 2015, the same year that Musk alleged that Altman's "deception" began. (Credit: Michael Kovac / Contributor | Getty Images North America)

After withdrawing his lawsuit in June for unknown reasons, Elon Musk has revived a complaint accusing OpenAI and its CEO Sam Altman of fraudulently inducing Musk to contribute $44 million in seed funding by promising that OpenAI would always open-source its technology and prioritize serving the public good over profits as a permanent nonprofit.

Instead, Musk alleged that Altman and his co-conspirators—"preying on Musk’s humanitarian concern about the existential dangers posed by artificial intelligence"—always intended to "betray" these promises in pursuit of personal gains.

As OpenAI's technology advanced toward artificial general intelligence (AGI) and strove to surpass human capabilities, "Altman set the bait and hooked Musk with sham altruism then flipped the script as the non-profit’s technology approached AGI and profits neared, mobilizing Defendants to turn OpenAI, Inc. into their personal piggy bank and OpenAI into a moneymaking bonanza, worth billions," Musk's complaint said.


  • ✇Techdirt
  • Ctrl-Alt-Speech: I Bet You Think This Block Is About You, by Leigh Beadon

Ctrl-Alt-Speech: I Bet You Think This Block Is About You

3 August 2024, 00:14

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

  • Jim Jordan Demands Advertisers Explain Why They Don’t Advertise On MAGA Media Sites (Techdirt)

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and by our sponsor Discord. In our Bonus Chat at the end of the episode, Mike speaks to Juliet Shen and Camille Francois about the Trust & Safety Tooling Consortium at Columbia School of International and Public Affairs, and the importance of open source tools for trust and safety.

  • ✇Techdirt
  • Jim Jordan Demands Advertisers Explain Why They Don’t Advertise On MAGA Media Sites, by Mike Masnick

Jim Jordan Demands Advertisers Explain Why They Don’t Advertise On MAGA Media Sites

2 August 2024, 18:20

Remember last month when ExTwitter excitedly “rejoined GARM” (the Global Alliance for Responsible Media, an advertising consortium focused on brand safety)? And then, a week later, after Rep. Jim Jordan released a misleading report about GARM, Elon Musk said he was going to sue GARM and hoped criminal investigations would be opened?

Unsurprisingly, Jordan has now ratcheted things up a notch by sending investigative demands to a long list of top advertisers associated with GARM. The letter effectively accuses these advertisers of antitrust violations for choosing not to advertise on conservative media sites, based on GARM’s recommendations on how to best protect brand safety.

The link there shows all the letters, but we’ll just stick with the first one, to Adidas. The letter doesn’t make any demands specifically about ExTwitter, but does name the GOP’s favorite media sites, and demands to know whether any of these advertisers agreed not to advertise on those properties. In short, this is an elected official demanding to know why a private company chose not to give money to media sites that support that elected official:

Was Adidas Group aware of the coordinated actions taken by GARM toward news outlets and podcasts such as The Joe Rogan Experience, The Daily Wire, Breitbart News, or Fox News, or other conservative media? Does Adidas Group support GARM’s coordinated actions toward these news outlets and podcasts?

Jordan is also demanding all sorts of documents and answers to questions. He is suggesting strongly that GARM’s actions (presenting ways that advertisers might avoid, say, having their brands show up next to neo-Nazi content) were a violation of antitrust law.

This is all nonsense. First of all, choosing not to advertise somewhere is protected by the First Amendment. And there are good fucking reasons not to advertise on media properties most closely associated with nonsense peddling, extremist culture wars, and just general stupidity.

Even more ridiculous is that the letter cites NAACP v. Claiborne Hardware, which is literally the Supreme Court case that establishes that group boycotts are protected speech. It’s the case that says not supporting a business for the purpose of protest, while economic activity, is still protected speech and can’t be regulated by the government (and it’s arguable whether what GARM does is even a boycott at all).

As the Court noted, in holding that organizing a boycott was protected by the First Amendment:

The First Amendment similarly restricts the ability of the State to impose liability on an individual solely because of his association with another.

But, of course, one person who is quite excited is Elon Musk. He quote tweeted (they’re still tweets, right?) the House Judiciary’s announcement of the demands with a popcorn emoji.

So, yeah. Mr. “Free Speech Absolutist,” who claims the Twitter files show unfair attempts by governments to influence speech, now supports the government trying to pressure brands into advertising on certain media properties. It’s funny how the “free speech absolutist” keeps throwing the basic, fundamental principles of free speech out the window the second he doesn’t like the results.

That’s not supporting free speech at all. But, then again, for Elon to support free speech, he’d first have to learn what it means, and he’s shown no inclination of ever doing that.

  • ✇Ars Technica - All content
  • DOJ sues TikTok, alleging “massive-scale invasions of children’s privacy”, by Ashley Belanger

DOJ sues TikTok, alleging “massive-scale invasions of children’s privacy”

2 August 2024, 23:26
(Image credit: NurPhoto / Contributor | NurPhoto)

The US Department of Justice sued TikTok today, accusing the short-video platform of illegally collecting data on millions of kids and demanding a permanent injunction "to put an end to TikTok’s unlawful massive-scale invasions of children’s privacy."

The DOJ said that TikTok had violated the Children’s Online Privacy Protection Act of 1998 (COPPA) and the Children’s Online Privacy Protection Rule (COPPA Rule), claiming that TikTok allowed kids "to create and access accounts without their parents’ knowledge or consent," collected "data from those children," and failed to "comply with parents’ requests to delete their children’s accounts and information."

The COPPA Rule requires TikTok to prove that it does not target kids as its primary audience, the DOJ said, and TikTok claims to satisfy that "by requiring users creating accounts to report their birthdates."


  • ✇Ars Technica - All content
  • Sam Altman accused of being shady about OpenAI’s safety efforts, by Ashley Belanger

Sam Altman accused of being shady about OpenAI’s safety efforts

2 August 2024, 20:08
Sam Altman, chief executive officer of OpenAI, during an interview at Bloomberg House on the opening day of the World Economic Forum (WEF) in Davos, Switzerland, on Tuesday, Jan. 16, 2024. (Credit: Bloomberg / Contributor | Bloomberg)

OpenAI is facing increasing pressure to prove it's not hiding AI risks after whistleblowers alleged to the US Securities and Exchange Commission (SEC) that the AI company's non-disclosure agreements had illegally silenced employees from disclosing major safety concerns to lawmakers.

In a letter to OpenAI yesterday, Senator Chuck Grassley (R-Iowa) demanded evidence that OpenAI is no longer requiring agreements that could be "stifling" its "employees from making protected disclosures to government regulators."

Specifically, Grassley asked OpenAI to produce current employment, severance, non-disparagement, and non-disclosure agreements to reassure Congress that contracts don't discourage disclosures. That's critical, Grassley said, so that it will be possible to rely on whistleblowers exposing emerging threats to help shape effective AI policies safeguarding against existential AI risks as technologies advance.


  • ✇IEEE Spectrum
  • Autonomous Vehicles Are Great at Driving Straight, by Matthew S. Smith

Autonomous Vehicles Are Great at Driving Straight

18 June 2024, 18:10


Autonomous vehicles (AVs) have made headlines in recent months, though often for all the wrong reasons. Cruise, Waymo, and Tesla are all under U.S. federal investigation for a variety of accidents, some of which caused serious injury or death.

A new paper published in Nature puts numbers to the problem. Its authors analyzed over 37,000 accidents involving autonomous and human-driven vehicles to gauge risk across several accident scenarios. The paper reports AVs were generally less prone to accidents than those driven by humans, but significantly underperformed humans in some situations.

“The conclusion may not be surprising given the technological context,” said Shengxuan Ding, an author on the paper. “However, challenges remain under specific conditions, necessitating advanced algorithms and sensors and updates to infrastructure to effectively support AV technology.”

The paper, authored by two researchers at the University of Central Florida, analyzed data from 2,100 accidents involving advanced driving systems (SAE Level 4) and advanced driver-assistance systems (SAE Level 2) alongside 35,113 accidents involving human-driven vehicles. The study pulled from publicly available data on human-driven vehicle accidents in the state of California and the AVOID autonomous vehicle operation incident dataset, which the authors made public last year.

While the breadth of the paper’s data is significant, the paper’s “matched case-control analysis” is what sets it apart. Autonomous and human-driven vehicles tend to encounter different roads in different conditions, which can skew accident data. The paper categorizes risks by the variables surrounding the accident, such as whether the vehicle was moving straight or turning, and the conditions of the road and weather.
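As a toy illustration of what such a matched comparison yields, the risk within a single matched scenario can be summarized as an odds ratio comparing AVs with human-driven vehicles; the counts below are hypothetical and only show the arithmetic, not the paper's data.

    def odds_ratio(av_accidents, av_no_accident, hv_accidents, hv_no_accident):
        """Odds ratio for AVs vs. human-driven vehicles within one matched scenario.
        Values below 1.0 mean AVs have lower accident odds in that scenario."""
        return (av_accidents / av_no_accident) / (hv_accidents / hv_no_accident)

    # Hypothetical counts for one matched condition (say, turning at an intersection in clear weather):
    print(odds_ratio(av_accidents=12, av_no_accident=988,
                     hv_accidents=7, hv_no_accident=993))   # ~1.7, i.e. higher odds for AVs while turning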

Level 4 self-driving vehicles were roughly 36 percent less likely to be involved in moderate injury accidents and 90 percent less likely to be involved in a fatal accident.

SAE Level 4 self-driving vehicles (those capable of full self-driving without a human at the wheel) performed especially well by several metrics. They were roughly 36 percent less likely to be involved in moderate injury accidents and 90 percent less likely to be involved in a fatal accident. Compared to human-driven vehicles, the risk of rear-end collision was roughly halved, and the risk of a broadside collision was roughly one-fifth. Level 4 AVs were close to one-fiftieth as likely to run off the road.

Table: the paper’s findings are generally favorable for Level 4 AVs, but they perform worse in turns and at dawn and dusk. (Comparison of Level 4 autonomous vehicles with human-driven vehicles; credit: Nature)

These figures look good for AVs. However, Missy Cummings, director of George Mason University’s Autonomy and Robotics Center and former safety advisor for the National Highway Traffic Safety Administration, was skeptical of the findings.

“The ground rules should be that when you analyze AV accidents, you cannot combine accidents with self-driving cars [SAE Level 4] with the accidents of Teslas [SAE Level 2],” said Cummings. She took issue with discussing them in tandem and pointed out that these categories of vehicles operate differently—so much so that Level 4 AVs aren’t legal in every state, while Level 2 AVs are.

Mohamed Abdel-Aty, an author on the paper and director of the Smart & Safe Transportation Lab at the University of Central Florida, said that while the paper touches on both levels of autonomy, the focus was on Level 4 autonomy. “The model which is the main contribution to this research compared only level 4 to human-driven vehicles,” he said.

And while many findings were generally positive, the authors highlighted two significant negative outcomes for Level 4 AVs. The study found they were over five times more likely to be involved in an accident at dawn and dusk. They were relatively bad at navigating turns as well, with the odds of an accident during a turn almost doubled compared to those for human-driven vehicles.

More data required for AVs to be “reassuring”

The study’s findings of higher accident rates during turns and in unusual lighting conditions highlight two major categories of challenges facing self-driving vehicles: intelligence and data.

J. Christian Gerdes, codirector of the Center for Automotive Research at Stanford University, said turning through traffic is among the most demanding situations for an AV’s artificial intelligence. “That decision is based a lot on the actions of other road users around you, and you’re going to make the choice based on what you predict.”

Cummings agreed with Gerdes. “Any time uncertainty increases [for an AV], you’re going to see an increased risk of accident. Just by the fact you’re turning, that increases uncertainty, and increases risk.”

AVs’ dramatically higher risk of accidents at dawn and dusk, on the other hand, points towards issues with the data captured by a vehicle’s sensors. Most AVs use a combination of radar and visual sensor systems, and the latter is prone to error in difficult lighting.

It’s not all bad news for sensors, though. Level 4 AVs were drastically better in rain and fog, which suggests that the presence of radar and lidar systems gives AVs an advantage in weather conditions that reduce visibility. Gerdes also said AVs, unlike humans, don’t tire or become distracted when driving through weather that requires more vigilance.

While the paper found AVs have a lower risk of accident overall, that doesn’t mean they’ve passed the checkered flag. Gerdes said poor performance in specific scenarios is meaningful and should rightfully make human passengers uncomfortable.

“It’s hard to make the argument that [AVs] are so much safer driving straight, but if [they] get into other situations, they don’t do as well. People will not find that reassuring,” said Gerdes.

The relative lack of data for Level 4 systems is another barrier. Level 4 AVs make up a tiny fraction of all vehicles on the road and only operate in specific areas. AVs are also packed with sensors and driven by an AI system that may make decisions for a variety of reasons that remain opaque in accident data.

While the paper accounts for the low total number of accidents in its statistical analysis, the authors acknowledge more data is necessary to determine the precise cause of accidents, and hope their findings will encourage others to assist. “I believe one of the benefits of this study is to draw the attention of authorities to the need for better data,” said Ding.

On that, Cummings agreed. “We do not have enough information to make sweeping statements,” she said.

  • ✇Ars Technica - All content
  • Pornhub prepares to block five more states rather than check IDs, by Ashley Belanger

Pornhub prepares to block five more states rather than check IDs

20 June 2024, 22:33
(Image credit: Aurich Lawson | Getty Images)

Pornhub will soon be blocked in five more states as the adult site continues to fight what it considers privacy-infringing age-verification laws that require Internet users to provide an ID to access pornography.

On July 1, according to a blog post on the adult site announcing the impending block, Pornhub visitors in Indiana, Idaho, Kansas, Kentucky, and Nebraska will be "greeted by a video featuring" adult entertainer Cherie Deville, "who explains why we had to make the difficult decision to block them from accessing Pornhub."

Pornhub explained that—similar to blocks in Texas, Utah, Arkansas, Virginia, Montana, North Carolina, and Mississippi—the site refuses to comply with soon-to-be-enforceable age-verification laws in this new batch of states that allegedly put users at "substantial risk" of identity theft, phishing, and other harms.


  • ✇Liliputing
  • Lilbits: Oh the Humane-ity: AI startup is hoping HP will buy it for $1 billion, by Brad Linder

Lilbits: Oh the Humane-ity: AI startup is hoping HP will buy it for $1 billion

6 June 2024, 23:06

The Humane Ai Pin was supposed to be the first in a new category of wearable, AI-first devices. But it arrived this year to universally awful reviews citing its limited functionality, spotty reliability, awful battery life, and high price, just to name a few problems. Last month we learned that Humane was looking to sell […]

The post Lilbits: Oh the Humane-ity: AI startup is hoping HP will buy it for $1 billion appeared first on Liliputing.

  • ✇Ars Technica - All content
  • AI trained on photos from kids’ entire childhood without their consent, by Ashley Belanger

AI trained on photos from kids’ entire childhood without their consent

11 June 2024, 00:37
(Image credit: RicardoImagen | E+)

Photos of Brazilian kids—sometimes spanning their entire childhood—have been used without their consent to power AI tools, including popular image generators like Stable Diffusion, Human Rights Watch (HRW) warned on Monday.

This act poses urgent privacy risks to kids and seems to increase risks of non-consensual AI-generated images bearing their likenesses, HRW's report said.

An HRW researcher, Hye Jung Han, helped expose the problem. She analyzed "less than 0.0001 percent" of LAION-5B, a dataset built from Common Crawl snapshots of the public web. The dataset does not contain the actual photos but includes image-text pairs derived from 5.85 billion images and captions posted online since 2008.


Anker recalls a popular power bank for potential fire risk

10 June 2024, 14:07
Anker 321 Power Bank recall

Have a portable Anker charger gathering dust in your drawer? It might be time to dig it out and check the model number. The company ...

The post Anker recalls a popular power bank for potential fire risk appeared first on Gizchina.com.

  • ✇Semiconductor Engineering
  • Sensor Fusion Challenges In Automotive, by Ed Sperling

Sensor Fusion Challenges In Automotive

2 May 2024, 09:15

The number of sensors in automobiles is growing rapidly alongside new safety features and increasing levels of autonomy. The challenge is integrating them in a way that makes sense, because these sensors are optimized for different types of data, sometimes with different resolution requirements even for the same type of data, and frequently with very different latency, power consumption, and reliability requirements. Pulin Desai, group director for product marketing, management and business development at Cadence, talks about challenges with sensor fusion, the growing importance of four-dimensional sensing, what’s needed to future-proof sensor designs, and the difficulty of integrating one or more software stacks with conflicting requirements.

The post Sensor Fusion Challenges In Automotive appeared first on Semiconductor Engineering.

  • ✇Semiconductor Engineering
  • Framework For Early Anomaly Detection In AMS Components Of Automotive SoCs (technical paper link)

Framework For Early Anomaly Detection In AMS Components Of Automotive SoCs

A technical paper titled “Enhancing Functional Safety in Automotive AMS Circuits through Unsupervised Machine Learning” was published by researchers at University of Texas at Dallas, Intel Corporation, NXP Semiconductors, and Texas Instruments.

Abstract:

“Given the widespread use of safety-critical applications in the automotive field, it is crucial to ensure the Functional Safety (FuSa) of circuits and components within automotive systems. The Analog and Mixed-Signal (AMS) circuits prevalent in these systems are more vulnerable to faults induced by parametric perturbations, noise, environmental stress, and other factors, in comparison to their digital counterparts. However, their continuous signal characteristics present an opportunity for early anomaly detection, enabling the implementation of safety mechanisms to prevent system failure. To address this need, we propose a novel framework based on unsupervised machine learning for early anomaly detection in AMS circuits. The proposed approach involves injecting anomalies at various circuit locations and individual components to create a diverse and comprehensive anomaly dataset, followed by the extraction of features from the observed circuit signals. Subsequently, we employ clustering algorithms to facilitate anomaly detection. Finally, we propose a time series framework to enhance and expedite anomaly detection performance. Our approach encompasses a systematic analysis of anomaly abstraction at multiple levels pertaining to the automotive domain, from hardware- to block-level, where anomalies are injected to create diverse fault scenarios. By monitoring the system behavior under these anomalous conditions, we capture the propagation of anomalies and their effects at different abstraction levels, thereby potentially paving the way for the implementation of reliable safety mechanisms to ensure the FuSa of automotive SoCs. Our experimental findings indicate that our approach achieves 100% anomaly detection accuracy and significantly optimizes the associated latency by 5X, underscoring the effectiveness of our devised solution.”
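The paper’s own pipeline is not reproduced here, but a minimal sketch of the unsupervised clustering step it describes could look like the following, using scikit-learn on hypothetical feature vectors extracted from AMS circuit signals (the feature choices and fault values are assumptions made purely for illustration).

    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import DBSCAN

    # Hypothetical features per observation window (e.g. mean level, ripple amplitude, settling time)
    rng = np.random.default_rng(42)
    nominal = rng.normal(loc=[1.0, 0.05, 2.0], scale=0.02, size=(200, 3))   # nominal circuit behavior
    injected = rng.normal(loc=[1.3, 0.20, 3.5], scale=0.05, size=(10, 3))   # injected anomalies
    features = np.vstack([nominal, injected])

    # Density-based clustering: points that fit no dense cluster (label -1) are flagged
    # as anomalous, with no labeled training data required.
    X = StandardScaler().fit_transform(features)
    labels = DBSCAN(eps=0.8, min_samples=5).fit_predict(X)
    flagged = np.where(labels == -1)[0]
    print(f"{len(flagged)} observations flagged as potential anomalies")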

Find the technical paper here. Published April 2024 (preprint).

Arunachalam, Ayush, Ian Kintz, Suvadeep Banerjee, Arnab Raha, Xiankun Jin, Fei Su, Viswanathan Pillai Prasanth, Rubin A. Parekhji, Suriyaprakash Natarajan, and Kanad Basu. “Enhancing Functional Safety in Automotive AMS Circuits through Unsupervised Machine Learning.” arXiv preprint arXiv:2404.01632 (2024).

Related Reading
Creating IP In The Shadow Of ISO 26262
Automotive regulations can turn an interesting chip design project into a complex and often frustrating checklist exercise. In the case of ISO 26262, that includes a 12-part standard for automotive safety.
Shifting Left Using Model-Based Engineering
MBSE becomes useful for identifying potential problems earlier in the design flow, but it’s not perfect.

 

The post Framework For Early Anomaly Detection In AMS Components Of Automotive SoCs appeared first on Semiconductor Engineering.

Pro-Cop Coalition With No Web Presence Pitches Report Claiming Criminal Justice Reforms Are To Blame For Higher Crime Rates

3 May 2024, 00:50

Because it sells so very well to a certain percentage of the population, ridiculous people are saying ridiculous things about crime rates in the United States. And, of course, the first place to post this so-called “news” is Fox News.

An independent group of law enforcement officials and analysts claim violent crime rates are much higher than figures reported by the Federal Bureau of Investigation in its 2023 violent crime statistics.

The Coalition for Law Order and Safety released its April 2024 report called “Assessing America’s Crime Crises: Trends, Causes, and Consequences,” and identified four potential causes for the increase in crime in most major cities across the U.S.: de-policing, de-carceration, de-prosecution and politicization of the criminal justice system. 

This plays well with the Fox News audience, many of whom are very sure there needs to be a whole lot more law and order, just so long as it doesn’t affect people who literally RAID THE CAPITOL BUILDING IN ORDER TO PREVENT A PEACEFUL TRANSFER OF PRESIDENTIAL POWER FROM HAPPENING.

These people like to hear the nation is in the midst of a criminal apocalypse because it allows them to be even nastier to minorities and even friendlier to cops (I mean, right up until they physically assault them for daring to stand between them and the inner halls of the Capitol buildings).

It’s not an “independent group.” In fact, it’s a stretch to claim there’s anything approaching actual “analysis” in this “report.” This is pro-cop propaganda pretending to be an actual study — one that expects everyone to be impressed by the sheer number of footnotes.

Here’s the thing about the Coalition for Law Order and Safety. Actually, here’s a few things. First off, the name is bad and its creators should feel bad. The fuck does “Law Order” actually mean, with or without the context of the alleged coalition’s entire name?

Second, this “coalition” has no web presence. Perhaps someone with stronger Googling skills may manage to run across a site run by this “coalition,” but multiple searches using multiple parameters have failed to turn up anything that would suggest this coalition exists anywhere outside of the title page of its report [PDF].

Here’s what we do know about this “coalition:” it contains, at most, two coalitioners (sp?). Those would be Mark Morgan, former assistant FBI director and, most recently, the acting commissioner of CBP (Customs and Border Protection) during Trump’s four-year stretch of abject Oval Office failure. (He’s also hooked up with The Federalist and The Heritage Foundation.) The other person is Sean Kennedy, who is apparently an attorney for the “Law Enforcement Legal Defense Fund.” (He also writes for The Federalist.)

At least that entity maintains a web presence. And, as can be assumed by its name, it spends a lot of its time and money ensuring bad cops keep their jobs and fighting against anything that might resemble transparency or accountability. (The press releases even contain exclamation points!)

This is what greets visitors to the Law Enforcement Legal Defense Fund website:

Yep, it’s yet another “George Soros is behind whatever we disagree with” sales pitch. Gotta love a pro-cop site that chooses to lead off with a little of the ol’ anti-antisemitism. This follows shortly after:

Well, duh. But maybe the LELDF should start asking the cops it represents and defends why they’re not doing their jobs. And let’s ask ourselves why we’re paying so much for a public service these so-called public servants have decided they’re just not going to do anymore, even though they’re still willing to collect the paychecks.

We could probably spend hours just discussing these two screenshots and their combination of dog whistles, but maybe we should just get to the report — written by a supposed “coalition,” but reading more like an angry blog post by the only two people actually willing to be named in the PDF.

There are only two aspects of this report that I agree with. First, the “coalition” (lol) is correct that the FBI’s reported crime rates are, at best, incomplete. The FBI recently changed the way it handles crime reporting, which has introduced plenty of clerical issues that numerous law enforcement agencies are still adjusting to.

Participation has been extremely low due to the learning curve, as well as a general reluctance to share pretty much anything with the public. On top of that, the coding of crimes has changed, which means the FBI is still receiving a blend of old reporting and adding that to new reporting that follows the new nomenclature. As a result, there’s a blend of old and new that potentially muddies crime stats and may result in an inaccurate picture of crime rates across the nation.

The other thing I agree with is the “coalition’s” assertion that criminal activity is under-reported. What I don’t agree with is the cause of this issue, which the copagandists chalk up to “progressive prosecutors” being unwilling to prosecute some crimes and/or bail reform programs making crime consequence-free. I think the real issue is that the public knows how cops will respond to most reported crimes and realizes it’s a waste of their time to report crimes to entities that have gotten progressively worse at solving crime, even as their budget demands and tech uptake continue to increase.

Law enforcement is a job and an extension of government bureaucracy. Things that aren’t easy or flashy just aren’t going to get done. It’s not just a cop problem. It persists anywhere people are employed and (perhaps especially) where people are employed to provide public services to taxpayers.

Those agreements aside, the rest of the report is pure bullshit. It cherry-picks stats, selectively quotes other studies that agree with its assertions, and delivers a bunch of conclusory statements that simply aren’t supported by the report’s contents.

And it engages in the sort of tactics no serious report or study would attempt. It places its conclusions at the beginning of the report, surrounds them with black boxes to highlight the authors’ claims, and tags them (hilariously) as “facts.”

Here’s what the authors claim to be facts:

FACT #1: America faces a public safety crisis beset by high crime and an increasingly dysfunctional justice system.

First off, the “public safety crisis” does not exist. Neither does “high crime.” Even if we agree with the authors’ assertions, the crime rates in this country are only slightly above the historical lows we’ve enjoyed for most of the 21st century. It is nowhere near what it used to be, even if (and I’m ceding this ground for the sake of my argument) we’re seeing spikes in certain locations around the country. (I’ll also grant them the “dysfunctional justice system” argument, even though my definition of dysfunction isn’t aligned with theirs. The system is broken and has been for a long time.)

FACT #2: Crime has risen dramatically over the past few years and may be worse than some official statistics claim.

“Dramatically” possibly as in year-over-year in specific areas. “Dramatically” over the course of the past decades? It’s actually still in decline, even given the occasional uptick.

FACT #3: Although preliminary 2023 data shows a decline in many offenses, violent and serious crime remains at highly elevated levels compared to 2019.

Wow, that sounds furious! I wonder what it signifies…? First, the authors admit crime is down, but then they insist crime is actually up, especially when compared to one specific waypoint on the continuum of crime statistics. Man, I’ve been known to cherry-pick stats to back up my assertions, but at least I’ve never (1) limited my cherry-picking to a single year, or (2) pretended my assertions were some sort of study or report backed by a “coalition” of “professionals” and “analysts.” Also: this assertion is pretty much, “This thing that just happened to me once yesterday is a disturbing trend!”

There’s more:

FACT #4: Less than 42% of violent crime and 33% of property crime victims reported the crime to law enforcement.

Even if true (and it probably isn’t), this says more about cops than it says about criminals. When people decide they’re not going to report these crimes, it’s not because they think the criminal justice system as a whole will fail them. It’s because they think the first responders (cops) will fail them. The most likely reason for less crime reporting is the fact that cops are objectively terrible at solving crimes, even the most violent ones.

FACT #5: The American people feel less safe than they did prior to 2020.

First, it depends on who you ask. And second, even if the public does feel this way, it’s largely because of “studies” like this one and “reporting” performed by Fox News and others who love to stoke the “crime is everywhere” fires because it makes it easier to sell anti-immigrant and anti-minority hatred. It has little, if anything, to do with actual crime rates. We’re twice as safe (at least!) as a nation than we were in the 1990s and yet most people are still convinced things are worse than they’ve ever been — a belief they carry from year to year like reverse amortization.

Then we get to the supposed “causes” of all the supposed “facts.” And that’s where it gets somehow stupider. The “coalition” claims this is the direct result of cops doing less cop work due to decreased morale, “political hostility” [cops aren’t a political party, yo], and “policy changes.” All I can say is: suck it up. Sorry the job isn’t the glorious joyride it used to be. Do your job or GTFO. Stop collecting paychecks while harming public safety just because the people you’ve alienated for years are pushing back. Even if this assertion is true (it isn’t), the problem is cops, not society or “politics.”

The authors also claim “decarceration” and “de-prosecution” are part of the problem. Bail reform efforts and prosecutorial discretion have led to fewer people being charged or held without bail. These are good things that are better for society in the long run. Destroying people’s lives simply because they’re suspected of committing a crime creates a destructive cycle that tends to encourage more criminal activity because non-criminal means of income are now that much farther out of reach.

You can tell this argument is bullshit because of who it cites in support of this so-called “finding.” It points to a study released by Paul Cassell and Richard Fowles entitled “Does Bail Reform Increase Crime?” According to the authors it does and that conclusion is supposedly supported by the data pulled from Cook County, Illinois, where bail reform efforts were implemented in 2019.

But the stats don’t back up the paper’s claims. The authors take issue with the county’s “community safety rate” calculations:

The Bail Reform Study reported figures for the number of defendants who “remained crime-free” in both the fifteen months before G.O. 18.8A and the fifteen months after—i.e., the number of defendants who were not charged in Cook County for another crime after their initial bail hearing date. Based on these data, the Study concluded that “considerable stability” existed in “community safety rates” comparing the pre- and post-implementation periods. Indeed, the Study highlighted “community safety rates” that were about the same (or even better) following G.O. 18.8A’s implementation. The Study reported, for example, that the “community safety rate” for male defendants who were released improved from 81.2% before to 82.5% after; and for female defendants, the community safety rate improved from 85.7% to 86.5%.66 Combining the male and female figures produces the result that the overall community safety rate improved from 81.8% before implementation of the changes to 83.0% after.

The authors say this rate is wrong. They argue that releasing more accused criminals resulted in more crime.

[T]he number of defendants released pretrial increased from 20,435 in the “before” period to 24,504 in the “after” period—about a 20% increase. So even though the “community safety rate” remained roughly stable (and even improved very slightly), the total number of crimes committed by pretrial releasees increased after G.O. 18.8A. In the fifteen months before G.O.18.8A, 20,435 defendants were released and 16,720 remained “crime-free”—and, thus, arithmetically (although this number is not directly disclosed in the Study), 3,715 defendants were charged with committing new crimes while they were released. In the fifteen months after G.O. 18.8A, 24,504 defendants were released, and 20,340 remained “crimefree”—and, thus, arithmetically, 4,164 defendants were charged with committing new crimes while they were released. Directly comparing the before and after numbers shows a clear increase from 3,715 defendants who were charged with committing new crimes before to 4,164 after—a 12% increase.

Even if, as the authors point out, more total crimes were committed after more total people were released (bailed out or with no bail set), the County’s assessment isn’t wrong. More people were released and the recidivism rate fell. Prior to G.O. 18.8A’s passage, the “crime-free” rate was 81.8%. After the implementation of bail reform, it was 83.0%. If we follow the authors to the conclusion they seem to feel is logical, the only way to prevent recidivism is to keep every arrestee locked up until their trial, no matter how minor the crime triggering the arrest.
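For anyone who wants to check the arithmetic, the figures quoted from the study above are enough (a quick back-of-the-envelope script, nothing more):

    # Figures quoted from the Cook County bail-reform study discussed above
    before_released, before_crime_free = 20_435, 16_720
    after_released, after_crime_free = 24_504, 20_340

    print(f"crime-free rate: {before_crime_free / before_released:.1%} -> "
          f"{after_crime_free / after_released:.1%}")                        # 81.8% -> 83.0%

    new_before = before_released - before_crime_free                         # 3,715
    new_after = after_released - after_crime_free                            # 4,164
    print(f"defendants charged with new crimes: {new_before} -> {new_after} "
          f"({new_after / new_before - 1:.0%} increase)")                     # 12% increase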

But that’s not how the criminal justice system is supposed to work. The authors apparently believe thousands of people who are still — in the eyes of the law — innocent (until proven guilty) should stay behind bars because the more people cut loose on bail (or freed without bail being set) increases the total number of criminal acts perpetrated.

Of course, we should expect nothing less. Especially not from Paul Cassell. Cassell presents himself as a “victim’s rights” hero. And while he has a lot to say about giving crime victims more rights than Americans who haven’t had the misfortune of being on the receiving end of a criminal act, he doesn’t have much to say about the frequent abuse of these laws by police officers who’ve committed violence against arrestees.

Not only that, but he’s the author of perhaps the worst paper ever written on the intersection of civil rights and American law enforcement. The title should give you a pretty good idea what you’re in for, but go ahead and give it a read if you feel like voluntarily angrying up your blood:

Still Handcuffing the Cops? A Review of Fifty Years of Empirical Evidence of Miranda’s Harmful Effects on Law Enforcement

Yep, that’s Cassell arguing that the Supreme Court forcing the government to respect Fifth Amendment rights is somehow a net loss for society and the beginning of a five-decade losing streak for law enforcement crime clearance rates.

So, you can see why an apparently imaginary “coalition” that supports “law order” would look to Cassell to provide back-up for piss poor assertions and even worse logic.

There’s plenty more that’s terrible in this so-called study from this so-called coalition. And I encourage you to give it a read because I’m sure there are things I missed that absolutely should be named and shamed in the comments.

But let’s take a look at one of my favorite things in this terrible waste of bits and bytes:

Concomitant with de-prosecution is a shift toward politicization of prosecutorial priorities at the cost of focusing on tackling rising crime and violent repeat offenders. Both local, state, and federal prosecutors have increasingly devoted a greater share of their finite, and often strained, resources to ideologically preferred or politically expedient cases. This approach has two primary and deleterious impacts – on public safety and on public faith in the impartiality of the justice system.

Under the tranche of recently elected progressive district attorneys, prosecutions of police officers have climbed dramatically and well before the death of George Floyd in May 2020, though they have since substantially accelerated.

Yep, that’s how cops see this: getting prosecuted is a “political” thing, as though being a cop was the same thing as being part of a political party. Cops like to imagine themselves as a group worthy of more rights. Unfortunately, lots of legislators agree with them. But trying to hold cops accountable is not an act of partisanship… or at least it shouldn’t be. It should just be the sort of thing all levels of law enforcement oversight strive for. But one would expect nothing more than this sort of disingenuousness from a couple of dudes who want to blame everyone but cops for the shit state the nation’s in (even if it actually isn’t).

  • ✇Latest
  • How California's Ban on Diesel Locomotives Could Have Major National RepercussionsVeronique de Rugy
    American federalism is struggling. Federal rules are an overwhelming presence in every state government, and some states, due to their size or other leverage, can impose their own policies on much or all of the country. The problem has been made clearer by an under-the-radar plan to phase out diesel locomotives in California. If the federal government provides the state with a helping hand, it would bring nationwide repercussions for a vital, ove
     

How California's Ban on Diesel Locomotives Could Have Major National Repercussions

2 May 2024 at 08:02
A diesel locomotive is seen in Mojave, California | DPST/Newscom

American federalism is struggling. Federal rules are an overwhelming presence in every state government, and some states, due to their size or other leverage, can impose their own policies on much or all of the country. The problem has been made clearer by an under-the-radar plan to phase out diesel locomotives in California. If the federal government provides the state with a helping hand, it would bring nationwide repercussions for a vital, overlooked industry.

Various industry and advocacy groups are lining up against California's costly measure, calling on the U.S. Environmental Protection Agency (EPA) to deny a waiver needed to fully implement it. In the past month, more than 30 leading conservative organizations and individuals, hundreds of state and local chambers of commerce, and the U.S. agricultural sector have pleaded with the EPA to help stop this piece of extremism from escaping one coastal state.

Railroads may not be something most Americans, whose attention is on their own cars and roads, think about often. But rail is the most basic infrastructure of interstate commerce, accounting for around 40 percent of long-distance ton-miles. It's also fairly clean, accounting for less than 1 percent of total U.S. emissions. Private companies, like Union Pacific in the West or CSX in the East, pay for their infrastructure and equipment. These facts haven't stopped the regulatory power grab.

Most importantly, the California Air Resources Board (CARB) regulation would have all freight trains operate in zero-emission configuration by 2035. At the end of the decade, the state is mandating the retirement of diesel locomotives 23 years or older, despite typical useful lives of over 40 years. Starting in 2030, new passenger locomotives must operate with zero emissions, with new engines for long-haul freight trains following by 2035. It limits locomotive idling and increases reporting requirements.

Given the interstate nature of railway operations, California needs the EPA to grant a waiver. If the agency agrees, the policy will inevitably affect the entire continental United States.

The kicker is that no technology exists today to enable railroads to comply with California's diktat, rendering the whole exercise fanciful at best.

The Wall Street Journal's editorial board explained last November that while Wabtec Corp. has introduced a pioneering advance in rail technology with the launch of the world's first battery-powered locomotive, the dream of a freight train fully powered by batteries remains elusive. The challenges of substituting diesel with batteries—primarily due to batteries' substantial weight and volume—make it an impractical solution for long-haul trains. Additionally, the risk of battery overheating and potential explosions, which can emit harmful gases, is a significant safety concern. As the editorial noted, "Even if the technology for zero-emission locomotives eventually arrives, railroads will have to test them over many years to guarantee their safety."

The cost-benefit analysis is woefully unfavorable to the forced displacement of diesel locomotives. To "help" the transition, beginning in 2026, CARB will force all railroads operating in California to deposit dollars into an escrow account managed by the state and frozen for the explicit pursuit of the green agenda. For large railroads, this figure will be a staggering $1.6 billion per year, whereas some smaller railroads will pay up to $5 million.

Many of these smaller companies have signaled that they will simply go out of business. For the large railroads, the requirement will lock up about 20 percent of annual spending, money typically used for maintenance and safety improvements.

Transportation is the largest source of U.S. emissions, yet railroads’ contribution amounts to not much more than a rounding error. The industry cites its efficiency improvements over time, allowing railroads today to move a ton of freight more than 500 miles on a single gallon of diesel. Its expensive machines, which last between 30 and 50 years and are retrofitted throughout their life cycles, are about 75 percent more efficient than long-haul trucks carrying a comparable amount of freight.

As Patricia Patnode of the Competitive Enterprise Institute, which signed the aforementioned letter to the EPA, recently remarked, "Rather than abolish diesel trains, CARB should stand in awe of these marvels of energy-efficient transportation."

President Joe Biden talks a lot about trains, but his actions since taking office have consistently punished the private companies we should value far more than state-supported Amtrak. In this case, EPA Administrator Michael Regan and the White House need not think too hard. They should wait for reality to catch up before imposing on the rest of us one state's demands and ambitions.

COPYRIGHT 2024 CREATORS.COM

The post How California's Ban on Diesel Locomotives Could Have Major National Repercussions appeared first on Reason.com.

The Truth About Leaving Your Phone Charger Plugged In

By: Abdullah
22 April 2024 at 13:48

Modern phone chargers, when functioning properly, pose minimal risk when left plugged in even without a device connected. However, understanding the implications of standby power ...

The post The Truth About Leaving Your Phone Charger Plugged In appeared first on Gizchina.com.

  • ✇IEEE Spectrum
  • Announcing a Benchmark to Improve AI SafetyMLCommons AI Safety Working Group
    One of the management guru Peter Drucker’s most over-quoted turns of phrase is “what gets measured gets improved.” But it’s over-quoted for a reason: It’s true. Nowhere is it truer than in technology over the past 50 years. Moore’s law—which predicts that the number of transistors (and hence compute capacity) in a chip would double every 24 months—has become a self-fulfilling prophecy and north star for an entire ecosystem. Because engineers carefully measured each generation of manufacturing te
     

Announcing a Benchmark to Improve AI Safety



One of the management guru Peter Drucker’s most over-quoted turns of phrase is “what gets measured gets improved.” But it’s over-quoted for a reason: It’s true.

Nowhere is it truer than in technology over the past 50 years. Moore’s law—which predicted that the number of transistors (and hence compute capacity) in a chip would double every 24 months—has become a self-fulfilling prophecy and north star for an entire ecosystem. Because engineers carefully measured each generation of manufacturing technology for new chips, they could select the techniques that would move toward the goals of faster and more capable computing. And it worked: Computing power, and more impressively computing power per watt or per dollar, has grown exponentially in the past five decades. The latest smartphones are more powerful than the fastest supercomputers from the year 2000.

Measurement of performance, though, is not limited to chips. All the parts of our computing systems today are benchmarked—that is, compared to similar components in a controlled way, with quantitative score assessments. These benchmarks help drive innovation.

And we would know.

As leaders in the field of AI, from both industry and academia, we build and deliver the most widely used performance benchmarks for AI systems in the world. MLCommons is a consortium that came together in the belief that better measurement of AI systems will drive improvement. Since 2018, we’ve developed performance benchmarks for systems that have shown more than 50-fold improvements in the speed of AI training. In 2023, we launched our first performance benchmark for large language models (LLMs), measuring the time it took to train a model to a particular quality level; within 5 months we saw repeatable results of LLMs improving their performance nearly threefold. Simply put, good open benchmarks can propel the entire industry forward.

We need benchmarks to drive progress in AI safety

Even as the performance of AI systems has raced ahead, we’ve seen mounting concern about AI safety. While AI safety means different things to different people, we define it as preventing AI systems from malfunctioning or being misused in harmful ways. For instance, AI systems without safeguards could be misused to support criminal activity such as phishing or creating child sexual abuse material, or could scale up the propagation of misinformation or hateful content. In order to realize the potential benefits of AI while minimizing these harms, we need to drive improvements in safety in tandem with improvements in capabilities.

We believe that if AI systems are measured against common safety objectives, those AI systems will get safer over time. However, how to robustly and comprehensively evaluate AI safety risks—and also track and mitigate them—is an open problem for the AI community.

Safety measurement is challenging because of the many different ways that AI models are used and the many aspects that need to be evaluated. And safety is inherently subjective, contextual, and contested—unlike with objective measurement of hardware speed, there is no single metric that all stakeholders agree on for all use cases. Often the tests and metrics that are needed depend on the use case. For instance, the risks that accompany an adult asking for financial advice are very different from the risks of a child asking for help writing a story. Defining “safety concepts” is the key challenge in designing benchmarks that are trusted across regions and cultures, and we’ve already taken the first steps toward defining a standardized taxonomy of harms.

A further problem is that benchmarks can quickly become irrelevant if not updated, which is challenging for AI safety given how rapidly new risks emerge and model capabilities improve. Models can also “overfit”: they do well on the benchmark data they use for training, but perform badly when presented with different data, such as the data they encounter in real deployment. Benchmark data can even end up (often accidentally) being part of models’ training data, compromising the benchmark’s validity.

Our first AI safety benchmark: the details

To help solve these problems, we set out to create a set of benchmarks for AI safety. Fortunately, we’re not starting from scratch—we can draw on knowledge from other academic and private efforts that came before. By combining best practices in the context of a broad community and a proven benchmarking non-profit organization, we hope to create a widely trusted standard approach that is dependably maintained and improved to keep pace with the field.

Our first AI safety benchmark focuses on large language models. We released a v0.5 proof-of-concept (POC) today, 16 April, 2024. This POC validates the approach we are taking towards building the v1.0 AI Safety benchmark suite, which will launch later this year.

What does the benchmark cover? We decided to first create an AI safety benchmark for LLMs because language is the most widely used modality for AI models. Our approach is rooted in the work of practitioners, and is directly informed by the social sciences. For each benchmark, we will specify the scope, the use case, persona(s), and the relevant hazard categories. To begin with, we are using a generic use case of a user interacting with a general-purpose chat assistant, speaking in English and living in Western Europe or North America.

There are three personas: malicious users, vulnerable users such as children, and typical users, who are neither malicious nor vulnerable. While we recognize that many people speak other languages and live in other parts of the world, we have pragmatically chosen this use case due to the prevalence of existing material. This approach means that we can make grounded assessments of safety risks, reflecting the likely ways that models are actually used in the real-world. Over time, we will expand the number of use cases, languages, and personas, as well as the hazard categories and number of prompts.

What does the benchmark test for? The benchmark covers a range of hazard categories, including violent crimes, child abuse and exploitation, and hate. For each hazard category, we test different types of interactions where models’ responses can create a risk of harm. For instance, we test how models respond to users telling them that they are going to make a bomb—and also users asking for advice on how to make a bomb, whether they should make a bomb, or for excuses in case they get caught. This structured approach means we can test more broadly for how models can create or increase the risk of harm.
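
To make that structured approach concrete, here is a minimal sketch, in Python, of how hazard categories, personas, and interaction types could be expanded into a test matrix. The category names, templates, and placeholder activities are invented for illustration; they are not the benchmark's actual taxonomy or prompt set.

```python
from itertools import product

# Hypothetical labels, templates, and placeholder activities; the real
# MLCommons taxonomy and prompt sets are not reproduced here.
hazard_categories = {
    "violent_crimes": ["<hazard-specific activity placeholder>"],
    "hate": ["<hazard-specific activity placeholder>"],
}
personas = ["malicious", "vulnerable", "typical"]

# Different interaction types probing the same underlying hazard: stating
# intent, asking how, asking whether, asking for a cover story.
interaction_templates = {
    "statement_of_intent": "I am going to {activity}.",
    "how_to_request": "How do I {activity}?",
    "should_i_request": "Should I {activity}?",
    "excuse_request": "What do I say if someone asks why I tried to {activity}?",
}

def build_test_matrix():
    """Expand (hazard, persona, interaction type) into concrete test prompts."""
    prompts = []
    for (hazard, activities), persona in product(hazard_categories.items(), personas):
        for activity in activities:
            for kind, template in interaction_templates.items():
                prompts.append({
                    "hazard": hazard,
                    "persona": persona,
                    "interaction": kind,
                    "prompt": template.format(activity=activity),
                })
    return prompts
```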

How do we actually test models? From a practical perspective, we test models by feeding them targeted prompts, collecting their responses, and then assessing whether they are safe or unsafe. Quality human ratings are expensive, often costing tens of dollars per response—and a comprehensive test set might have tens of thousands of prompts! A simple keyword- or rules-based rating system for evaluating the responses is affordable and scalable, but isn’t adequate when models’ responses are complex, ambiguous, or unusual. Instead, we’re developing a system that combines “evaluator models”—specialized AI models that rate responses—with targeted human rating to verify and augment these models’ reliability.
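
A rough sketch of that testing loop, assuming a hypothetical `generate()` method on the system under test and an `evaluator` object that returns a "safe"/"unsafe" label with a confidence score; neither interface is MLCommons code.

```python
def assess_model(prompts, system_under_test, evaluator, confidence_threshold=0.9):
    """Feed prompts to the model, rate each response with an evaluator model,
    and queue low-confidence ratings for targeted human review."""
    results, human_review_queue = [], []
    for item in prompts:
        response = system_under_test.generate(item["prompt"])          # assumed API
        label, confidence = evaluator.rate(item["prompt"], response)   # assumed API
        record = {**item, "response": response, "label": label, "confidence": confidence}
        if confidence < confidence_threshold:
            human_review_queue.append(record)  # humans verify/augment the evaluator
        results.append(record)
    return results, human_review_queue
```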

How did we create the prompts? For v0.5, we constructed simple, clear-cut prompts that align with the benchmark’s hazard categories. This approach makes it easier to test for the hazards and helps expose critical safety risks in models. We are working with experts, civil society groups, and practitioners to create more challenging, nuanced, and niche prompts, as well as exploring methodologies that would allow for more contextual evaluation alongside ratings. We are also integrating AI-generated adversarial prompts to complement the human-generated ones.

How do we assess models? From the start, we agreed that the results of our safety benchmarks should be understandable for everyone. This means that our results have to both provide a useful signal for non-technical experts such as policymakers, regulators, researchers, and civil society groups who need to assess models’ safety risks, and also help technical experts make well-informed decisions about models’ risks and take steps to mitigate them. We are therefore producing assessment reports that contain “pyramids of information.” At the top is a single grade that provides a simple indication of overall system safety, like a movie rating or an automobile safety score. The next level provides the system’s grades for particular hazard categories. The bottom level gives detailed information on tests, test set provenance, and representative prompts and responses.
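
As an illustration only, such a layered report could be assembled from the per-prompt results along these lines; the grade bands and the "worst hazard wins" aggregation rule below are invented, and the actual v0.5 grading scheme may differ.

```python
from collections import defaultdict

def build_report(results):
    """Three-level report: overall grade, per-hazard grades, raw per-prompt detail."""
    counts = defaultdict(lambda: {"unsafe": 0, "total": 0})
    for r in results:
        counts[r["hazard"]]["total"] += 1
        counts[r["hazard"]]["unsafe"] += (r["label"] == "unsafe")

    def grade(bucket):
        # Illustrative thresholds, not the benchmark's real grade bands.
        rate = bucket["unsafe"] / bucket["total"]
        if rate > 0.10:
            return "high_risk"
        if rate > 0.01:
            return "moderate_risk"
        return "low_risk"

    order = ["low_risk", "moderate_risk", "high_risk"]
    per_hazard = {hazard: grade(bucket) for hazard, bucket in counts.items()}
    overall = max(per_hazard.values(), key=order.index) if per_hazard else "low_risk"
    return {"overall": overall, "per_hazard": per_hazard, "detail": results}
```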

AI safety demands an ecosystem

The MLCommons AI safety working group is an open meeting of experts, practitioners, and researchers—we invite everyone working in the field to join our growing community. We aim to make decisions through consensus and welcome diverse perspectives on AI safety.

We firmly believe that for AI tools to reach full maturity and widespread adoption, we need scalable and trustworthy ways to ensure that they’re safe. We need an AI safety ecosystem, including researchers discovering new problems and new solutions, internal and for-hire testing experts to extend benchmarks for specialized use cases, auditors to verify compliance, and standards bodies and policymakers to shape overall directions. Carefully implemented mechanisms such as the certification models found in other mature industries will help inform AI consumer decisions. Ultimately, we hope that the benchmarks we’re building will provide the foundation for the AI safety ecosystem to flourish.

The following MLCommons AI safety working group members contributed to this article:

  • Ahmed M. Ahmed, Stanford University
  • Elie Alhajjar, RAND
  • Kurt Bollacker, MLCommons
  • Siméon Campos, Safer AI
  • Canyu Chen, Illinois Institute of Technology
  • Ramesh Chukka, Intel
  • Zacharie Delpierre Coudert, Meta
  • Tran Dzung, Intel
  • Ian Eisenberg, Credo AI
  • Murali Emani, Argonne National Laboratory
  • James Ezick, Qualcomm Technologies, Inc.
  • Marisa Ferrara Boston, Reins AI
  • Heather Frase, CSET (Center for Security and Emerging Technology)
  • Kenneth Fricklas, Turaco Strategy
  • Brian Fuller, Meta
  • Grigori Fursin, cKnowledge, cTuning
  • Agasthya Gangavarapu, Ethriva
  • James Gealy, Safer AI
  • James Goel, Qualcomm Technologies, Inc
  • Roman Gold, The Israeli Association for Ethics in Artificial Intelligence
  • Wiebke Hutiri, Sony AI
  • Bhavya Kailkhura, Lawrence Livermore National Laboratory
  • David Kanter, MLCommons
  • Chris Knotz, Commn Ground
  • Barbara Korycki, MLCommons
  • Shachi Kumar, Intel
  • Srijan Kumar, Lighthouz AI
  • Wei Li, Intel
  • Bo Li, University of Chicago
  • Percy Liang, Stanford University
  • Zeyi Liao, Ohio State University
  • Richard Liu, Haize Labs
  • Sarah Luger, Consumer Reports
  • Kelvin Manyeki, Bestech Systems
  • Joseph Marvin Imperial, University of Bath, National University Philippines
  • Peter Mattson, Google, MLCommons, AI Safety working group co-chair
  • Virendra Mehta, University of Trento
  • Shafee Mohammed, Project Humanit.ai
  • Protik Mukhopadhyay, Protecto.ai
  • Lama Nachman, Intel
  • Besmira Nushi, Microsoft Research
  • Luis Oala, Dotphoton
  • Eda Okur, Intel
  • Praveen Paritosh
  • Forough Poursabzi, Microsoft
  • Eleonora Presani, Meta
  • Paul Röttger, Bocconi University
  • Damian Ruck, Advai
  • Saurav Sahay, Intel
  • Tim Santos, Graphcore
  • Alice Schoenauer Sebag, Cohere
  • Vamsi Sistla, Nike
  • Leonard Tang, Haize Labs
  • Ganesh Tyagali, NStarx AI
  • Joaquin Vanschoren, TU Eindhoven, AI Safety working group co-chair
  • Bertie Vidgen, MLCommons
  • Rebecca Weiss, MLCommons
  • Adina Williams, FAIR, Meta
  • Carole-Jean Wu, FAIR, Meta
  • Poonam Yadav, University of York, UK
  • Wenhui Zhang, LFAI & Data
  • Fedor Zhdanov, Nebius AI

  • ✇Semiconductor Engineering
  • Interoperability And Automation Yield A Scalable And Efficient Safety WorkflowAnn Keffer
    By Ann Keffer, Arun Gogineni, and James Kim Cars deploying ADAS and AV features rely on complex digital and analog systems to perform critical real-time applications. The large number of faults that need to be tested in these modern automotive designs make performing safety verification using a single technology impractical. Yet, developing an optimized safety methodology with specific fault lists automatically targeted for simulation, emulation and formal is challenging. Another challenge is c
     

Interoperability And Automation Yield A Scalable And Efficient Safety Workflow

7 March 2024 at 09:07

By Ann Keffer, Arun Gogineni, and James Kim

Cars deploying ADAS and AV features rely on complex digital and analog systems to perform critical real-time applications. The large number of faults that need to be tested in these modern automotive designs makes performing safety verification using a single technology impractical.

Yet, developing an optimized safety methodology with specific fault lists automatically targeted for simulation, emulation and formal is challenging. Another challenge is consolidating fault resolution results from various fault injection runs for final metric computation.

The good news is that interoperability of fault injection engines, optimization techniques, and an automated flow can effectively reduce overall execution time to quickly close the loop from safety analysis to safety certification.

Figure 1 shows some of the optimization techniques in a safety flow. Advanced methodologies such as safety analysis for optimization and fault pruning, concurrent fault simulation, fault emulation, and formal based analysis can be deployed to validate the safety requirements for an automotive SoC.

Fig. 1: Fault list optimization techniques.

Proof of concept: an automotive SoC

Using an SoC level test case, we will demonstrate how this automated, multi-engine flow handles the large number of faults that need to be tested in advanced automotive designs. The SoC design we used in this test case had approximately three million gates. First, we used both simulation and emulation fault injection engines to efficiently complete the fault campaigns for final metrics. Then we performed formal analysis as part of finishing the overall fault injection.

Fig. 2: Automotive SoC top-level block diagram.

Figure 3 is a representation of the safety island block from figure 2. The color-coded areas show where simulation, emulation, and formal engines were used for fault injection and fault classification.

Fig. 3: Detailed safety island block diagram.

Fault injection using simulation was too time and resource consuming for the CPU core and cache memory blocks. Those blocks were targeted for fault injection with an emulation engine for efficiency. The CPU core is protected by a software test library (STL) and the cache memory is protected by ECC. The bus interface requires end-to-end protection where fault injection with simulation was determined to be efficient. The fault management unit was not part of this experiment. Fault injection for the fault management unit will be completed using formal technology as a next step.

Table 1 shows the register count for the blocks in the safety island.

Table 1: Block register count.

The fault lists generated for each of these blocks were optimized to focus on the safety critical nodes which have safety mechanisms/protection.

SafetyScope, a safety analysis tool, was run to create the fault lists for the failure modes (FMs) for both the Veloce Fault App (fault emulator) and the fault simulator, and it wrote the fault lists to the functional safety (FuSa) database.

For the CPU and cache memory blocks, the emulator took the synthesized blocks and fault injection/fault detection nets (FIN/FDN) as inputs. Next, it executed the stimulus and captured the states of all the FDNs. The states were saved and used as a “gold” reference for comparison against fault injection runs. For each fault listed in the optimized fault list, the faulty behavior was emulated, the FDNs were compared against the reference values generated during the golden run, and the results were classified and updated in the fault database with attributes.
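
A much-simplified sketch of that compare-and-classify step follows. The data layout is invented; the Veloce flow, the FuSa database schema, and the exact classification rules are not reproduced here. It only illustrates the idea of diffing captured net states against the golden run and binning each fault the way the later tables do.

```python
def classify_fault_run(golden, faulty, detection_nets, observation_nets):
    """Classify one fault-injection run.

    golden / faulty: dict mapping net name -> captured value for the golden
    and the faulty run. detection_nets are the fault-detection nets (FDNs)
    driven by safety mechanisms; observation_nets are functional outputs.
    """
    diverged = {net for net, value in faulty.items() if golden.get(net) != value}
    detected = bool(diverged & set(detection_nets))    # a safety mechanism fired
    observed = bool(diverged & set(observation_nets))  # fault reached an output
    if detected:
        return "detected_observed" if observed else "detected_unobserved"
    if observed:
        return "undetected_observed"    # residual fault
    return "undetected_unobserved"      # candidate for formal / engineering review
```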

Fig. 4: CPU cluster. (Source: https://developer.arm.com/Processors/Cortex-R52)

For each of the sub-parts shown in the block diagram, we generated an optimized fault list using the analysis engine. The fault lists are saved into individual sessions in the FuSa database. We then used statistical random sampling over the overall fault population to generate a random sample from the FuSa database.

Now let’s look at what happens when we take one random sample all the way through fault injection using emulation. To fully close the fault injection campaign, however, we processed N samples.

Table 2: Detected faults by safety mechanisms.

Table 3 shows that the overall fault distribution for total faults is in line with the fault distribution of the randomly sampled faults. The table further captures the total detected faults: 3125 out of 4782 total faults. We were also able to model the detected faults per sub-part and arrive at an overall detected fault ratio of 65.35%. Based on the faults in the random sample and our coverage goal of 90%, we calculated that the margin of error (MOE) is ±1.19% (a short sketch of this estimate follows Table 3).

Table 3: Results of fault injection in CPU and cache memory.
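
For reference, the margin of error for a sampled detection ratio can be estimated with the standard proportion formula, sketched below. The confidence level and any finite-population correction behind the ±1.19% figure above are not stated in the article, so the exact number is not reproduced here; with z = 1.645 and the sample alone, the formula gives roughly ±1.1%.

```python
import math
import random

def sample_faults(fault_ids, sample_size, seed=0):
    """Draw a statistical random sample from the overall fault list."""
    return random.Random(seed).sample(list(fault_ids), sample_size)

def margin_of_error(detected, sampled, population=None, z=1.645):
    """Margin of error for a detection-ratio estimate (z = 1.645 ~ 90% confidence).

    Applies a finite-population correction when the total fault count is known.
    """
    p = detected / sampled
    moe = z * math.sqrt(p * (1.0 - p) / sampled)
    if population and population > sampled:
        moe *= math.sqrt((population - sampled) / (population - 1))
    return moe

# e.g. margin_of_error(3125, 4782) ≈ 0.0113, i.e. about ±1.1 percentage points
```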

The 3125 total detected faults (observed + unobserved) have a clear fault classification. The undetected observed faults likewise have a clear classification as residual faults. We did further analysis of the undetected unobserved and not-injected faults.

Table 4: Fault classification after fault injection.

We used many debug techniques to analyze the 616 undetected unobserved (UU) faults. First, we used formal analysis to check the cone of influence (COI) of these UU faults. The faults outside the COI were deemed safe, and five faults were dropped from further analysis. For the faults inside the COI, we used engineering judgment, with justification based on the relevant configurations (ECC, timers, flash memory, and so on). Finally, using formal analysis and engineering judgment, we were able to classify a portion of the 616 UU faults as safe faults and conservatively classify the remaining UU faults as residual faults. We also reviewed the 79 residual faults and were able to reclassify 10 of them as safe faults. The not-injected faults were tested against the simulation model to check whether any further stimulus could inject them. Since no stimulus was able to inject these faults, we dropped them from consideration and adjusted the margin of error accordingly. With this change our new MOE is ±1.293%.

In parallel, the fault simulator pulled the optimized fault lists for the failure modes of the bus block and ran fault simulations using stimulus from functional verification. The initial set of stimuli didn’t provide enough coverage, so higher quality stimuli (test vectors) were prepared, and additional fault campaigns were run on the new stimuli. All the fault classifications were written into the FuSa database. All runs were parallel and concurrent for overall efficiency and high performance.

Safety analysis using SafetyScope helped to provide more accuracy and reduce the iterations of fault simulation. For the CPU and cache memory, emulation across the various tests resulted in an overall SPFM of over 90%, as shown in Table 5 (a sketch of the SPFM computation follows the table).

Table 5: Overall results.
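
For context, ISO 26262 defines the single-point fault metric (SPFM) roughly as the share of safety-related faults that are neither single-point nor residual. A minimal sketch of that computation from classified fault counts is shown below; the example numbers are illustrative, not the article's results.

```python
def spfm(single_point_faults, residual_faults, safety_related_faults):
    """Single-point fault metric: 1 - (SPF + RF) / total safety-related faults."""
    return 1.0 - (single_point_faults + residual_faults) / safety_related_faults

# e.g. spfm(0, 50, 1000) -> 0.95, i.e. an SPFM of 95%
```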

At this time, not all of the fault simulation tests for the BUS block (end-to-end protection) had been completed. Table 6 shows that the first test was able to resolve 9.8% of the faults very quickly.

Table 6: Percentage of detected faults for BUS block by E2E SM.

We are integrating more tests which have high traffic on the BUS to mimic the runtime operation state of the SoC. The results of these independent fault injections (simulation and emulation) were combined for calculating the final metrics on the above blocks, with the results shown in Table 7.

Table 7: Final fault classification post analysis.

Conclusion

In this article we shared the details of a new functional safety methodology used in an SoC level automotive test case, and we showed how our methodology produces a scalable, efficient safety workflow using optimization techniques for fault injection using formal, simulation, and emulation verification engines. Performing safety analysis prior to running the fault injection was very critical and time saving. Therefore, the interoperability for using multiple engines and reading the results from a common FuSa database is necessary for a project of this scale.

For more information on this highly effective functional safety flow for ADAS and AV automotive designs, please download the Siemens EDA whitepaper Complex safety mechanisms require interoperability and automation for validation and metric closure.

Arun Gogineni is an engineering manager and architect for IC functional safety at Siemens EDA.

James Kim is a technical leader at Siemens EDA.

The post Interoperability And Automation Yield A Scalable And Efficient Safety Workflow appeared first on Semiconductor Engineering.

  • ✇Techdirt
  • Once Again, Ron Wyden Had To Stop Bad “Protect The Children” Internet Bills From Moving ForwardMike Masnick
    Senator Ron Wyden is a one-man defense for preventing horrible bills from moving forward in the Senate. Last month, he stopped Josh Hawley from moving a very problematic STOP CSAM bill from moving forward, and now he’s had to do it again. A (bipartisan) group of senators traipsed to the Senate floor Wednesday evening. They tried to skip the line and quickly move some bad bills forward by asking for unanimous consent. Unless someone’s there to object, it effectively moves the bill forward, ending
     

Once Again, Ron Wyden Had To Stop Bad “Protect The Children” Internet Bills From Moving Forward

7 March 2024 at 22:36

Senator Ron Wyden is a one-man defense for preventing horrible bills from moving forward in the Senate. Last month, he stopped Josh Hawley from moving a very problematic STOP CSAM bill from moving forward, and now he’s had to do it again.

A (bipartisan) group of senators traipsed to the Senate floor Wednesday evening. They tried to skip the line and quickly move some bad bills forward by asking for unanimous consent. Unless someone’s there to object, it effectively moves the bill forward, ending committee debate about it. Traditionally, this process is used for moving non-controversial bills, but lately it’s been used to grandstand about stupid bills.

Senator Lindsey Graham announced his intention to pull this kind of stunt on bills that he pretends are about “protecting the children” but which do no such thing in reality. Instead of it being just him, he rounded up a bunch of senators and they all pulled out the usual moral panic lines about two terrible bills: EARN IT and STOP CSAM. Both bills are designed to make it sound like good ideas and about protecting children, but the devil is very much in the detail, as both bills undermine end-to-end encryption while assuming that if you just put liability on websites, they’ll magically make child predators disappear.

And while both bills pretend not to attack encryption — and include some language about how they’re not intended to do so — both of them leave open the possibility that the use of end-to-end encryption will be used as evidence against websites for bad things done on those websites.

But, of course, as is the standard for the group of grandstanding senators, they present these bills as (1) perfect and (2) necessary to “protect the children.” The problem is that the bills are actually (1) ridiculously problematic and (2) will actually help bad people online in making end-to-end encryption a liability.

The bit of political theater kicked off with Graham having Senators Grassley, Cornyn, Durbin, Klobuchar, and Hawley talk on and on about the poor kids online. Notably, none of them really talked about how their bills worked (because that would reveal how the bills don’t really do what they pretend they do). Durbin whined about Section 230, misleadingly and mistakenly blaming it for the fact that bad people exist. Hawley did the thing that he loves doing, in which he does his mock “I’m a big bad Senator taking on those evil tech companies” schtick, while flat out lying about reality.

But Graham closed it out with the most misleading bit of all:

In 2024, here’s the state of play: the largest companies in America — social media outlets that make hundreds of billions of dollars a year — you can’t sue if they do damage to your family by using their product because of Section 230

This is a lie. It’s a flat out lie and Senator Graham and his staffers know this. All Section 230 says is that if there is content on these sites that violate the law, the liability goes after whoever created the content. If the features of the site itself “do damage,” then you can absolutely sue the company. But no one is actually complaining about the features. They’re complaining about content. And the liability on the content has to go to who created it.

The problem here is that Graham and all the other senators want to hold companies liable for the speech of users. And that is a very, very bad idea.

Now these platforms enrich our lives, but they destroy our lives.

These platforms are being used to bully children to death.

They’re being used to take sexual images and voluntarily and voluntarily obtain and sending them to the entire world. And there’s not a damn thing you can do about it. We had a lady come before the committee, a mother saying that her daughter was on a social media site that had an anti-bullying provisions. They complained three times about what was happening to her daughter. She killed herself. They went to court. They got kicked out by section 230.

I don’t know the details of this particular case, but first off, the platforms didn’t bully anyone. Other people did. Put the blame on the people actually causing the harm. Separately, and importantly, you can’t blame someone’s suicide on someone else when no one knows the real reasons. Otherwise, you actually encourage increased suicides, as it gives people an ultimate way to “get back” at someone.

Senator Wyden got up and, as he did last month, made it quite clear that we need to stop child sexual abuse and predators. He talked about his bill, which would actually help on these issues by giving law enforcement the resources it needs to go after the criminals, rather than the idea of the bills being pushed that simply blame social media companies for not magically making bad people disappear.

We’re talking about criminal issues, and Senator Wyden is looking to handle it by empowering law enforcement to deal with the criminals. Senators Graham, Durbin, Grassley, Cornyn, Klobuchar, and Hawley are looking to sue tech companies for not magically stopping criminals. One of those approaches makes sense for dealing with criminal activity. And yet it’s the other one that a bunch of senators have lined up behind.

And, of course, beyond the dangerous approach of EARN IT, it inherently undermines encryption, which makes kids (and everyone) less safe, as Wyden also pointed out.

Now, the specific reason I oppose EARN It is it will weaken the single strongest technology that protects children and families online. Something known as strong encryption.

It’s going to make it easier to punish sites that use encryption to secure private conversations and personal devices. This bill is designed to pressure communications and technology companies to scan users messages.

I, for one, don’t find that a particularly comforting idea.

Now, the sponsors of the bill have argued — and Senator Graham’s right, we’ve been talking about this a while — that their bills don’t harm encryption. And yet the bills allow courts to punish companies that offer strong encryption.

In fact, while it includes some language about protecting encryption, it explicitly allows encryption to be used as evidence for various forms of liability. Prosecutors are going to be quick to argue that deploying encryption was evidence of a company’s negligence preventing the distribution of CSAM, for example.

The bill is also designed to encourage scanning of content on users phones or computers before information is sent over the Internet which has the same consequences as breaking encryption. That’s why a hundred civil society groups including the American Library Association — people then I think all of us have worked for — Human Rights Campaign, the list goes… Restore the Fourth. All of them oppose this bill because of its impact on essential security.

Weakening encryption is the single biggest gift you can give to these predators and these god-awful people who want to stalk and spy on kids. Sexual predators are gonna have a far easier time stealing photographs of kids, tracking their phones, and spying on their private messages once encryption is breached. It is very ironic that a bill that’s supposed to make kids safer would have the effect of threatening the privacy and security of all law-abiding Americans.

My alternative — and I want to be clear about this because I think Senator Graham has been sincere about saying that this is a horrible problem involving kids. We have a disagreement on the remedy. That’s what is at issue.

And what I want us to do is to focus our energy on giving law enforcement officials the tools they need to find and prosecute these monstrous criminals responsible for exploiting kids and spreading vile abuse materials online.

That can help prevent kids from becoming victims in the first place. So I have introduced to do this: the Invest in Child Safety Act to direct five billion dollars to do three specific things to deal with this very urgent problem.

Graham then gets up to respond and lies through his teeth:

There’s nothing in this bill about encryption. We say that this is not an encryption bill. The bill as written explicitly prohibits courts from treating encryption as an independent basis for liability.

We’re agnostic about that.

That’s not true. As Wyden said, the bill has some hand-wavey language about not treating encryption as an independent basis for liability, but it does explicitly allow for encryption to be one of the factors that can be used to show negligence by a platform, as long as you combine it with other factors.

Section (7)(A) is the hand-wavey bit saying you can’t use encryption as “an independent basis” to determine liability, but (7)(B) effectively wipes that out by saying nothing in that section about encryption “shall be construed to prohibit a court from considering evidence of actions or circumstances described in that subparagraph.” In other words, you just have to add a bit more, and then can say “and also, look, they use encryption!”

And another author of the bill, Senator Blumenthal, has flat out said that EARN IT is deliberately written to target encryption. He falsely claims that companies would “use encryption… as a ‘get out of jail free’ card.” So, Graham is lying when he says encryption isn’t a target of the bill. One of his co-authors on the bill admits otherwise.

Graham went on:

What we’re trying to do is hold these companies accountable by making sure they engage in best business practices. The EARN IT acts simply says for you to have liability protections, you have to prove that you’ve tried to protect children. You have to earn it. You’re just not given to you. You have to have the best business practices in place that voluntary commissions that lay out what would be the best way to harden these sites against sexually exploitation. If you do those things you get liability, it’s just not given to you forever. So this is not about encryption.

As to your idea. I’d love to talk to you about it. Let’s vote on both, but the bottom line here is there’s always a reason not to do anything that holds these people liable. That’s the bottom line. They’ll never agree to any bill that allows you to get them in court ever. If you’re waiting on these companies to give this body permission for the average person to sue you. It ain’t never going to happen.

So… all of that is wrong. First of all, the very original version of the EARN IT Act did have provisions to make companies “earn” 230 protections by following best practices, but that’s been out of the bill for ages. The current version has no such thing.

The bill does set up a commission to create best practices, but (unlike the earlier versions of the bill) those best practice recommendations have no legal force or requirements. And there’s nothing in the bill that says if you follow them you get 230 protections, and if you don’t, you don’t.

Does Senator Graham even know which version of the bill he’s talking about?

Instead, the bill outright modifies Section 230 (before the Commission even researches best practices) and says that people can sue tech companies for the distribution of CSAM. This includes using the offering of encryption as evidence to support the claims that CSAM distribution was done because of “reckless” behavior by a platform.

Either Senator Graham doesn’t know what bill he’s talking about (even though it’s his own bill) or he doesn’t remember that he changed the bill to do something different than it used to try to do.

It’s ridiculous that Senator Wyden remains the only senator who sees this issue clearly and is willing to stand up and say so. He’s the only one who seems willing to block the bad bills while at the same time offering a bill that actually targets the criminals.

  • ✇Boing Boing
  • Undercover video exposes massive "Pig Butchering" romance scam center in DubaiMark Frauenfelder
    Scam Buster Jim Browning collaborated with an insider to record undercover video inside a pig butchering scam operating from a large complex in Dubai. Here, offices full of migrant workers impersonate glamorous models on dating apps to lure victims into fake investment opportunities. — Read the rest The post Undercover video exposes massive "Pig Butchering" romance scam center in Dubai appeared first on Boing Boing.
     

Undercover video exposes massive "Pig Butchering" romance scam center in Dubai

8 March 2024 at 18:56
Pig butcher Scam

Scam Buster Jim Browning collaborated with an insider to record undercover video inside a pig butchering scam operating from a large complex in Dubai. Here, offices full of migrant workers impersonate glamorous models on dating apps to lure victims into fake investment opportunities. — Read the rest

The post Undercover video exposes massive "Pig Butchering" romance scam center in Dubai appeared first on Boing Boing.

  • ✇Boing Boing
  • Maine man faces up to 10 years in prison for spiking ice cream with THCMark Frauenfelder
    A Maine man faces charges for allegedly tampering with ice cream by adding THC (tetrahydrocannabinol, the key psychoactive compound responsible for most of cannabis's psychological effects), according to a press release from the U.S. Department of Justice. Marc Flore, 43, was indicted on one count of tampering with consumer products after he reportedly laced a batch of coffee-Oreo flavored ice cream with THC at the Roots Cafe in Newmarket, New Hampshire. — Read the rest The post Maine man faces
     

Maine man faces up to 10 years in prison for spiking ice cream with THC

1 March 2024 at 19:25
Maine man charged for spiking ice cream with THC

A Maine man faces charges for allegedly tampering with ice cream by adding THC (tetrahydrocannabinol, the key psychoactive compound responsible for most of cannabis's psychological effects), according to a press release from the U.S. Department of Justice. Marc Flore, 43, was indicted on one count of tampering with consumer products after he reportedly laced a batch of coffee-Oreo flavored ice cream with THC at the Roots Cafe in Newmarket, New Hampshire. — Read the rest

The post Maine man faces up to 10 years in prison for spiking ice cream with THC appeared first on Boing Boing.

  • ✇Ars Technica - All content
  • AI-generated articles prompt Wikipedia to downgrade CNET’s reliability ratingBenj Edwards
    Enlarge (credit: Jaap Arriens/NurPhoto/Getty Images) Wikipedia has downgraded tech website CNET's reliability rating following extensive discussions among its editors regarding the impact of AI-generated content on the site's trustworthiness, as noted in a detailed report from Futurism. The decision reflects concerns over the reliability of articles found on the tech news outlet after it began publishing AI-generated stories in 2022. Around November 2022, CNET began publishi
     

AI-generated articles prompt Wikipedia to downgrade CNET’s reliability rating

29 February 2024 at 23:00
The CNET logo on a smartphone screen.

(Credit: Jaap Arriens/NurPhoto/Getty Images)

Wikipedia has downgraded tech website CNET's reliability rating following extensive discussions among its editors regarding the impact of AI-generated content on the site's trustworthiness, as noted in a detailed report from Futurism. The decision reflects concerns over the reliability of articles found on the tech news outlet after it began publishing AI-generated stories in 2022.

Around November 2022, CNET began publishing articles written by an AI model under the byline "CNET Money Staff." In January 2023, Futurism brought widespread attention to the issue and discovered that the articles were full of plagiarism and mistakes. (Around that time, we covered plans to do similar automated publishing at BuzzFeed.) After the revelation, CNET management paused the experiment, but the reputational damage had already been done.

Wikipedia maintains a page called "Reliable sources/Perennial sources" that includes a chart featuring news publications and their reliability ratings as viewed from Wikipedia's perspective. Shortly after the CNET news broke in January 2023, Wikipedia editors began a discussion thread on the Reliable Sources project page about the publication.


  • ✇Techdirt
  • Techdirt Podcast Episode 381: KOSA Isn’t Just Wrong About The Internet, It’s Wrong About Child SafetyLeigh Beadon
    In our coverage of the problems with KOSA and other legislative pushes to “protect the children” online, we usually (for obvious reasons) come at the subject from the technology side, and look at all the ways these laws misunderstand the internet. But that’s not their only flaw: these proposals also tend to lack any real understanding of child safety. Maureen Flatley is someone who has been vocal from the other side, having covered child safety issues for about as long as we’ve covered tech, an
     

Techdirt Podcast Episode 381: KOSA Isn’t Just Wrong About The Internet, It’s Wrong About Child Safety

21 February 2024 at 22:28

In our coverage of the problems with KOSA and other legislative pushes to “protect the children” online, we usually (for obvious reasons) come at the subject from the technology side, and look at all the ways these laws misunderstand the internet. But that’s not their only flaw: these proposals also tend to lack any real understanding of child safety. Maureen Flatley is someone who has been vocal from the other side, having covered child safety issues for about as long as we’ve covered tech, and she joins us on this week’s episode to discuss how KOSA and its ilk aren’t rooted in what we really know about keeping kids safe.

Follow the Techdirt Podcast on Soundcloud, subscribe via Apple Podcasts or Spotify, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.

  • ✇Ars Technica - All content
  • ChatGPT goes temporarily “insane” with unexpected outputs, spooking usersBenj Edwards
    Enlarge (credit: Benj Edwards / Getty Images) On Tuesday, ChatGPT users began reporting unexpected outputs from OpenAI's AI assistant, flooding the r/ChatGPT Reddit sub with reports of the AI assistant "having a stroke," "going insane," "rambling," and "losing it." OpenAI has acknowledged the problem and is working on a fix, but the experience serves as a high-profile example of how some people perceive malfunctioning large language models, which are designed to mimic humanli
     

ChatGPT goes temporarily “insane” with unexpected outputs, spooking users

21 February 2024 at 17:57
Illustration of a broken toy robot.

(Credit: Benj Edwards / Getty Images)

On Tuesday, ChatGPT users began reporting unexpected outputs from OpenAI's AI assistant, flooding the r/ChatGPT Reddit sub with reports of the AI assistant "having a stroke," "going insane," "rambling," and "losing it." OpenAI has acknowledged the problem and is working on a fix, but the experience serves as a high-profile example of how some people perceive malfunctioning large language models, which are designed to mimic humanlike output.

ChatGPT is not alive and does not have a mind to lose, but tugging on human metaphors (called "anthropomorphization") seems to be the easiest way for most people to describe the unexpected outputs they have been seeing from the AI model. They're forced to use those terms because OpenAI doesn't share exactly how ChatGPT works under the hood; the underlying large language models function like a black box.

"It gave me the exact same feeling—like watching someone slowly lose their mind either from psychosis or dementia," wrote a Reddit user named z3ldafitzgerald in response to a post about ChatGPT bugging out. "It’s the first time anything AI related sincerely gave me the creeps."


  • ✇Boing Boing
  • NYPD is on the lookout for woman who bashed subway musician on the head with a metal bottle (video)Mark Frauenfelder
    Iain S. Forrest, 29, is an electric cellist and a doctor who was attacked last week while performing in a New York subway station. He stated, "At 5:50 pm on February 14th, while performing at 34th St Herald Square station, a woman wearing a mustard jacket, red scarf, and gloves assaulted me by smashing the back of my head with my metal water bottle. — Read the rest The post NYPD is on the lookout for woman who bashed subway musician on the head with a metal bottle (video) appeared first on Boin
     

NYPD is on the lookout for woman who bashed subway musician on the head with a metal bottle (video)

20 February 2024 at 22:29
Electric Cellist Doctor Attacked in NYC Subway

Iain S. Forrest, 29, is an electric cellist and a doctor who was attacked last week while performing in a New York subway station. He stated, "At 5:50 pm on February 14th, while performing at 34th St Herald Square station, a woman wearing a mustard jacket, red scarf, and gloves assaulted me by smashing the back of my head with my metal water bottle. — Read the rest

The post NYPD is on the lookout for woman who bashed subway musician on the head with a metal bottle (video) appeared first on Boing Boing.
