Coders Don't Fear AI, Reports Stack Overflow's Massive 2024 Survey

Stack Overflow says over 65,000 developers took their annual survey — and "For the first time this year, we asked if developers felt AI was a threat to their job..."

Some analysis from The New Stack: Unsurprisingly, only 12% of surveyed developers believe AI is a threat to their current job. In fact, 70% are favorably inclined to use AI tools as part of their development workflow... Among those who use AI tools in their development workflow, 81% said productivity is one of its top benefits, followed by an ability to learn new skills quickly (62%). Far fewer (30%) said improved accuracy is a benefit. Professional developers' adoption of AI tools in the development process has risen rapidly, going from 44% in 2023 to 62% in 2024... Seventy-one percent of developers with less than five years of experience reported using AI tools in their development process, as compared to just 49% of developers with 20 years of experience coding... At 82%, [ChatGPT] is twice as likely to have been used as GitHub Copilot. Among ChatGPT users, 74% want to continue using it.

But "only 43% said they trust the accuracy of AI tools," according to Stack Overflow's blog post, "and 45% believe AI tools struggle to handle complex tasks."

More analysis from The New Stack: The latest edition of the global annual survey found full-time employment is holding steady, with over 80% reporting that they have full-time jobs. The percentage of unemployed developers has more than doubled since 2019 but is still at a modest 4.4% worldwide... The median annual salary of survey respondents declined significantly. For example, the average full-stack developer's median 2024 salary fell 11% compared to the previous year, to $63,333... Wage pressure may be the result of more competition from an increase in freelancing. Eighteen percent of professional developers in the 2024 survey said they are independent contractors or self-employed, up from 9.5% in 2020. Part-time employment has also risen, putting even more pressure on full-time salaries... Job losses at tech companies have contributed to a large influx of talent into the freelance market, noted Stack Overflow CEO Prashanth Chandrasekar in an interview with The New Stack. Since COVID-19, he added, the emphasis on remote work means more people value job flexibility. In the 2024 survey, only 20% have returned to full-time in-person work, 38% are full-time remote, while the remainder are in a hybrid situation. Anticipation of future productivity growth due to AI may also be creating uncertainty about how much to pay developers.

Two stats jumped out for Visual Studio Magazine: In this year's big Stack Overflow developer survey things are much the same for Microsoft-centric data points: VS Code and Visual Studio still rule the IDE roost, while .NET maintains its No. 1 position among non-web frameworks. It's been this way for years, though in 2021 it was .NET Framework at No. 1 among frameworks, while the new .NET Core/.NET 5 entry was No. 3. Among IDEs, there has been less change. "Visual Studio Code is used by more than twice as many developers as its nearest (and related) alternative, Visual Studio," said the 2024 Stack Overflow Developer Survey, the 14th in the series of massive reports.

Stack Overflow shared some other interesting statistics:

"JavaScript (62%), HTML/CSS (53%), and Python (51%) top the list of most used languages for the second year in a row... [JavaScript] has been the most popular language every year since the inception of the Developer Survey in 2011."

"Python is the most desired language this year (users that did not indicate using it this year but did indicate wanting to use it next year), overtaking JavaScript."

"The language that most developers used and want to use again is Rust, for the second year in a row, with an 83% admiration rate."

"Python is most popular for those learning to code..."

"Technical debt is a problem for 62% of developers, twice as much as the second- and third-most frustrating problems for developers: complex tech stacks for building and deployment."

Read more of this story at Slashdot.

Should We Fight Climate Change by Releasing Sulfur Dioxide into the Stratosphere?

David Keith, a professor in the University of Chicago's department of geophysical sciences, "believes that by intentionally releasing sulfur dioxide into the stratosphere, it would be possible to lower temperatures worldwide," reports the New York Times. He's not the only one promoting the idea. "Harvard University has a solar geoengineering program that has received grants from Microsoft co-founder Bill Gates, the Alfred P. Sloan Foundation and the William and Flora Hewlett Foundation. It's being studied by the Environmental Defense Fund along with the World Climate Research Program.... But many scientists and environmentalists fear that it could result in unpredictable calamities."

Because it would be used in the stratosphere and not limited to a particular area, solar geoengineering could affect the whole world, possibly scrambling natural systems — creating rain in one arid region, for example, while drying out the monsoon season elsewhere. Opponents worry it would distract from the urgent work of transitioning away from fossil fuels. They object to intentionally releasing sulfur dioxide, a pollutant that would eventually move from the stratosphere to ground level, where it can irritate the skin, eyes, nose and throat and can cause respiratory problems. And they fear that once begun, a solar geoengineering program would be difficult to stop...

Keith countered that the risks posed by solar geoengineering are well understood, not as severe as portrayed by critics, and dwarfed by the potential benefits. If the technique slowed the warming of the planet by even just 1 degree Celsius (1.8 degrees Fahrenheit) over the next century, Keith said, it could help prevent millions of heat-related deaths each decade...

Opponents of solar geoengineering cite several main risks. They say it could create a "moral hazard," mistakenly giving people the impression that it is not necessary to rapidly reduce fossil fuel emissions. The second main concern has to do with unintended consequences. "This is a really dangerous path to go down," said Beatrice Rindevall, the chair of the Swedish Society for Nature Conservation, which opposed the experiment. "It could shock the climate system, could alter hydrological cycles and could exacerbate extreme weather and climate instability." And once solar geoengineering began to cool the planet, stopping the effort abruptly could result in a sudden rise in temperatures, a phenomenon known as "termination shock." The planet could experience "potentially massive temperature rise in an unprepared world over a matter of five to 10 years, hitting the Earth's climate with something that it probably hasn't seen since the dinosaur-killing impactor," said the physicist Raymond Pierrehumbert, a critic of the technology.

On top of all this, there are fears about rogue actors using solar geoengineering and concerns that the technology could be weaponized. Not to mention the fact that sulfur dioxide can harm human health. Keith is adamant that those fears are overblown. And while there would be some additional air pollution, he claims the risk is negligible compared to the benefits.

The opposition is making it hard to even conduct tests, according to the article — like when Keith "wanted to release a few pounds of mineral dust at an altitude of roughly 20 kilometers and track how the dust behaved as it floated across the sky." The experiment was called off after opposition from numerous groups — including Greta Thunberg and an organization representing Indigenous people who felt the experiment was disrespecting nature.

Read more of this story at Slashdot.

Why DARPA is Funding an AI-Powered Bug-Spotting Challenge

America's Defense Department R&D agency, DARPA, is running a two-year contest to write an AI-powered program "that can scan millions of lines of open-source code, identify security flaws and fix them, all without human intervention," reports the Washington Post. [Alternate URL here.] As the Post sees it, "The contest is one of the clearest signs to date that the government sees flaws in open-source software as one of the country's biggest security risks, and considers artificial intelligence vital to addressing it."

Free open-source programs, such as the Linux operating system, help run everything from websites to power stations. The code isn't inherently worse than what's in proprietary programs from companies like Microsoft and Oracle, but there aren't enough skilled engineers tasked with testing it. As a result, poorly maintained free code has been at the root of some of the most expensive cybersecurity breaches of all time, including the 2017 Equifax disaster that exposed the personal information of half of all Americans. The incident, which led to the largest-ever data breach settlement, cost the company more than $1 billion in improvements and penalties. If people can't keep up with all the code being woven into every industrial sector, DARPA hopes machines can. "The goal is having an end-to-end 'cyber reasoning system' that leverages large language models to find vulnerabilities, prove that they are vulnerabilities, and patch them," explained one of the advising professors, Arizona State's Yan Shoshitaishvili....

Some large open-source projects are run by near-Wikipedia-size armies of volunteers and are generally in good shape. Some have maintainers who are given grants by big corporate users that turn it into a job. And then there is everything else, including programs written as homework assignments by authors who barely remember them. "Open source has always been 'Use at your own risk,'" said Brian Behlendorf, who started the Open Source Security Foundation after decades of maintaining pioneering free server software, Apache, and other projects at the Apache Software Foundation. "It's not free as in speech, or even free as in beer," he said. "It's free as in puppy, and it needs care and feeding."

Forty teams entered the contest, according to the article, and seven received $1 million in funding to continue to the next round, with the finalists to be announced at this year's Def Con. "Under the terms of the DARPA contest, all finalists must release their programs as open source," the article points out, "so that software vendors and consumers will be able to run them."
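The "find, prove, patch" loop Shoshitaishvili describes maps naturally onto a pipeline that pairs a language model with a conventional test harness. The sketch below is a minimal illustration of that architecture under stated assumptions, not any team's actual entry: the llm_complete helper, the prompts, and the ./harness binary are hypothetical stand-ins, and a real system would add fuzzing, sandboxed builds, and regression tests before trusting any patch.

```python
# Minimal sketch of an LLM-assisted "find -> prove -> patch" loop, in the spirit of the
# cyber reasoning systems described above. `llm_complete` and `./harness` are hypothetical
# placeholders for whatever model client and replay harness a team actually uses.
import subprocess
from dataclasses import dataclass

@dataclass
class Finding:
    path: str
    snippet: str
    description: str

def llm_complete(prompt: str) -> str:
    """Hypothetical LLM call; swap in a real client before running."""
    raise NotImplementedError

def find_candidates(path: str) -> list[Finding]:
    """Ask the model to flag suspicious code in one source file."""
    source = open(path, encoding="utf-8", errors="replace").read()
    answer = llm_complete(f"List potential memory-safety or injection bugs in:\n{source}")
    return [Finding(path, source, line) for line in answer.splitlines() if line.strip()]

def prove(finding: Finding) -> bool:
    """'Prove' a finding by generating an input and replaying it against a crash harness."""
    poc = llm_complete(f"Write stdin input that triggers this bug:\n{finding.description}")
    result = subprocess.run(["./harness", finding.path], input=poc.encode(),
                            capture_output=True, timeout=30)
    return result.returncode != 0  # a crash or sanitizer abort counts as proof

def patch(finding: Finding) -> str:
    """Ask the model for a minimal diff; a real system re-runs the proof and test suite."""
    return llm_complete(f"Produce a minimal unified diff fixing:\n{finding.description}\n"
                        f"in file {finding.path}:\n{finding.snippet}")

def cyber_reasoning_pass(files: list[str]) -> list[str]:
    """End-to-end pass: only findings that were actually reproduced get patched."""
    patches = []
    for path in files:
        for finding in find_candidates(path):
            if prove(finding):
                patches.append(patch(finding))
    return patches
```

The key design point, and the reason DARPA asks for proof, is the middle step: a candidate patch is only generated for findings the harness actually reproduced, which filters out the model's false positives.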

Read more of this story at Slashdot.

Epic Games CEO Criticized For Calling Apple's 'Find My' Feature 'Super Creepy'

Slashdot reader Applehu Akbar shared this report from MacRumors: Epic Games CEO Tim Sweeney commented on Apple's 'Find My' service, referring to it as "super creepy surveillance tech" that "shouldn't exist." Sweeney went on to explain that several years ago, "a kid" stole a Mac laptop out of his car. Years later, Sweeney was checking Find My, and as the Mac was still connected to his Apple ID account, it showed him the location where the thief lived. When someone asked Sweeney if he'd at least gotten his laptop back, Sweeney answered: "No. I was creeped the hell out by having unexpectedly received the kid's address, and turned off Find My iPhone on all of my devices."

Slashdot reader crmarvin42 quipped "Tell me you are stupidly rich, without telling me you are stupidly rich... Next someone will be saying that it is 'Creepy' to have security footage of someone taking your Amazon packages off of your porch." And they also questioned Sweeney's sincerity, suggesting that he's "just saying that to try and make Apple look bad because of all the lawsuits going on."

MacRumors followed the ensuing discussion: Sweeney said that the location of a device in someone's possession can't be tracked without tracking the person, and "people have a right to privacy." ["This right applies to second hand device buyers and even to thieves."] He claims that detection and recovery of a lost or stolen device should be "mediated by due process of law" and not exposed to the device owner "in vigilante fashion."

Some responded to Sweeney's comments by sharing the headline of a Vox news story about Epic's own privacy policies. ("Fortnite maker Epic Games has to pay $520 million for tricking kids and violating their privacy.") MacRumors cited a 2014 report that thefts of iPhones dropped after the introduction of Apple's "Activation Lock" feature (which prevents the disabling of 'Find My' without a password).

But when the blog AppleInsider accused Sweeney of "an incredibly bad leap of logic" — Sweeney responded. "You're idealizing this issue as good guys tracking criminals to their lairs, but when Find My or Google's similar tech points a device owner to a device possessor's home, one must anticipate the presence of families and kids and innocent used device buyers, and ask whether it's really appropriate for a platform to use GPS and shadowy mesh network tech to set up physical confrontations among individuals." Sweeney also posted a quote from Steve Jobs about how at Apple, "we worry that some 14-year-old is going to get stalked and something terrible is going to happen because of our phone."

Read more of this story at Slashdot.

NFL to Roll Out Facial Authentication Software to All Stadiums, League-Wide

America's National Football League "is the latest organization to turn to facial authentication to bolster event security," reports the Record, citing a new announcement this week:

All 32 NFL stadiums will start using the technology this season, after the league signed a contract with a company that uses facial scans to verify the identity of people entering event venues and other secure spaces. The facial authentication platform, which counts the Cleveland Browns' owners as investors, will be used to "streamline and secure" entry for thousands of credentialed media, officials, staff and guests so they can easily access restricted areas such as press boxes and locker rooms, Jeff Boehm, the chief operating officer of Wicket, said in a LinkedIn post Monday. "Credential holders simply take a selfie before they come, and then Wicket verifies their identity and checks their credentials with Accredit (a credentialing platform) as they walk through security checkpoints," Boehm added. Wicket technology was deployed in a handful of NFL stadiums last year as part of a pilot program. Other stadiums will start rolling it out beginning on Aug. 8, when the pre-season kicks off.

Some teams also have extended their use of the technology to scan the faces of ticket holders. The Cleveland Browns, Atlanta Falcons and New York Mets all have used the company's facial authentication software to authenticate fans with tickets, according to Stadium Tech Report. "Fans come look at the tablet and, instantly, the tablet recognizes the fan," Brandon Covert, the vice president of information technology for the Cleveland Browns, said in a testimonial appearing on Wicket's website. "It's almost a half-second stop. It's not even a stop — more of a pause."

"The Browns also use Wicket to verify the ages of fans purchasing alcohol at concession stands, according to Wicket's LinkedIn page," the article points out. And a July report from Privacy International found that 25 of the top 100 soccer stadiums in the world are already using facial recognition technology. Thanks to long-time Slashdot reader schwit1 for sharing the news.
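Boehm's description — enroll a selfie ahead of time, then match it at the checkpoint and cross-check a credential database — is the standard face-embedding workflow. The sketch below illustrates only that general idea; the embedding vectors, the 0.6 similarity threshold, and the credential lookup are assumptions for illustration, not details of Wicket's or Accredit's actual systems.

```python
# Illustrative gate check: compare an enrolled selfie embedding to a live capture,
# then confirm the credential covers the requested zone. Threshold and data layout
# are assumptions, not Wicket/Accredit specifics.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def admit(person_id: str,
          live_embedding: np.ndarray,
          enrolled: dict[str, np.ndarray],
          credential_db: dict[str, set[str]],
          zone: str,
          threshold: float = 0.6) -> bool:
    """Admit only if the face matches the enrolled selfie AND the credential covers the zone."""
    reference = enrolled.get(person_id)
    if reference is None:
        return False
    if cosine_similarity(live_embedding, reference) < threshold:
        return False
    return zone in credential_db.get(person_id, set())

# Example: a credentialed photographer cleared for the press box but not the locker room.
enrolled = {"press-0042": np.random.default_rng(0).standard_normal(128)}
creds = {"press-0042": {"press box"}}
live = enrolled["press-0042"] + 0.01  # pretend the live capture produced a near-identical embedding
print(admit("press-0042", live, enrolled, creds, "press box"))    # True
print(admit("press-0042", live, enrolled, creds, "locker room"))  # False
```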

Read more of this story at Slashdot.

How Chinese Attackers Breached an ISP to Poison Insecure Software Updates with Malware

An anonymous reader shared this report from BleepingComputer: A Chinese hacking group tracked as StormBamboo has compromised an undisclosed internet service provider (ISP) to poison automatic software updates with malware. Also tracked as Evasive Panda, Daggerfly, and StormCloud, this cyber-espionage group has been active since at least 2012, targeting organizations across mainland China, Hong Kong, Macao, Nigeria, and various Southeast and East Asian countries.

On Friday, Volexity threat researchers revealed that the Chinese cyber-espionage gang had exploited insecure HTTP software update mechanisms that didn't validate digital signatures to deploy malware payloads on victims' Windows and macOS devices... To do that, the attackers intercepted and modified victims' DNS requests and poisoned them with malicious IP addresses. This delivered the malware to the targets' systems from StormBamboo's command-and-control servers without requiring user interaction.

Volexity's blog post says they observed StormBamboo "targeting multiple software vendors, who use insecure update workflows..." and then "notified and worked with the ISP, who investigated various key devices providing traffic-routing services on their network. As the ISP rebooted and took various components of the network offline, the DNS poisoning immediately stopped."

BleepingComputer notes that "After compromising the target's systems, the threat actors installed a malicious Google Chrome extension (ReloadText), which allowed them to harvest and steal browser cookies and mail data."
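The root cause Volexity describes is an updater that fetches code over plain HTTP and runs it without checking a signature — which is exactly why hijacking DNS was enough for code execution. A common mitigation is to pin a publisher public key in the client and refuse any payload whose detached signature fails to verify; the sketch below shows that check with the cryptography package and an Ed25519 key. The URLs, filenames, and the key bytes are placeholders, and this is a minimal sketch rather than any vendor's actual update mechanism.

```python
# Sketch of a signed-update check: even if DNS or the transport is hijacked, a tampered
# payload fails verification against the publisher key pinned in the client.
# URLs and the pinned key below are placeholders for illustration.
import urllib.request
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

PINNED_PUBLIC_KEY = bytes.fromhex("00" * 32)  # placeholder: ship the real publisher key with the client

def fetch(url: str) -> bytes:
    with urllib.request.urlopen(url) as resp:  # prefer https:// URLs in practice
        return resp.read()

def verified_update(payload_url: str, signature_url: str) -> bytes:
    payload = fetch(payload_url)
    signature = fetch(signature_url)
    public_key = ed25519.Ed25519PublicKey.from_public_bytes(PINNED_PUBLIC_KEY)
    try:
        public_key.verify(signature, payload)  # raises InvalidSignature on tampering
    except InvalidSignature:
        raise RuntimeError("update rejected: signature does not match pinned publisher key")
    return payload  # only now is it safe to write the update to disk and install it
```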

Read more of this story at Slashdot.

Are There Diamonds on Mercury?

The planet Mercury could have "a layer of diamonds," reports CNN, citing new research suggesting that about 310 miles (500 kilometers) below the surface...could be a layer of diamonds 11 miles (18 kilometers) thick. And the study's co-author believes lava might carry some of those diamonds up to the surface:

The diamonds might have formed soon after Mercury itself coalesced into a planet about 4.5 billion years ago from a swirling cloud of dust and gas, in the crucible of a high-pressure, high-temperature environment. At this time, the fledgling planet is believed to have had a crust of graphite, floating over a deep magma ocean. A team of researchers recreated that searing environment in an experiment, with a machine called an anvil press that's normally used to study how materials behave under extreme pressure but also for the production of synthetic diamonds. "It's a huge press, which enables us to subject tiny samples at the same high pressure and high temperature that we would expect deep inside the mantle of Mercury, at the boundary between the mantle and the core," said Bernard Charlier, head of the department of geology at the University of Liège in Belgium and a coauthor of a study reporting the findings.

The team inserted a synthetic mixture of elements — including silicon, titanium, magnesium and aluminum — inside a graphite capsule, mimicking the theorized composition of Mercury's interior in its early days. The researchers then subjected the capsule to pressures almost 70,000 times greater than those found on Earth's surface and temperatures up to 2,000 degrees Celsius (3,630 degrees Fahrenheit), replicating the conditions likely found near Mercury's core billions of years ago. After the sample melted, the scientists looked at changes in the chemistry and minerals under an electron microscope and noted that the graphite had turned into diamond crystals.

The researchers believe this mechanism "can not only give us more insight into the secrets hidden below Mercury's surface, but on planetary evolution and the internal structure of exoplanets with similar characteristics."

Read more of this story at Slashdot.

When It Comes to Privacy, Safari Is Only the Fourth-Best Browser

Apple's elaborate new ad campaign promises that Safari is "a browser that protects your privacy." And the Washington Post says Apple "deserves credit for making many privacy protections automatic with Safari..." But Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, said Safari is no better than the fourth-best web browser for your privacy. "If browser privacy were a sport at the Olympics, Apple isn't getting on the medal stand," Cahn said. (Apple did not comment about this.)

Safari stops third-party cookies anywhere you go on the web. So do Mozilla's Firefox and the Brave browser... Chrome allows third-party cookies in most cases unless you turn them off... Even without cookies, a website can pull information like the resolution of your computer screen, the fonts you have installed, add-on software you use and other technical details that in aggregate can help identify your device and what you're doing on it. The measures, typically called "fingerprinting," are privacy-eroding tracking by another name. Nick Doty with the Center for Democracy & Technology said there's generally not much you can do about fingerprinting. Usually you don't know you're being tracked that way. Apple says it defends against common fingerprinting techniques, but Cahn said Firefox, Brave and the Tor Browser all are better at protecting you from digital surveillance. That's why he said Safari is no better than the fourth-best browser for privacy.

Safari does offer extra privacy protections in its "private" mode, the article points out. "When you use this option, Apple says it does more to block use of 'advanced' fingerprinting techniques. It also steps up defenses against tracking that adds bits of identifying information to the web links you click." The article concludes that Safari users can "feel reasonably good about the privacy (and security) protections, but you can probably do better — either by tweaking your Apple settings or using a web browser that's even more private than Safari."
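Fingerprinting works because each individually harmless attribute leaks a few bits of identifying information, and in combination they narrow a browser down to a tiny population even with cookies blocked. The toy sketch below shows only that aggregation step, with made-up attribute values and illustrative per-attribute entropy figures; real fingerprinting scripts harvest these signals in the browser (canvas rendering, font lists, audio stack) before hashing them.

```python
# Toy illustration of why "harmless" browser attributes combine into an identifier:
# hash the attributes together and estimate how few browsers share the result.
# The attribute values and per-attribute bit counts are illustrative, not measured.
import hashlib
import json

attributes = {
    "screen": "2560x1600",
    "timezone": "America/Los_Angeles",
    "fonts": ["Helvetica Neue", "Avenir", "Menlo"],
    "extensions": ["uBlock Origin"],
    "user_agent": "Mozilla/5.0 (Macintosh; ...) Safari/605.1.15",
}

# A stable hash of the combined attributes acts as a cookie-less identifier.
fingerprint = hashlib.sha256(
    json.dumps(attributes, sort_keys=True).encode()
).hexdigest()

# Rough illustrative entropy per attribute (bits); what matters is the total.
bits = {"screen": 4, "timezone": 3, "fonts": 7, "extensions": 5, "user_agent": 8}
total_bits = sum(bits.values())

print(f"fingerprint: {fingerprint[:16]}...")
print(f"~{total_bits} bits -> roughly 1 in {2 ** total_bits:,} browsers share this combination")
```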

Read more of this story at Slashdot.

Journalists at 'The Atlantic' Demand Assurances Their Jobs Will Be Protected From OpenAI

"As media bosses scramble to decide if and how they should partner with AI companies, workers are increasingly concerned that the technology could imperil their jobs or degrade their work..." reports the Washington Post. The latest example? "Two months after the Atlantic reached a licensing deal with OpenAI, staffers at the storied magazine are demanding the company ensure their jobs and work are protected." (Nearly 60 journalists have now signed a letter demanding the company "stop prioritizing its bottom line and champion the Atlantic's journalism.") The unionized staffers want the Atlantic bosses to include AI protections in the union contract, which the two sides have been negotiating since 2022. "Our editorial leaders say that The Atlantic is a magazine made by humans, for humans," the letter says. "We could not agree more..." The Atlantic's new deal with OpenAI grants the tech firm access to the magazine's archives to train its AI tools. While the Atlantic in return will have special access to experiment with these AI tools, the magazine says it is not using AI to create journalism. But some journalists and media observers have raised concerns about whether AI tools are accurately and fairly manipulating the human-written text they work with. The Atlantic staffers' letter noted a pattern by ChatGPT of generating gibberish web addresses instead of the links intended to attribute the reporting it has borrowed, as well as sending readers to sites that have summarized Atlantic stories rather than the original work... Atlantic spokeswoman Anna Bross said company leaders "agree with the general principles" expressed by the union. For that reason, she said, they recently proposed a commitment to not to use AI to publish content "without human review and editorial oversight." Representatives from the Atlantic Union bargaining committee told The Washington Post that "the fact remains that the company has flatly refused to commit to not replacing employees with AI." The article also notes that last month the union representing Lifehacker, Mashable and PCMag journalists "ratified a contract that protects union members from being laid off because AI has impacted their roles and requires the company to discuss any such plans to implement AI tools ahead of time."

Read more of this story at Slashdot.

Gen X and Millennials at Higher Cancer Risk Than Older Generations

"Generation X and millennials are at an increased risk of developing certain cancers compared with older generations," reports the Washington Post, "a shift that is probably due to generational changes in diet, lifestyle and environmental exposures, a large new study suggests." Researchers from the American Cancer analyzed data from more than 23.5 million patients who had been diagnosed with 34 types of cancer from 2000 to 2019 — and also studied mortality data that included 7 million deaths in the U.S. from 25 types of cancer among people ages 25 to 84. [The researchers reported] that cancer rates for 17 of the 34 most common cancers are increasing in progressively younger generations. The findings included: - Cancers with the most significant increased risk are kidney, pancreatic and small intestine, which are two to three times as high for millennial men and women as baby boomers. - Millennial women also are at higher risk of liver and bile duct cancers compared with baby boomers. - Although the risk of getting cancer is rising, for most cancers, the risk of dying of the disease stabilized or declined among younger people. But mortality rates increased for gallbladder, colorectal, testicular and uterine cancers, as well as for liver cancer among younger women. "It is a concern," said Ahmedin Jemal, senior vice president of the American Cancer Society's surveillance and health equity science department, who was the senior author of the study. If the current trend continues, the increased cancer and mortality rates among younger people may "halt or even reverse the progress that we have made in reducing cancer mortality over the past several decades," he added. While there is no clear explanation for the increased cancer rates among younger people, the researchers suggest that there may be several contributing factors, including rising obesity rates; altered microbiomes from unhealthy diets high in saturated fats, red meat and ultra-processed foods or antibiotic use; poor sleep; sedentary lifestyles; and environmental factors, including exposure to pollutants and carcinogenic chemicals.

Read more of this story at Slashdot.

Go Tech Lead Russ Cox Steps Down to Focus on AI-Powered Open-Source Contributor Bot

Thursday Go's long-time tech lead Russ Cox made an announcement: Starting September 1, Austin Clements will be taking over as the tech lead of Go: both the Go team at Google and the overall Go project. Austin is currently the tech lead for what we sometimes call the "Go core", which encompasses compiler toolchain, runtime, and releases. Cherry Mui will be stepping up to lead those areas. I am not leaving the Go project, but I think the time is right for a change... I will be shifting my focus to work more on Gaby [or "Go AI bot," an open-source contributor agent] and Oscar [an open-source contributor agent architecture], trying to make useful contributions in the Go issue tracker to help all of you work more productively. I am hopeful that work on Oscar will uncover ways to help open source maintainers that will be adopted by other projects, just like some of Go's best ideas have been adopted by other projects. At the highest level, my goals for Oscar are to build something useful, learn something new, and chart a path for other projects. These are the same broad goals I've always had for our work on Go, so in that sense Oscar feels like a natural continuation.

The post notes that new tech lead Austin Clements "has been working on Go at Google since 2014" (and Mui since 2016). "Their judgment is superb and their knowledge of Go and the systems it runs on both broad and deep. When I have general design questions or need to better understand details of the compiler, linker, or runtime, I turn to them."

It's important to remember that tech lead — like any position of leadership — is a service role, not an honorary title. I have been leading the Go project for over 12 years, serving all of you, and trying to create the right conditions for all of you to do your best work. Large projects like Go absolutely benefit from stable leadership, but they can also benefit from leadership changes. New leaders bring new strengths and fresh perspectives. For Go, I think 12+ years of one leader is enough stability; it's time for someone new to serve in this role. In particular, I don't believe that the "BDFL" (benevolent dictator for life) model is healthy for a person or a project. It doesn't create space for new leaders. It's a single point of failure. It doesn't give the project room to grow. I think Python benefited greatly from Guido stepping down in 2018 and letting other people lead, and I've had in the back of my mind for many years that we should have a Go leadership change eventually....

I am going to consciously step back from decision making and create space for Austin and the others to step forward, but I am not disappearing. I will still be available to talk about Go designs, review CLs, answer obscure history questions, and generally help and support you all in whatever way I can. I will still file issues and send CLs from time to time, I have been working on a few potential new standard libraries, I will still advocate for Go across the industry, and I will be speaking about Go at GoLab in Italy in November... I am incredibly proud of the work we have all accomplished together, and I am confident in the leaders both on the Go team at Google and in the Go community. You are all doing remarkable work, and I know you will continue to do that.

Read more of this story at Slashdot.

Could AI Speed Up the Design of Nuclear Reactors?

A professor at Brigham Young University "has figured out a way to shave critical years off the complicated design and licensing processes for modern nuclear reactors," according to an announcement from the university. "AI is teaming up with nuclear power."

The typical time frame and cost to license a new nuclear reactor design in the United States is roughly 20 years and $1 billion. To then build that reactor requires an additional five years and between $5 and $30 billion. By using AI in the time-consuming computational design process, [chemical engineering professor Matt] Memmott estimates a decade or more could be cut off the overall timeline, saving millions and millions of dollars in the process — which should prove critical given the nation's looming energy needs.... "Being able to reduce the time and cost to produce and license nuclear reactors will make that power cheaper and a more viable option for environmentally friendly power to meet the future demand...."

Engineers deal with elements from neutrons on the quantum scale all the way up to coolant flow and heat transfer on the macro scale. [Memmott] also said there are multiple layers of physics that are "tightly coupled" in that process: the movement of neutrons is tightly coupled to the heat transfer, which is tightly coupled to materials, which is tightly coupled to the corrosion, which is coupled to the coolant flow. "A lot of these reactor design problems are so massive and involve so much data that it takes months of teams of people working together to resolve the issues," he said... Memmott is finding AI can reduce that heavy time burden and lead to more power production to not only meet rising demands, but also keep power costs down for general consumers...

Technically speaking, Memmott's research proves the concept of replacing a portion of the required thermal hydraulic and neutronics simulations with a trained machine learning model that predicts temperature profiles based on variable geometric reactor parameters, and then optimizing those parameters. The result would create an optimal nuclear reactor design at a fraction of the computational expense required by traditional design methods. For his research, he and BYU colleagues built a dozen machine learning algorithms to examine their ability to process the simulated data needed in designing a reactor. They identified the top three algorithms, then refined the parameters until they found one that worked really well and could handle a preliminary data set as a proof of concept. It worked (and they published a paper on it), so they took the model and (for a second paper) put it to the test on a very difficult nuclear design problem: optimal nuclear shield design. The resulting papers, recently published in the academic journal Nuclear Engineering and Design, showed that their refined model can geometrically optimize the design elements much faster than the traditional method. In two days, Memmott's AI algorithm determined an optimal nuclear-reactor shield design that had taken a real-world molten salt reactor company six months to produce.

"Of course, humans still ultimately make the final design decisions and carry out all the safety assessments," Memmott says in the announcement, "but it saves a significant amount of time at the front end.... Our demand for electricity is going to skyrocket in years to come and we need to figure out how to produce additional power quickly. The only baseload power we can make in the Gigawatt quantities needed that is completely emissions free is nuclear power."

Thanks to long-time Slashdot reader schwit1 for sharing the article.
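The approach described — replacing some expensive thermal-hydraulic and neutronics simulations with a trained model that predicts temperature from geometric parameters, then optimizing those parameters — is a standard surrogate-model optimization loop. The sketch below shows that loop in generic form; the feature names, the toy "simulator," the gradient-boosted regressor, and the peak-temperature objective are illustrative assumptions, not drawn from Memmott's published models.

```python
# Generic surrogate-model loop: fit a fast regression model on a batch of expensive
# simulations, then search the geometry space against the surrogate instead of the simulator.
# Feature names, model choice, and objective are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)

def expensive_simulation(geometry: np.ndarray) -> float:
    """Stand-in for a thermal-hydraulics/neutronics run returning a peak temperature (K)."""
    shield_thickness, channel_width, pitch = geometry
    return (900 + 40 * (shield_thickness - 1.2) ** 2
                + 60 * (channel_width - 0.3) ** 2
                + 25 * (pitch - 2.0) ** 2
                + rng.normal(0, 1))

# 1) Sample geometries and run the expensive simulator once per sample.
lows, highs = [0.5, 0.1, 1.0], [2.0, 0.6, 3.0]
X = rng.uniform(lows, highs, size=(200, 3))
y = np.array([expensive_simulation(g) for g in X])

# 2) Train the surrogate on (geometry -> peak temperature).
surrogate = GradientBoostingRegressor().fit(X, y)

# 3) Search many candidate geometries against the cheap surrogate instead of the simulator.
candidates = rng.uniform(lows, highs, size=(20000, 3))
predicted = surrogate.predict(candidates)
best = candidates[np.argmin(predicted)]
print("candidate geometry:", best, "predicted peak T:", predicted.min())
# A real workflow would re-check the winning candidate with the full simulator before accepting it.
```

The payoff is in step 3: evaluating twenty thousand candidate geometries against the surrogate costs milliseconds, whereas running the full physics simulation that many times is what traditionally takes teams months.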

Read more of this story at Slashdot.

Amazon Labor Union, Airplane Hub Workers Ally with Teamsters Organizing Workers Nationwide

Two prominent unions are teaming up to challenge Amazon, reports the New York Times — "after years of organizing Amazon workers and pressuring the company to bargain over wages and working conditions." Members of the Amazon Labor Union "overwhelmingly chose to affiliate with the 1.3-million-member International Brotherhood of Teamsters" in a vote last Monday. While the Amazon Labor Union (or ALU) is the only union formally representing Amazon warehouse workers anywhere in America after an election in 2022, "it has yet to begin bargaining with Amazon, which continues to contest the election outcome."

Leaders of both unions said the affiliation agreement would put them in a better position to challenge Amazon and would provide the Amazon Labor Union with more money and staff support... The Teamsters are ramping up their efforts to organize Amazon workers nationwide. The union voted to create an Amazon division in 2021, and Teamsters President Sean O'Brien was elected that year partly on a platform of making inroads at the company. The Teamsters told the ALU that they had allocated $8 million to support organizing at Amazon, according to ALU President Christian Smalls, and that the larger union was prepared to tap its more than $300 million strike and defense fund to aid in the effort...

The Teamsters also recently reached an affiliation agreement with workers organizing at Amazon's largest airplane hub in the United States, a Kentucky facility known as KCVG. Experts have said unionizing KCVG could give workers substantial leverage because Amazon relies heavily on the hub to meet its one- and two-day shipping goals.

Their agreement with the Teamsters says the Amazon Labor Union will also "lend its expertise to assist in organizing other Amazon facilities" across America, according to the article.

Read more of this story at Slashdot.

Slashdot Asks: What Do You Remember About the Web in 1994?

"The Short Happy Reign of the CD-ROM" was just one article in a Fast Company series called 1994 Week. As the week rolled along they also re-visited Yahoo, Netscape, and how the U.S. Congress "forced the videogame industry to grow up." But another article argues that it's in web pages from 1994 that "you can start to see in those weird, formative years some surprising signs of what the web would be, and what it could be." It's hard to say precisely when the tipping point was. Many point to September '93, when AOL users first flooded Usenet. But the web entered a new phase the following year. According to an MIT study, at the start of 1994, there were just 623 web servers. By year's end, it was estimated there were at least 10,000, hosting new sites including Yahoo!, the White House, the Library of Congress, Snopes, the BBC, sex.com, and something called The Amazing FishCam. The number of servers globally was doubling every two months. No one had seen growth quite like that before. According to a press release announcing the start of the World Wide Web Foundation that October, this network of pages "was widely considered to be the fastest-growing network phenomenon of all time." As the year began, Web pages were by and large personal and intimate, made by research institutions, communities, or individuals, not companies or brands. Many pages embodied the spirit, or extended the presence, of newsgroups on Usenet, or "User's Net." (Snopes and the Internet Movie Database, which landed on the Web in 1993, began as crowd-sourced projects on Usenet.) But a number of big companies, including Microsoft, Sun, Apple, IBM, and Wells Fargo, established their first modest Web outposts in 1994, a hint of the shopping malls and content farms and slop factories and strip mines to come. 1994 also marked the start of banner ads and online transactions (a CD, pizzas), and the birth of spam and phishing... [B]ack in '94, the salesmen and oilmen and land-grabbers and developers had barely arrived. In the calm before the storm, the Web was still weird, unruly, unpredictable, and fascinating to look at and get lost in. People around the world weren't just writing and illustrating these pages, they were coding and designing them. For the most part, the design was non-design. With a few eye-popping exceptions, formatting and layout choices were simple, haphazard, personal, and — in contrast to most of today's web — irrepressibly charming. There were no table layouts yet; cascading style sheets, though first proposed in October 1994 by Norwegian programmer Håkon Wium Lie, wouldn't arrive until December 1996... The highways and megalopolises would come later, courtesy of some of the world's biggest corporations and increasingly peopled by bots, but in 1994 the internet was still intimate, made by and for individuals... Soon, many people would add "under construction" signs to their Web pages, like a friendly request to pardon our dust. It was a reminder that someone was working on it — another indication of the craft and care that was going into this never-ending quilt of knowledge. The article includes screenshots of Netscape in action from browser-emulating site OldWeb.Today (albeit without using a 14.4 kbps modems). "Look in and think about how and why this web grew the way it did, and what could have been. Or try to imagine what life was like when the web wasn't worldwide yet, and no one knew what it really was." 
Slashdot reader tedlistens calls it "a trip down memory lane," offering "some telling glimpses of the future, and some lessons for it too." The article revisits 1994 sites like Global Network Navigator, Time-Warner's Pathfinder, and Wired's online site HotWired as well as 30-year-old versions of the home pages for Wells Fargo and Microsoft. What did they miss? Share your own memories in the comments. What do you remember about the web in 1994?

Read more of this story at Slashdot.

Amazon Retaliated After Employee Walkout Over Return-to-Office Policy, Says NLRB

America's National Labor Relations Board "has filed a complaint against Amazon..." reports the Verge, "that alleges the company 'unlawfully disciplined and terminated an employee' after they assisted in organizing walkouts last May in protest of Amazon's new return-to-work [three days per week] directives, issued early last year."

[T]housands of Amazon employees signed petitions against the new mandate and staged a walkout several months later. Despite the protests and pushback, according to a report by Insider, in a meeting in early August 2023, Amazon CEO Andy Jassy reaffirmed the company's commitment to employees returning to the office for the majority of the week. The NLRB complaint alleges Amazon "interrogated" employees about the walkout using its internal Chime system. The employee was first put on a performance improvement plan by Amazon following their organizing efforts for the walkout and later "offered a severance payment of nine weeks' salary if the employee signed a severance agreement and global release in exchange for their resignation." According to the NLRB's lawyers, all of that was because the employee engaged in organizing, and the retaliation was intended to discourage "...protected, concerted activities...."

The NLRB's general counsel is seeking several different forms of remediation from Amazon, including reimbursement for the employee's "financial harms and search-for-work and work related expenses," a letter of apology, and a "Notice to Employees" that must be physically posted at the company's facilities across the country, distributed electronically, and read by an Amazon rep at a recorded videoconference.

Amazon says its actions were entirely unrelated to the worker's activism against the return-to-office policy. An Amazon spokesperson told the Verge that instead, the employee "consistently underperformed over a period of nearly a year and repeatedly failed to deliver on projects she was assigned. Despite extensive support and coaching, the former employee was unable to improve her performance and chose to leave the company."

Read more of this story at Slashdot.

Framework Laptop 13 is Getting a Drop-In RISC-V Mainboard Option

An anonymous reader shared this report from the OMG Ubuntu blog: Those of you who own a Framework Laptop 13 — consider me jealous, btw — or are considering buying one in the near future, you may be interested to know that a RISC-V motherboard option is in the works. DeepComputing, the company behind the recently-announced Ubuntu RISC-V laptop, is working with Framework Computer Inc, the company behind the popular, modular, and Linux-friendly Framework laptops, on a RISC-V mainboard. This is a new announcement; the component itself is in early development, and there's no tentative price tag or pre-order date pencilled in... [T]he Framework RISC-V mainboard will use soldered memory and non-upgradeable eMMC storage (though it can boot from microSD cards). It will 'drop into' any Framework Laptop 13 chassis (or Cooler Master Mainboard Case), per Framework's modular ethos... Framework mentions DeepComputing is "working closely with the teams at Canonical and Red Hat to ensure Linux support is solid through Ubuntu and Fedora", which is great news, and cements Canonical's seriousness about supporting Ubuntu on RISC-V.

"We want to be clear that in this generation, it is focused primarily on enabling developers, tinkerers, and hobbyists to start testing and creating on RISC-V," says Framework's announcement. "The peripheral set and performance aren't yet competitive with our Intel and AMD-powered Framework Laptop Mainboards." They're calling the Mainboard "a huge milestone both for expanding the breadth of the Framework ecosystem and for making RISC-V more accessible than ever... DeepComputing is demoing an early prototype of this Mainboard in a Framework Laptop 13 at the RISC-V Summit Europe next week, and we'll be sharing more as this program progresses."

And their announcement included two additional updates:

- "Just like we did for Framework Laptop 16 last week, today we're sharing open source CAD for the Framework Laptop 13 shell, enabling development of skins, cases, and accessories."
- "We now have Framework Laptop 13 Factory Seconds systems available with British English and German keyboards, making entering the ecosystem more affordable than ever."

"We're eager to continue growing a new Consumer Electronics industry that is grounded in open access, repairability, and customization at every level."

Read more of this story at Slashdot.

Why Washington's Mount Rainier Still Makes Volcanologists Worry

It's been 1,000 years since there was a significant volcanic eruption from Mount Rainier, CNN reminds readers. It's a full 60 miles from Tacoma, Washington — and 90 miles from Seattle. Yet "more than Hawaii's bubbling lava fields or Yellowstone's sprawling supervolcano, it's Mount Rainier that has many U.S. volcanologists worried."

"Mount Rainier keeps me up at night because it poses such a great threat to the surrounding communities," said Jess Phoenix, a volcanologist and ambassador for the Union of Concerned Scientists, on an episode of CNN's series "Violent Earth With Liev Schreiber." The sleeping giant's destructive potential lies not with fiery flows of lava, which, in the event of an eruption, would be unlikely to extend more than a few miles beyond the boundary of Mount Rainier National Park in the Pacific Northwest. And the majority of volcanic ash would likely dissipate downwind to the east, away from population centers, according to the US Geological Survey. Instead, many scientists fear the prospect of a lahar — a swiftly moving slurry of water and volcanic rock originating from ice or snow rapidly melted by an eruption that picks up debris as it flows through valleys and drainage channels.

"The thing that makes Mount Rainier tough is that it is so tall, and it's covered with ice and snow, and so if there is any kind of eruptive activity, hot stuff ... will melt the cold stuff and a lot of water will start coming down," said Seth Moran, a research seismologist at USGS Cascades Volcano Observatory in Vancouver, Washington. "And there are tens, if not hundreds of thousands of people who live in areas that potentially could be impacted by a large lahar, and it could happen quite quickly."

The deadliest lahar in recent memory was in November 1985, when Colombia's Nevado del Ruiz volcano erupted. Just a couple hours after the eruption started, a river of mud, rocks, lava and icy water swept over the town of Armero, killing over 23,000 people in a matter of minutes... Bradley Pitcher, a volcanologist and lecturer in Earth and environmental sciences at Columbia University, said in an episode of CNN's "Violent Earth" that Mount Rainier has about eight times the amount of glaciers and snow as Nevado del Ruiz had when it erupted. "There's the potential to have a much more catastrophic mudflow...."

Lahars typically occur during volcanic eruptions but also can be caused by landslides and earthquakes. Geologists have found evidence that at least 11 large lahars from Mount Rainier have reached into the surrounding area, known as the Puget Lowlands, in the past 6,000 years, Moran said. Two major U.S. cities — Tacoma and South Seattle — "are built on 100-foot-thick (30.5-meter) ancient mudflows from eruptions of Mount Rainier," the volcanologist said on CNN's "Violent Earth" series.

CNN's article adds that the US Geological Survey already set up a lahar detection system at Mount Rainier in 1998, "which since 2017 has been upgraded and expanded. About 20 sites on the volcano's slopes and the two paths identified as most at risk of a lahar now feature broadband seismometers that transmit real-time data and other sensors including trip wires, infrasound sensors, web cameras and GPS receivers."

Read more of this story at Slashdot.

Apple Might Partner with Meta on AI

Earlier this month Apple announced a partnership with OpenAI to bring ChatGPT to Siri. "Now, the Wall Street Journal reports that Apple and Facebook's parent company Meta are in talks around a similar deal," according to TechCrunch: A deal with Meta could make Apple less reliant on a single partner, while also providing validation for Meta's generative AI tech. The Journal reports that Apple isn't offering to pay for these partnerships; instead, Apple provides distribution to AI partners who can then sell premium subscriptions... Apple has said it will ask for users' permission before sharing any questions and data with ChatGPT. Presumably, any integration with Meta would work similarly.

Read more of this story at Slashdot.

Michigan Lawmakers Advance Bill Requiring All Public High Schools To At Least Offer CS

Michigan's House of Representatives passed a bill requiring all the state's public high schools to offer a computer science course by the start of the 2027-28 school year. (The bill now goes to the Senate, according to a report from Chalkbeat Detroit.) Long-time Slashdot reader theodp writes:

Michigan is also removing the requirement for CS teacher endorsements in 2026, paving the way for CS courses to be taught in 2027 by teachers who have "demonstrated strong computer science skills" but do not hold a CS endorsement. Michigan's easing of CS teaching requirements comes in the same year that New York State will begin requiring credentials for all CS teachers. With lobbyist Julia Wynn from the tech giant-backed nonprofit Code.org sitting at her side, Michigan State Rep. Carol Glanville introduced the CS bill (HB5649) to the House in May (hearing video, 16:20). "This is not a graduation requirement," Glanville emphasized in her testimony. Code.org's Wynn called the bill "an important first step" — after all, Code.org's goal is "to require all students to take CS to earn a HS diploma" — noting that Code.org has also been closely collaborating with Michigan's Education department "on the language and the Bill since inception."

Wynn went on to inform lawmakers that "even just attending a high school that offers computer science delivers concrete employment and earnings benefits for students," citing a recent Brookings Institution article that also noted "30 states have adopted a key part of Code.org Advocacy Coalition's policy recommendations, which require all high schools to offer CS coursework, while eight states (and counting) have gone a step further in requiring all students to take CS as a high school graduation requirement." Minutes from the hearing report that other parties submitting cards in support of HB 5649 included Amazon (a $3+ million Code.org Platinum Supporter) and AWS (a Code.org In-Kind Supporter), as well as the College Board (which offers the AP CS A and CSP exams) and TechNet (which notes its "teams at the federal and state levels advocate with policymakers on behalf of our member companies").

Read more of this story at Slashdot.

Longtime Linux Wireless Developer Passes Away. RIP Larry Finger

Slashdot reader unixbhaskar shared this report from Phoronix: Larry Finger, who has contributed to the Linux kernel since 2005 and has seen more than 1,500 kernel patches upstreamed into the mainline Linux kernel, has sadly passed away. His wife shared the news of Larry Finger's passing this weekend on the linux-wireless mailing list in a brief statement.

Reactions are being shared around the internet. LWN writes: The LWN Kernel Source Database shows that Finger contributed to 94 releases in the (Git era) kernel history, starting with 2.6.16 — 1,464 commits in total. He will be missed... In part thanks to his contributions, Linux wireless hardware support has come a long way over the past two decades.

Larry was a frequent contributor to the Linux Wireless and Linux Kernel mailing lists. (Here's a 2006 discussion he had about Git with Linus Torvalds.) Larry also answered 54 Linux questions on Quora, and in 2005 wrote three articles for Linux Journal. And Larry's GitHub profile shows 122 contributions to open source projects just in 2024. In Reddit's Linux forum, one commenter wrote, "He was 84 years old and was still writing code. What a legend. May he rest in peace."

Read more of this story at Slashdot.

OpenAI's 'Media Manager' Mocked, Amid Accusations of Robbing Creative Professionals

"Amid the hype surrounding Apple's new deal with OpenAI, one issue has been largely papered over," argues the executive director of America's writers' advocacy group, the Authors Guild. OpenAI's foundational models "are, and have always been, built atop the theft of creative professionals' work."

[L]ast month the company quietly announced Media Manager, scheduled for release in 2025. A tool purportedly designed to allow creators and content owners to control how their work is used, Media Manager is really a shameless attempt to evade responsibility for the theft of artists' intellectual property that OpenAI is already profiting from. OpenAI says this tool would allow creators to identify their work and choose whether to exclude it from AI training processes. But this does nothing to address the fact that the company built its foundational models using authors' and other creators' works without consent, compensation or control over how OpenAI users will be able to imitate the artists' styles to create new works. As it's described, Media Manager puts the burden on creators to protect their work and fails to address the company's past legal and ethical transgressions. This overture is like having your valuables stolen from your home and then hearing the thief say, "Don't worry, I'll give you a chance to opt out of future burglaries ... next year...."

AI companies often argue that it would be impossible for them to license all the content that they need and that doing so would bring progress to a grinding halt. This is simply untrue. OpenAI has signed a succession of licensing agreements with publishers large and small. While the exact terms of these agreements are rarely released to the public, the compensation estimates pale in comparison with the vast outlays for computing power and energy that the company readily spends. Payments to authors would have minimal effects on AI companies' war chests, but receiving royalties for AI training use would be a meaningful new revenue stream for a profession that's already suffering... We cannot trust tech companies that swear their innovations are so important that they do not need to pay for one of the main ingredients — other people's creative works. The "better future" we are being sold by OpenAI and others is, in fact, a dystopia. It's time for creative professionals to stand together, demand what we are owed and determine our own futures.

The Authors Guild (and 17 other plaintiffs) are now in an ongoing lawsuit against OpenAI and Microsoft. The Guild's executive director also notes that there's "a class action filed by visual artists against Stability AI, Runway AI, Midjourney and Deviant Art, a lawsuit by music publishers against Anthropic for infringement of song lyrics, and suits in the U.S. and U.K. brought by Getty Images against Stability AI for copyright infringement of photographs." They conclude that "The best chance for the wider community of artists is to band together."

Read more of this story at Slashdot.

Tuesday SpaceX Launches a NOAA Satellite to Improve Weather Forecasts for Earth and Space

Tuesday a SpaceX Falcon Heavy rocket will launch a special satellite — a state-of-the-art weather-watcher from America's National Oceanic and Atmospheric Administration. It will complete a series of four GOES-R satellite launches that began in 2016. Space.com drills down into how these satellites have changed weather forecasts:

More than seven years later, with three of the four satellites in the series orbiting the Earth, scientists and researchers say they are pleased with the results and how the advanced technology has been a game changer. "I think it has really lived up to its hype in thunderstorm forecasting. Meteorologists can see the convection evolve in near real-time and this gives them enhanced insight on storm development and severity, making for better warnings," John Cintineo, a researcher from NOAA's National Severe Storms Laboratory, told Space.com in an email. "Not only does the GOES-R series provide observations where radar coverage is lacking, but it often provides a robust signal before radar, such as when a storm is strengthening or weakening. I'm sure there have been many other improvements in forecasts and environmental monitoring over the last decade, but this is where I have most clearly seen improvement," Cintineo said.

In addition to helping predict severe thunderstorms, each satellite has collected images and data on heavy rain events that could trigger flooding, detected low clouds and fog as it forms, and has made significant improvements to forecasts and services used during hurricane season. "GOES provides our hurricane forecasters with faster, more accurate and detailed data that is critical for estimating a storm's intensity, including cloud top cooling, convective structures, specific features of a hurricane's eye, upper-level wind speeds, and lightning activity," Ken Graham, director of NOAA's National Weather Service, told Space.com in an email. Instruments such as the Advanced Baseline Imager have three times more spectral channels, four times the image quality, and five times the imaging speed as the previous GOES satellites. The Geostationary Lightning Mapper is the first of its kind in orbit on the GOES-R series that allows scientists to view lightning 24/7 and strikes that make contact with the ground and from cloud to cloud.

"GOES-U and the GOES-R series of satellites provides scientists and forecasters weather surveillance of the entire western hemisphere, at unprecedented spatial and temporal scales," Cintineo said. "Data from these satellites are helping researchers develop new tools and methods to address problems such as lightning prediction, sea-spray identification (sea-spray is dangerous for mariners), severe weather warnings, and accurate cloud motion estimation. The instruments from GOES-R also help improve forecasts from global and regional numerical weather models, through improved data assimilation."

The final satellite, launching Tuesday, includes a new sensor — the Compact Coronagraph — "that will monitor weather outside of Earth's atmosphere, keeping an eye on what space weather events are happening that could impact our planet," according to the article. "It will be the first near real time operational coronagraph that we have access to," Rob Steenburgh, a space scientist at NOAA's Space Weather Prediction Center, told Space.com on the phone. "That's a huge leap for us because up until now, we've always depended on a research coronagraph instrument on a spacecraft that was launched quite a long time ago."

Read more of this story at Slashdot.

Foundation Honoring 'Star Trek' Creator Offers $1M Prize for AI Startup Benefiting Humanity

The Roddenberry Foundation — named for Star Trek creator Gene Roddenberry — "announced Tuesday that this year's biennial award would focus on artificial intelligence that benefits humanity," reports the Los Angeles Times: Lior Ipp, chief executive of the foundation, told The Times there's a growing recognition that AI is becoming more ubiquitous and will affect all aspects of our lives. "We are trying to ... catalyze folks to think about what AI looks like if it's used for good," Ipp said, "and what it means to use AI responsibly, ethically and toward solving some of the thorny global challenges that exist in the world...." Ipp said the foundation shares the broad concern about AI and sees the award as a means to potentially contribute to creating those guardrails... Inspiration for the theme was also borne out of the applications the foundation received last time around. Ipp said the prize, which is "issue-agnostic" but focused on early-stage tech, produced compelling uses of AI and machine learning in agriculture, healthcare, biotech and education. "So," he said, "we sort of decided to double down this year on specifically AI and machine learning...." Though the foundation isn't prioritizing a particular issue, the application states that it is looking for ideas that have the potential to push the needle on one or more of the United Nations' 17 sustainable development goals, which include eliminating poverty and hunger as well as boosting climate action and protecting life on land and underwater. The Foundation's most recent winner was Sweden-based Elypta, according to the article, "which Ipp said is using liquid biopsies, such as a blood test, to detect cancer early." "We believe that building a better future requires a spirit of curiosity, a willingness to push boundaries, and the courage to think big," said Rod Roddenberry, co-founder of the Roddenberry Foundation. "The Prize will provide a significant boost to AI pioneers leading these efforts." According to the Foundation's announcement, the Prize "embodies the Roddenberry philosophy's promise of a future in which technology and human ingenuity enable everyone — regardless of background — to thrive." "By empowering entrepreneurs to dream bigger and innovate valiantly, the Roddenberry Prize seeks to catalyze the development of AI solutions that promote abundance and well-being for all."

Read more of this story at Slashdot.

Some People Who Rented a Tesla from Hertz Were Still Charged for Gas

"Last week, we reported on a customer who was charged $277 for gasoline his rented Tesla couldn't have possibly used," writes the automotive blog The Drive. "And now, we've heard from other Hertz customers who say they've been charged even more." Hertz caught attention last week for how it handled a customer whom it had charged a "Skip the Pump" fee, which allows renters to pay a premium for Hertz to refill the tank for them. But of course, this customer's rented Tesla Model 3 didn't use gas — it draws power from a battery — and Hertz has a separate, flat fee for EV recharges. Nevertheless, the customer was charged $277.39 despite returning the car with the exact same charge they left with, and Hertz refused to refund it until after our story ran. It's no isolated incident either, as other customers have written in to inform us that it happened to them, too.... Evan Froehlich returned the rental at 21 percent charge, expecting to pay a flat $25 recharge fee. (It's ordinarily $35, but Hertz's loyalty program discounts it.) To Froehlich's surprise, he was hit with a $340.97 "Skip the Pump" fee, which can be applied after returning a car if it's not requested beforehand. He says Hertz's customer service was difficult to reach, and that it took making a ruckus on social media to get Hertz's attention. In the end, a Hertz representative was able to review the charge and have it reversed.... A March 2023 Facebook post documenting a similar case indicates this has been happening for more than a year. After renting a Tesla Model 3, another customer even got a $475.19 "fuel charge," according to the article — in addition to a $25 charging fee: They also faced a $125.01 "rebill" for using the Supercharger network during their rental, which other Hertz customers have expressed surprise and frustration with. Charging costs can vary, but a 75-percent charge from a Supercharger will often cost in the region of just $15.

Read more of this story at Slashdot.

What Happened After a Reporter Tracked Down The Identity Thief Who Stole $5,000

"$5,000 in cash had been withdrawn from my checking account — but not by me," writes journalist Linda Matchan in the Boston Globe. A police station manager reviewed footage from the bank — which was 200 miles away — and deduced that "someone had actually come into the bank and spoken to a teller, presented a driver's license, and then correctly answered some authentication questions to validate the account..." "You're pitting a teller against a national crime syndicate with massive resources behind them," says Paul Benda, executive vice president for risk, fraud, and cybersecurity at the American Bankers Association. "They're very well-funded, well-resourced criminal gangs doing this at an industrial scale." The reporter writes that "For the past two years, I've worked to determine exactly who and what lay behind this crime..." [N]ow I had something new to worry about: Fraudsters apparently had a driver's license with my name on it... "Forget the fake IDs adolescents used to get into bars," says Georgia State's David Maimon, who is also head of fraud insights at SentiLink, a company that works with institutions across the United States to support and solve their fraud and risk issues. "Nowadays fraudsters are using sophisticated software and capable printers to create virtually impossible-to-detect fake IDs." They're able to create synthetic identities, combining legitimate personal information, such as a name and date of birth, with a nine-digit number that either looks like a Social Security number or is a real, stolen one. That ID can then be used to open financial accounts, apply for a bank or car loan, or for some other dodgy purpose that could devastate their victims' financial lives. And there's a complex supply chain underpinning it all — "a whole industry on the dark web," says Eva Velasquez, president and CEO of the Identity Theft Resource Center, a nonprofit that helps victims undo the damage wrought by identity crime. It starts with the suppliers, Maimon told me — "the people who steal IDs, bring them into the market, and manufacture them. There's the producers who take the ID and fake driver's licenses and build the facade to make it look like they own the identity — trying to create credit reports for the synthetic identities, for example, or printing fake utility bills." Then there are the distributors who sell them in the dark corners of the web or the street or through text messaging apps, and finally the customers who use them and come from all walks of life. "We're seeing females and males and people with families and a lot of adolescents, because social media plays a very important role in introducing them to this world," says Maimon, whose team does surveillance of criminals' activities and interactions on the dark web. "In this ecosystem, folks disclose everything they do." The reporter writes that "It's horrifying to discover, as I have recently, that someone has set up a tech company that might not even be real, listing my home as its principal address." Two and a half months after the theft the stolen $5,000 was back in their bank account — but it wasn't until a year later that the thief was identified. "The security video had been shared with New York's Capital Region Crime Analysis Center, where analysts have access to facial recognition technology, and was run through a database of booking photos. A possible match resulted.... She was already in custody elsewhere in New York... 
Evidently, Deborah was being sought by law enforcement in at least three New York counties. [All three cases involved bank-related identity fraud.]" Deborah was finally charged with two separate felonies: grand larceny in the third degree for stealing property over $3,000, and identity theft. But Deborah missed her next two court dates, and disappeared. "She never came back to court, and now there were warrants for her arrest out of two separate courts." After speaking to police officials the reporter concludes "There was a good chance she was only doing the grunt work for someone else, maybe even a domestic or foreign-organized crime syndicate, and then suffering all the consequences." The UK minister of state for security even says that "in some places people are literally captured and used as unwilling operators for fraudsters."

Read more of this story at Slashdot.

Ubuntu 24.10 to Default to Wayland for NVIDIA Users

An anonymous reader shared this report from the blog OMG Ubuntu: Ubuntu first switched to using Wayland as its default display server in 2017 before reverting the following year. It tried again in 2021 and has stuck with it since. But while Wayland is what most of us now log into after installing Ubuntu, anyone doing so on a PC or laptop with an NVIDIA graphics card present instead logs into an Xorg/X11 session. This is because NVIDIA's proprietary graphics drivers (which many, especially gamers, opt for to get the best performance, access to full hardware capabilities, etc.) have not supported Wayland as well as they could have. Past tense as, thankfully, things have changed in the past few years. NVIDIA's warmed up to Wayland (partly as it has no choice given that Wayland is now standard and no longer a 'maybe one day' solution, and partly because it wants to: opportunities/benefits/security). With the NVIDIA + Wayland sitch' now in a better state than before — but not perfect — Canonical's engineers say they feel confident enough in the experience to make the Ubuntu Wayland session default for NVIDIA graphics card users in Ubuntu 24.10.

Read more of this story at Slashdot.

Linux Foundation Announces Launch of 'High Performance Software Foundation'

This week the nonprofit Linux Foundation announced the launch of the High Performance Software Foundation, which "aims to build, promote, and advance a portable core software stack for high performance computing" (or HPC) by "increasing adoption, lowering barriers to contribution, and supporting development efforts." It promises initiatives focused on "continuously built, turnkey software stacks," as well as other initiatives including architecture support and performance regression testing. Its first open source technical projects are:
- Spack: the HPC package manager.
- Kokkos: a performance-portable programming model for writing modern C++ applications in a hardware-agnostic way.
- Viskores (formerly VTK-m): a toolkit of scientific visualization algorithms for accelerator architectures.
- HPCToolkit: performance measurement and analysis tools for computers ranging from desktop systems to GPU-accelerated supercomputers.
- Apptainer: formerly known as Singularity, a Linux Foundation project providing a high performance, full featured HPC and computing optimized container subsystem.
- E4S: a curated, hardened distribution of scientific software packages.
As use of HPC becomes ubiquitous in scientific computing and digital engineering, and AI use cases multiply, more and more data centers deploy GPUs and other compute accelerators. The High Performance Software Foundation will provide a neutral space for pivotal projects in the high performance computing ecosystem, enabling industry, academia, and government entities to collaborate on the scientific software. The High Performance Software Foundation benefits from strong support across the HPC landscape, including Premier Members Amazon Web Services (AWS), Hewlett Packard Enterprise, Lawrence Livermore National Laboratory, and Sandia National Laboratories; General Members AMD, Argonne National Laboratory, Intel, Kitware, Los Alamos National Laboratory, NVIDIA, and Oak Ridge National Laboratory; and Associate Members University of Maryland, University of Oregon, and Centre for Development of Advanced Computing. In a statement, an AMD vice president said that by joining "we are using our collective hardware and software expertise to help develop a portable, open-source software stack for high-performance computing across industry, academia, and government." And an AWS executive said the high-performance computing community "has a long history of innovation being driven by open source projects. AWS is thrilled to join the High Performance Software Foundation to build on this work. In particular, AWS has been deeply involved in contributing upstream to Spack, and we're looking forward to working with the HPSF to sustain and accelerate the growth of key HPC projects so everyone can benefit." The new foundation will "set up a technical advisory committee to manage working groups tackling a variety of HPC topics," according to the announcement, following a governance model based on the Cloud Native Computing Foundation.

Read more of this story at Slashdot.

FORTRAN and COBOL Re-enter TIOBE's Ranking of Programming Language Popularity

"The TIOBE Index sets out to reflect the relative popularity of computer languages," writes i-Programmer, "so it comes as something of a surprise to see two languages dating from the 1950's in this month's Top 20. Having broken into the the Top 20 in April 2021 Fortran has continued to rise and has now risen to it's highest ever position at #10... The headline for this month's report by Paul Jansen on the TIOBE index is: Fortran in the top 10, what is going on? Jansen's explanation points to the fact that there are more than 1,000 hits on Amazon for "Fortran Programming" while languages such as Kotlin and Rust, barely hit 300 books for the same search query. He also explains that Fortran is still evolving with the new ISO Fortran 2023 definition published less than half a year ago.... The other legacy language that is on the rise in the TIOBE index is COBOL. We noticed it re-enter the Top 20 in January 2024 and, having dropped out in the interim, it is there again this month. More details from TechRepublic: Along with Fortran holding on to its spot in the rankings, there were a few small changes in the top 10. Go gained 0.61 percentage points year over year, rising from tenth place in May 2023 to eighth this year. C++ rose slightly in popularity year over year, from fourth place to third, while Java (-3.53%) and Visual Basic (-1.8) fell. Here's how TIOBE ranked the 10 most popular programming languages in May: Python C C++ Java C# JavaScript Visual Basic Go SQL Fortran On the rival PYPL ranking of programming language popularity, Fortran does not appear anywhere in the top 29. A note on its page explains that "Worldwide, Python is the most popular language, Rust grew the most in the last 5 years (2.1%) and Java lost the most (-4.0%)." Here's how it ranks the 10 most popular programming languages for May: Python (28.98% share) Java (15.97% share) JavaScript (8.79%) C# (6.78% share) R (4.76% share) PHP (4.55% share) TypeScript (3.03% share) Swift (2.76% share) Rust (2.6% share)

Read more of this story at Slashdot.

Blue Origin Successfully Launches Six Passengers to the Edge of Space

"Blue Origin's tourism rocket has launched passengers to the edge of space for the first time in nearly two years," reports CNN, "ending a hiatus prompted by a failed uncrewed test flight." The New Shepard rocket and capsule lifted off at 9:36 a.m. CT (10:36 a.m. ET) from Blue Origin's facilities on a private ranch in West Texas. NS-25, Blue Origin's seventh crewed flight to date, carried six customers aboard the capsule: venture capitalist Mason Angel; Sylvain Chiron, founder of the French craft brewery Brasserie Mont-Blanc; software engineer and entrepreneur Kenneth L. Hess; retired accountant Carol Schaller; aviator Gopi Thotakura; and Ed Dwight, a retired US Air Force captain selected by President John F. Kennedy in 1961 to be the nation's first Black astronaut candidate... Dwight completed that challenge and reached the edge of space at the age of 90, making him the oldest person to venture to such heights, according to a spokesperson from Blue Origin... "It's a life-changing experience," he said. "Everybody needs to do this." The rocket booster landed safely a couple minutes prior to the capsule. During the mission, the crew soared to more than three times the speed of sound, or more than 2,000 miles per hour. The rocket vaulted the capsule past the Kármán line, an area 62 miles (100 kilometers) above Earth's surface that is widely recognized as the altitude at which outer space begins... "And at the peak of the flight, passengers experienced a few minutes of weightlessness and striking views of Earth through the cabin windows."

Read more of this story at Slashdot.

China Uses Giant Rail Gun to Shoot a Smart Bomb Nine Miles Into the Sky

"China's navy has apparently tested out a hypersonic rail gun," reports Futurism, describing it as "basically a device that uses a series of electromagnets to accelerate a projectile to incredible speeds." But "during a demonstration of its power, things didn't go quite as planned." As the South China Morning Post reports, the rail gun test lobbed a precision-guided projectile — or smart bomb — nine miles into the stratosphere. But because it apparently didn't go up as high as it was supposed to, the test was ultimately declared unsuccessful. This conclusion came after an analysis led by Naval Engineering University professor Lu Junyong, whose team found with the help of AI that even though the winged smart bomb exceeded Mach 5 speeds, it didn't perform as well as it could have. This occurred, as Lu's team found, because the projectile was spinning too fast during its ascent, resulting in an "undesirable tilt." But what's more interesting is the project itself. "Successful or not, news of the test is a pretty big deal given that it was just a few months ago that reports emerged about China's other proposed super-powered rail gun, which is intended to send astronauts on a Boeing 737-size ship into space.... which for the record did not make it all the way to space..." Chinese officials, meanwhile, are paying lip service to the hypersonic rail gun technology's potential to revolutionize civilian travel by creating even faster railways and consumer space launches, too. Japan and France also have railgun projects, according to a recent article from Defense One. "Yet the nation that has demonstrated the most continuing interest is China," with records of railgun work dating back as far as 2011: The Chinese team claimed that their railgun can fire a projectile 100 to 200 kilometers at Mach 6. Perhaps most importantly, it uses up to 100,000 AI-enabled sensors to identify and fix any problems before critical failure, and can slowly improve itself over time. This, they said, had enabled them to test-fire 120 rounds in a row without failure, which, if true, suggests that they solved a longstanding problem that reportedly bedeviled U.S. researchers. However, the team still has a ways to go before mounting an operational railgun on a ship; according to one Chinese article, the projectiles fired were only 25mm caliber, well below the size of even lightweight naval artillery. As with many other Chinese defense technology programs, much remains opaque about the program... While railguns tend to get the headlines, this lab has made advances in a wide range of electric and electromagnetic applications for the PLA Navy's warships. For example, the lab's research on electromagnetic launch technology has also been applied to the development of electromagnetic catapults for the PLAN's growing aircraft carrier fleet... While it remains to be seen whether the Chinese navy can develop a full-scale railgun, produce it at scale, and integrate it onto its warships, it is obvious that it has made steady advances in recent years on a technology of immense military significance that the US has abandoned. Thanks to long-time Slashdot reader Tangential for sharing the news.

Read more of this story at Slashdot.

AI 'Godfather' Geoffrey Hinton: If AI Takes Jobs We'll Need Universal Basic Income

"The computer scientist regarded as the 'godfather of artificial intelligence' says the government will have to establish a universal basic income to deal with the impact of AI on inequality," reports the BBC: Professor Geoffrey Hinton told BBC Newsnight that a benefits reform giving fixed amounts of cash to every citizen would be needed because he was "very worried about AI taking lots of mundane jobs". "I was consulted by people in Downing Street and I advised them that universal basic income was a good idea," he said. He said while he felt AI would increase productivity and wealth, the money would go to the rich "and not the people whose jobs get lost and that's going to be very bad for society". "Until last year he worked at Google, but left the tech giant so he could talk more freely about the dangers from unregulated AI," according to the article. Professor Hinton also made this predicction to the BBC. "My guess is in between five and 20 years from now there's a probability of half that we'll have to confront the problem of AI trying to take over". He recommended a prohibition on the military use of AI, warning that currently "in terms of military uses I think there's going to be a race".

Read more of this story at Slashdot.

US Defense Department 'Concerned' About ULA's Slow Progress on Satellite Launches

Earlier this week the Washington Post reported that America's Defense Department "is growing concerned that the United Launch Alliance, one of its key partners in launching national security satellites to space, will not be able to meet its needs to counter China and build its arsenal in orbit with a new rocket that ULA has been developing for years." In a letter sent Friday to the heads of Boeing's and Lockheed Martin's space divisions, Air Force Assistant Secretary Frank Calvelli used unusually blunt terms to say he was growing "concerned" with the development of the Vulcan rocket, which the Pentagon intends to use to launch critical national security payloads but which has been delayed for years. ULA, a joint venture of Boeing and Lockheed Martin, was formed nearly 20 years ago to provide the Defense Department with "assured access" to space. "I am growing concerned with ULA's ability to scale manufacturing of its Vulcan rocket and scale its launch cadence to meet our needs," he wrote in the letter, a copy of which was obtained by The Washington Post. "Currently there is military satellite capability sitting on the ground due to Vulcan delays...." ULA originally won 60 percent of the Pentagon's national security payloads under the current contract, known as Phase 2. SpaceX won an award for the remaining 40 percent, but it has been flying its reusable Falcon 9 rocket at a much higher rate. ULA launched only three rockets last year, as it transitions to Vulcan; SpaceX launched nearly 100, mostly to put up its Starlink internet satellite constellation. Both are now competing for the next round of Pentagon contracts, a highly competitive procurement worth billions of dollars over several years. ULA is reportedly up for sale; Blue Origin is said to be one of the suitors... In a statement to The Post, ULA said that its "factory and launch site expansions have been completed or are on track to support our customers' needs with nearly 30 launch vehicles in flow at the rocket factory in Decatur, Alabama." Last year, ULA CEO Tory Bruno said in an interview that the deal with Amazon would allow the company to increase its flight rate to 20 to 25 a year and that to meet that cadence it was hiring "several hundred" more employees. The more often Vulcan flies, he said, the more efficient the company would become. "Vulcan is much less expensive" than the Atlas V rocket that the ULA currently flies, Bruno said, adding that ULA intends to eventually reuse the engines. "As the flight rate goes up, there's economies of scale, so it gets cheaper over time. And of course, you're introducing reusability, so it's cheaper. It's just getting more and more competitive." The article also notes that years ago ULA "decided to eventually retire its workhorse Atlas V rocket after concerns within the Pentagon and Congress that it relied on a Russian-made engine, the RD-180. In 2014, the company entered into a partnership with Jeff Bezos' Blue Origin to provide its BE-4 engines for use on Vulcan. However, the delivery of those engines was delayed for years — one of the reasons Vulcan's first flight didn't take place until earlier this year." The article says Calvelli's letter cited the Pentagon's need to move quickly as adversaries build capabilities in space, noting "counterspace threats" and adding that "our adversaries would seek to deny us the advantage we get from space during a potential conflict."
"The United States continues to face an unprecedented strategic competitor in China, and our space environment continues to become more contested, congested and competitive."

Read more of this story at Slashdot.

Amazon Defends Its Use of Signal Messages in Court

America's Federal Trade Commission and 17 states filed an antitrust suit against Amazon in September. This week Amazon responded in court about its usage of Signal's "disappearing messages" feature. Long-time Slashdot reader theodp shares GeekWire's report: At a company known for putting its most important ideas and strategies into comprehensive six-page memos, quick messages between executives aren't the place for meaningful business discussions. That's one of the points made by Amazon in its response Monday to the Federal Trade Commission's allegations about executives' use of the Signal encrypted communications app, known for its "disappearing messages" feature. "For these individuals, just like other short-form messaging, Signal was not a means to send 'structured, narrative text'; it was a way to get someone's attention or have quick exchanges on sensitive topics like public relations or human resources," the company says as part of its response, filed Monday in U.S. District Court in Seattle. Of course, for regulators investigating the company's business practices, these offhanded private comments between Amazon executives could be more revealing than carefully crafted memos meant for wider internal distribution. But in its filing this week, Amazon says there is no evidence that relevant messages have been lost, or that Signal was used to conceal communications that would have been responsive to the FTC's discovery requests. The company says "the equally logical explanation — made more compelling by the available evidence — is that such messages never existed." In an April 25 motion, the FTC argued that the absence of Signal messages from Amazon discussing substantive business issues relevant to the case was a strong indication that such messages had disappeared. "Amazon executives deleted many Signal messages during Plaintiffs' pre-Complaint investigation, and Amazon did not instruct its employees to preserve Signal messages until over fifteen months after Amazon knew that Plaintiffs' investigation was underway," the FTC wrote in its motion. "It is highly likely that relevant information has been destroyed as a result of Amazon's actions and inactions...." Amazon's filing quotes the company's founder, Jeff Bezos, saying in a deposition in the case that "[t]o discuss anything in text messaging or Signal messaging or anything like that of any substance would be akin to business malpractice. It's just too short of a messaging format...." The company's filing traces the initial use of Signal by executives back to the suspected hacking of Bezos' phone in 2018, which prompted the Amazon founder to seek ways to send messages more securely.

Read more of this story at Slashdot.

Deep Fake Scams Growing in Global Frequency and Sophistication, Victim Warns

In an elaborate scam in January, "a finance worker was duped into attending a video call with people he believed were the chief financial officer and other members of staff," remembers CNN. But Hong Kong police later said that all of them turned out to be deepfake re-creations which duped the employee into transferring $25 million. According to police, the worker had initially suspected he had received a phishing email from the company's UK office, as it specified the need for a secret transaction to be carried out. However, the worker put aside his doubts after the video call because other people in attendance had looked and sounded just like colleagues he recognized. Now the targeted company has been revealed: a major engineering consulting firm, with 18,500 employees across 34 offices: A spokesperson for London-based Arup told CNN on Friday that it notified Hong Kong police in January about the fraud incident, and confirmed that fake voices and images were used. "Unfortunately, we can't go into details at this stage as the incident is still the subject of an ongoing investigation. However, we can confirm that fake voices and images were used," the spokesperson said in an emailed statement. "Our financial stability and business operations were not affected and none of our internal systems were compromised," the person added... Authorities around the world are growing increasingly concerned about the sophistication of deepfake technology and the nefarious uses it can be put to. In an internal memo seen by CNN, Arup's East Asia regional chairman, Michael Kwok, said the "frequency and sophistication of these attacks are rapidly increasing globally, and we all have a duty to stay informed and alert about how to spot different techniques used by scammers." The company's global CIO emailed CNN this statement: "Like many other businesses around the globe, our operations are subject to regular attacks, including invoice fraud, phishing scams, WhatsApp voice spoofing, and deepfakes. "What we have seen is that the number and sophistication of these attacks has been rising sharply in recent months." Slashdot reader st33ld13hl adds that in a world of Deep Fakes, insurance company USAA is now asking its customers to authenticate with voice. (More information here.) Thanks to Slashdot reader quonset for sharing the news.

Read more of this story at Slashdot.

Are Car Companies Sabotaging the Transition to Electric Vehicles?

The thinktank InfluenceMap produces "data-driven analysis on how business and finance are impacting the climate crisis." Their web site says their newest report documents "How automaker lobbying threatens the global transition to electric vehicles." This report analyses the climate policy engagement strategies of fifteen of the largest global automakers in seven key regions (Australia, EU, Japan, India, South Korea, UK, US). It shows how even in countries where major climate legislation has recently passed, such as the US and Australia, the ambition of these policies has been weakened due to industry pressure. All fifteen automakers, except Tesla, have actively advocated against at least one policy promoting electric vehicles. Ten of the fifteen showed a particularly high intensity of negative engagement and scored a final grade of D or D+ by InfluenceMap's methodology. Toyota is the lowest-scoring company in this analysis, driving opposition to climate regulations promoting battery electric vehicles in multiple regions, including the US, Australia and UK. Of all automakers analyzed, only Tesla (scoring B) is found to have positive climate advocacy aligned with science-based policy. CleanTechnica writes that Toyota "led on hybrid vehicles (and still does), so it's actually not surprising that it has been opposed to the next stage of climate-cutting auto evolution — it's clinging on to its lead rather than continuing to innovate for a new era." More from InfluenceMap: Only three of fifteen companies — Tesla, Mercedes-Benz and BMW — are forecast to produce enough electric vehicles by 2030 to meet the International Energy Agency's updated 1.5°C pathway of 66% electric vehicle (battery electric, fuel cell and plug-in hybrids) sales according to InfluenceMap's independent analysis of industry-standard data from February 2024. Current industry forecasts analyzed for this report show automaker production will reach only 53% electric vehicles in 2030. Transport is the third-largest source of greenhouse gas emissions globally, and road transport is failing to decarbonize at anywhere near the rate of many other industries. InfluenceMap's report also finds that Japanese automakers are the least prepared for an electric vehicle transition and are engaging the hardest against it. "InfluenceMap highlights that these anti-EV efforts in the industry are often coming from industry associations rather than coming directly from automakers, shielding them a bit from inevitable public backlash," writes CleanTechnica. "Every automaker included in the study except Tesla remains a member of at least two of these groups," InfluenceMap reports, "with most automakers a member of at least five." Thanks to Slashdot reader Baron_Yam for sharing the news.

Read more of this story at Slashdot.

America Takes Its Biggest Step Yet to End Coal Mining

The Washington Post reports that America took "one of its biggest steps yet to keep fossil fuels in the ground," announcing Thursday that it will end new coal leasing in the Powder River Basin, "which produces nearly half the coal in the United States..." "It could prevent billions of tons of coal from being extracted from more than 13 million acres across Montana and Wyoming, with major implications for U.S. climate goals." A significant share of the nation's fossil fuels come from federal lands and waters. The extraction and combustion of these fuels accounted for nearly a quarter of U.S. carbon dioxide emissions between 2005 and 2014, according to a study by the U.S. Geological Survey. In a final environmental impact statement released Thursday, Interior's Bureau of Land Management found that continued coal leasing in the Powder River Basin would harm the climate and public health. The bureau determined that no future coal leasing should happen in the basin, and it estimated that coal mining in the Wyoming portion of the region would end by 2041. Last year, the Powder River Basin generated 251.9 million tons of coal, accounting for nearly 44 percent of all coal produced in the United States. Under the bureau's determination, the 14 active coal mines in the Powder River Basin can continue operating on lands they have leased, but they cannot expand onto other public lands in the region... "This means that billions of tons of coal won't be burned, compared to business as usual," said Shiloh Hernandez, a senior attorney at the environmental law firm Earthjustice. "It's good news, and it's really the only defensible decision the BLM could have made, given the current climate crisis...." The United States is moving away from coal, which has struggled to compete economically with cheaper gas and renewable energy. U.S. coal output tumbled 36 percent from 2015 to 2023, according to the Energy Information Administration. The Sierra Club's Beyond Coal campaign estimates that 382 coal-fired power plants have closed down or proposed to retire, with 148 remaining. In addition, the Environmental Protection Agency finalized an ambitious set of rules in April aimed at slashing air pollution, water pollution and planet-warming emissions spewing from the nation's power plants. One of the most significant rules will push all existing coal plants by 2039 to either close or capture 90 percent of their carbon dioxide emissions at the smokestack. "The nation's electricity generation needs are being met increasingly by wind, solar and natural gas," said Tom Sanzillo, director of financial analysis at the Institute for Energy Economics and Financial Analysis, an energy think tank. "The nation doesn't need any increase in the amount of coal under lease out of the Powder River Basin."

Read more of this story at Slashdot.

Robot Dogs Armed With AI-aimed Rifles Undergo US Marines Special Ops Evaluation

Long-time Slashdot reader SonicSpike shared this report from Ars Technica: The United States Marine Forces Special Operations Command (MARSOC) is currently evaluating a new generation of robotic "dogs" developed by Ghost Robotics, with the potential to be equipped with gun systems from defense tech company Onyx Industries, reports The War Zone. While MARSOC is testing Ghost Robotics' quadrupedal unmanned ground vehicles (called "Q-UGVs" for short) for various applications, including reconnaissance and surveillance, it's the possibility of arming them with weapons for remote engagement that may draw the most attention. But it's not unprecedented: The US Marine Corps has also tested robotic dogs armed with rocket launchers in the past. MARSOC is currently in possession of two armed Q-UGVs undergoing testing, as confirmed by Onyx Industries staff, and their gun systems are based on Onyx's SENTRY remote weapon system (RWS), which features an AI-enabled digital imaging system and can automatically detect and track people, drones, or vehicles, reporting potential targets to a remote human operator that could be located anywhere in the world. The system maintains a human-in-the-loop control for fire decisions, and it cannot decide to fire autonomously. On LinkedIn, Onyx Industries shared a video of a similar system in action. In a statement to The War Zone, MARSOC states that weaponized payloads are just one of many use cases being evaluated. MARSOC also clarifies that comments made by Onyx Industries to The War Zone regarding the capabilities and deployment of these armed robot dogs "should not be construed as a capability or a singular interest in one of many use cases during an evaluation."

Read more of this story at Slashdot.

Why a 'Frozen' Distribution Linux Kernel Isn't the Safest Choice for Security

Jeremy Allison — Sam (Slashdot reader #8,157) is a Distinguished Engineer at Rocky Linux creator CIQ. This week he published a blog post responding to promises of Linux distros "carefully selecting only the most polished and pristine open source patches from the raw upstream open source Linux kernel in order to create the secure distribution kernel you depend on in your business." But do carefully curated software patches (applied to a known "frozen" Linux kernel) really bring greater security? "After a lot of hard work and data analysis by my CIQ kernel engineering colleagues Ronnie Sahlberg and Jonathan Maple, we finally have an answer to this question. It's no." The data shows that "frozen" vendor Linux kernels, created by branching off a release point and then using a team of engineers to select specific patches to back-port to that branch, are buggier than the upstream "stable" Linux kernel created by Greg Kroah-Hartman. How can this be? If you want the full details, the link to the white paper is here. But the results of the analysis couldn't be clearer.
- A "frozen" vendor kernel is an insecure kernel. A vendor kernel released later in the release schedule is doubly so.
- The number of known bugs in a "frozen" vendor kernel grows over time. The growth in the number of bugs even accelerates over time.
- There are too many open bugs in these kernels for it to be feasible to analyze or even classify them....
[T]hinking that you're making a more secure choice by using a "frozen" vendor kernel isn't a luxury we can still afford to believe. As Greg Kroah-Hartman explicitly said in his talk "Demystifying the Linux Kernel Security Process": "If you are not using the latest stable / longterm kernel, your system is insecure." CIQ describes its report as "a count of all the known bugs from an upstream kernel that were introduced, but never fixed in RHEL 8." For the most recent RHEL 8 kernels, at the time of writing, these counts are:
- RHEL 8.6: 5034
- RHEL 8.7: 4767
- RHEL 8.8: 4594
In RHEL 8.8 we have a total of 4594 known bugs with fixes that exist upstream, but for which known fixes have not been back-ported to RHEL 8.8. The situation is worse for RHEL 8.6 and RHEL 8.7 as they cut off back-porting earlier than RHEL 8.8 but of course that did not prevent new bugs from being discovered and fixed upstream.... This whitepaper is not meant as a criticism of the engineers working at any Linux vendors who are dedicated to producing high quality work in their products on behalf of their customers. This problem is extremely difficult to solve. We know this is an open secret amongst many in the industry and would like to put concrete numbers describing the problem to encourage discussion. Our hope is for Linux vendors and the community as a whole to rally behind the kernel.org stable kernels as the best long term supported solution. As engineers, we would prefer this to allow us to spend more time fixing customer specific bugs and submitting feature improvements upstream, rather than the endless grind of backporting upstream changes into vendor kernels, a practice which can introduce more bugs than it fixes. ZDNet calls it "an open secret in the Linux community." It's not enough to use a long-term support release. You must use the most up-to-date release to be as secure as possible. Unfortunately, almost no one does that. Nevertheless, as Google Linux kernel engineer Kees Cook explained, "So what is a vendor to do?
The answer is simple, if painful: Continuously update to the latest kernel release, either major or stable." Why? As Kroah-Hartman explained, "Any bug has the potential of being a security issue at the kernel level...." Although [CIQ's] programmers examined RHEL 8.8 specifically, this is a general problem. They would have found the same results if they had examined SUSE, Ubuntu, or Debian Linux. Rolling-release Linux distros such as Arch, Gentoo, and OpenSUSE Tumbleweed constantly release the latest updates, but they're not used in businesses. Jeremy Allison's post points out that "the Linux kernel used by Android devices is based on the upstream kernel and also has a stable internal kernel ABI, so this isn't an insurmountable problem..."
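As a rough illustration of the "are you behind the latest stable?" question, here is a minimal sketch in C (a toy illustration, not CIQ's tooling or methodology, and the file name and reference version are assumptions): it reads the running kernel's version with uname(2) and compares its major.minor numbers against a reference version you supply yourself, for example the current stable release listed on kernel.org.

/*
 * kernel_check.c: toy comparison of the running kernel against a
 * user-supplied "latest stable" version.
 * Build: cc -o kernel_check kernel_check.c
 * Run:   ./kernel_check 6.9
 */
#include <stdio.h>
#include <sys/utsname.h>

int main(int argc, char **argv)
{
    struct utsname u;
    int run_major, run_minor, ref_major, ref_minor;

    if (argc != 2 || sscanf(argv[1], "%d.%d", &ref_major, &ref_minor) != 2) {
        fprintf(stderr, "usage: %s <latest stable, e.g. 6.9>\n", argv[0]);
        return 2;
    }
    if (uname(&u) != 0) {   /* fills u.release with e.g. "6.9.1-generic" */
        perror("uname");
        return 2;
    }
    if (sscanf(u.release, "%d.%d", &run_major, &run_minor) != 2) {
        fprintf(stderr, "cannot parse kernel release '%s'\n", u.release);
        return 2;
    }
    printf("running kernel %d.%d, reference stable %d.%d\n",
           run_major, run_minor, ref_major, ref_minor);
    if (run_major < ref_major || (run_major == ref_major && run_minor < ref_minor))
        printf("this kernel is behind the reference release\n");
    else
        printf("this kernel is at or ahead of the reference release\n");
    return 0;
}

By the article's argument, a kernel reporting an older major.minor than the current stable or longterm series is carrying some number of known, already-fixed upstream bugs.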

Read more of this story at Slashdot.

OpenAI's Sam Altman Wants AI in the Hands of the People - and Universal Basic Compute?

OpenAI CEO Sam Altman gave an hour-long interview to the "All-In" podcast (hosted by Chamath Palihapitiya, Jason Calacanis, David Sacks and David Friedberg). And when asked about this summer's launch of the next version of ChatGPT, Altman said they hoped to "be thoughtful about how we do it, like we may release it in a different way than we've released previous models..." Altman: One of the things that we really want to do is figure out how to make more advanced technology available to free users too. I think that's a super-important part of our mission, and this idea that we build AI tools and make them super-widely available — free or, you know, not-that-expensive, whatever that is — so that people can use them to go kind of invent the future, rather than the magic AGI in the sky inventing the future, and showering it down upon us. That seems like a much better path. It seems like a more inspiring path. I also think it's where things are actually heading. So it makes me sad that we have not figured out how to make GPT4-level technology available to free users. It's something we really want to do... Q: It's just very expensive, I take it? Altman: It's very expensive. But Altman said later he's confident they'll be able to reduce cost. Altman: I don't know, like, when we get to intelligence too cheap to meter, and so fast that it feels instantaneous to us, and everything else, but I do believe we can get there for, you know, a pretty high level of intelligence. It's important to us, it's clearly important to users, and it'll unlock a lot of stuff. Altman also thinks there's "great roles for both" open-source and closed-source models, saying "We've open-sourced some stuff, we'll open-source more stuff in the future. "But really, our mission is to build toward AGI, and to figure out how to broadly distribute its benefits... " Altman even said later that "A huge part of what we try to do is put the technology in the hands of people..." Altman: The fact that we have so many people using a free version of ChatGPT that we don't — you know, we don't run ads on, we don't try to make money on it, we just put it out there because we want people to have these tools — I think has done a lot to provide a lot of value... But also to get the world really thoughtful about what's happening here. It feels to me like we just stumbled on a new fact of nature or science or whatever you want to call it... I am sure, like any other industry, I would expect there to be multiple approaches and different people like different ones. Later Altman said he was "super-excited" about the possibility of an AI tutor that could reinvent how people learn, and "doing faster and better scientific discovery... that will be a triumph." But at some point the discussion led him to where the power of AI intersects with the concept of a universal basic income: Altman: Giving people money is not going to go solve all the problems. It is certainly not going to make people happy. But it might solve some problems, and it might give people a better horizon with which to help themselves. Now that we see some of the ways that AI is developing, I wonder if there's better things to do than the traditional conceptualization of UBI. Like, I wonder — I wonder if the future looks something more like Universal Basic Compute than Universal Basic Income, and everybody gets like a slice of GPT-7's compute, and they can use it, they can re-sell it, they can donate it to somebody to use for cancer research.
But what you get is not dollars but this like slice — you own part of the productivity. Altman was also asked about the "ouster" period where he was briefly fired from OpenAI — to which he gave a careful response: Altman: I think there's always been culture clashes at — look, obviously not all of those board members are my favorite people in the world. But I have serious respect for the gravity with which they treat AGI and the importance of getting AI safety right. And even if I stringently disagree with their decision-making and actions, which I do, I have never once doubted their integrity or commitment to the sort of shared mission of safe and beneficial AGI... I think a lot of the world is, understandably, very afraid of AGI, or very afraid of even current AI, and very excited about it — and even more afraid, and even more excited about where it's going. And we wrestle with that, but I think it is unavoidable that this is going to happen. I also think it's going to be tremendously beneficial. But we do have to navigate how to get there in a reasonable way. And, like a lot of stuff is going to change. And change is pretty uncomfortable for people. So there's a lot of pieces that we've got to get right... I really care about AGI and think this is like the most interesting work in the world.

Read more of this story at Slashdot.

Will Smarter Cars Bring 'Optimized' Traffic Lights?

"Researchers are exploring ways to use features in modern cars, such as GPS, to make traffic safer and more efficient," reports the Associated Press. "Eventually, the upgrades could do away entirely with the red, yellow and green lights of today, ceding control to driverless cars." Among those reimagining traffic flows is a team at North Carolina State University led by Ali Hajbabaie, an associate engineering professor. Rather than doing away with today's traffic signals, Hajbabaie suggests adding a fourth light, perhaps a white one, to indicate when there are enough autonomous vehicles on the road to take charge and lead the way. "When we get to the intersection, we stop if it's red and we go if it's green," said Hajbabaie, whose team used model cars small enough to hold. "But if the white light is active, you just follow the vehicle in front of you." He points out that this approach could be years aways, since it requires self-driving capability in 40% to 50% of the cars on the road. But the article notes another approach which could happen sooner, talking to Henry Liu, a civil engineering professor who is leading ">a study through the University of Michigan: They conducted a pilot program in the Detroit suburb of Birmingham using insights from the speed and location data found in General Motors vehicles to alter the timing of that city's traffic lights. The researchers recently landed a U.S. Department of Transportation grant under the bipartisan infrastructure law to test how to make the changes in real time... Liu, who has been leading the Michigan research, said even with as little as 6% of the vehicles on Birmingham's streets connected to the GM system, they provide enough data to adjust the timing of the traffic lights to smooth the flow... "The beauty of this is you don't have to do anything to the infrastructure," Liu said. "The data is not coming from the infrastructure. It's coming from the car companies." Danielle Deneau, director of traffic safety at the Road Commission in Oakland County, Michigan, said the initial data in Birmingham only adjusted the timing of green lights by a few seconds, but it was still enough to reduce congestion. "Even bigger changes could be in store under the new grant-funded research, which would automate the traffic lights in a yet-to-be announced location in the county."

Read more of this story at Slashdot.

Australia Criticized For Ramping Up Gas Extraction Through '2050 and Beyond'

Slashdot reader sonlas shared this report from the BBC: Australia has announced it will ramp up its extraction and use of gas until "2050 and beyond", despite global calls to phase out fossil fuels. Prime Minister Anthony Albanese's government says the move is needed to shore up domestic energy supply while supporting a transition to net zero... Australia — one of the world's largest exporters of liquefied natural gas — has also said the policy is based on "its commitment to being a reliable trading partner". Released on Thursday, the strategy outlines the government's plans to work with industry and state leaders to increase both the production and exploration of the fossil fuel. The government will also continue to support the expansion of the country's existing gas projects, the largest of which are run by Chevron and Woodside Energy Group in Western Australia... The policy has sparked fierce backlash from environmental groups and critics — who say it puts the interest of powerful fossil fuel companies before people. "Fossil gas is not a transition fuel. It's one of the main contributors to global warming and has been the largest source of increases of CO2 [emissions] over the last decade," Prof Bill Hare, chief executive of Climate Analytics and author of numerous UN climate change reports, told the BBC... Successive Australian governments have touted gas as a key "bridging fuel", arguing that turning it off too soon could have "significant adverse impacts" on Australia's economy and energy needs. But Prof Hare and other scientists have warned that building a net zero policy around gas will "contribute to locking in 2.7-3C global warming, which will have catastrophic consequences".

Read more of this story at Slashdot.

Linux Kernel 6.9 Officially Released

"6.9 is now out," Linus Torvalds posted on the Linux kernel mailing list, "and last week has looked quite stable (and the whole release has felt pretty normal)." Phoronix writes that Linux 6.9 "has a number of exciting features and improvements for those habitually updating to the newest version." And Slashdot reader prisoninmate shared this report from 9to5Linux: Highlights of Linux kernel 6.9 include Rust support on AArch64 (ARM64) architectures, support for the Intel FRED (Flexible Return and Event Delivery) mechanism for improved low-level event delivery, support for AMD SNP (Secure Nested Paging) guests, and a new dm-vdo (virtual data optimizer) target in device mapper for inline deduplication, compression, zero-block elimination, and thin provisioning. Linux kernel 6.9 also supports the Named Address Spaces feature in GCC (GNU Compiler Collection) that allows the compiler to better optimize per-CPU data access, adds initial support for FUSE passthrough to allow the kernel to serve files from a user-space FUSE server directly, adds support for the Energy Model to be updated dynamically at run time, and introduces a new LPA2 mode for ARM 64-bit processors... Linux kernel 6.9 will be a short-lived branch supported for only a couple of months. It will be succeeded by Linux kernel 6.10, whose merge window has now been officially opened by Linus Torvalds. Linux kernel 6.10 is expected to be released in mid or late September 2024. "Rust language has been updated to version 1.76.0 in Linux 6.9," according to the article. And Linus Torvalds shared one more details on the Linux kernel mailing list. "I now have a more powerful arm64 machine (thanks to Ampere), so the last week I've been doing almost as many arm64 builds as I have x86-64, and that should obviously continue during the upcoming merge window too."

Read more of this story at Slashdot.

Reddit Grows, Seeks More AI Deals, Plans 'Award' Shops, and Gets Sued

Reddit reported its first results since going public in late March. Yahoo Finance reports: Daily active users increased 37% year over year to 82.7 million. Weekly active unique users rose 40% from the prior year. Total revenue improved 48% to $243 million, nearly doubling the growth rate from the prior quarter, due to strength in advertising. The company delivered adjusted operating profits of $10 million, versus a $50.2 million loss a year ago. [Reddit CEO Steve] Huffman declined to say when the company would be profitable on a net income basis, noting it's a focus for the management team. Other areas of focus include rolling out a new user interface this year, introducing shopping capabilities, and searching for another artificial intelligence content licensing deal like the one with Google.

Bloomberg notes that Reddit already "has signed licensing agreements worth $203 million in total, with terms ranging from two to three years. The company generated about $20 million from AI content deals last quarter, and expects to bring in more than $60 million by the end of the year."

And elsewhere Bloomberg writes that Reddit "plans to expand its revenue streams outside of advertising into what Huffman calls the 'user economy' — users making money from others on the platform... " In the coming months Reddit plans to launch new versions of awards, which are digital gifts users can give to each other, along with other products... Reddit also plans to continue striking data licensing deals with artificial intelligence companies, expanding into international markets and evaluating potential acquisition targets in areas such as search, he said.

Meanwhile, ZDNet notes that this week a Reddit announcement "introduced a new public content policy that lays out a framework for how partners and third parties can access user-posted content on its site." The post explains that more and more companies are using unsavory means to access user data in bulk, including Reddit posts. Once a company gets this data, there's no limit to what it can do with it. Reddit will continue to block "bad actors" that use unauthorized methods to get data, the company says, but it's taking additional steps to keep users safe from the site's partners.... Reddit still supports using its data for research: It's creating a new subreddit — r/reddit4researchers — to support these initiatives, and partnering with OpenMined to help improve research. Private data is, however, going to stay private. If a company wants to use Reddit data for commercial purposes, including advertising or training AI, it will have to pay. Reddit made this clear by saying, "If you're interested in using Reddit data to power, augment, or enhance your product or service for any commercial purposes, we require a contract." To be clear, Reddit is still selling users' data — it's just making sure that unscrupulous actors have a tougher time accessing that data for free and researchers have an easier time finding what they need.

And finally, there's some court action, according to the Register. Reddit "was sued by an unhappy advertiser who claims that internet giga-forum sold ads but provided no way to verify that real people were responsible for clicking on them." The complaint [PDF] was filed this week in a U.S. federal court in northern California on behalf of LevelFields, a Virginia-based investment research platform that relies on AI. It says the biz booked pay-per-click ads on the discussion site starting September 2022... That arrangement called for Reddit to use reasonable means to ensure that LevelFields' ads were delivered to and clicked on by actual people rather than bots and the like. But according to the complaint, Reddit broke that contract... LevelFields argues that Reddit is in a particularly good position to track click fraud because it's serving ads on its own site, as opposed to third-party properties where it may have less visibility into network traffic... Nonetheless, LevelFields' effort to obtain IP address data to verify the ads it was billed for went unfulfilled. The social media site "provided click logs without IP addresses," the complaint says. "Reddit represented that it was not able to provide IP addresses."

"The plaintiffs aspire to have their claim certified as a class action," the article adds — along with an interesting statistic. "According to Juniper Research, 22 percent of ad spending last year was lost to click fraud, amounting to $84 billion."

Read more of this story at Slashdot.

OpenAI's Sam Altman on iPhones, Music, Training Data, and Apple's Controversial iPad Ad

OpenAI CEO Sam Altman gave an hour-long interview to the "All-In" podcast (hosted by Chamath Palihapitiya, Jason Calacanis, David Sacks and David Friedberg). And speaking on technology's advance, Altman said "Phones are unbelievably good.... I personally think the iPhone is like the greatest piece of technology humanity has ever made. It's really a wonderful product."

Q: What comes after it?

Altman: I don't know. I mean, that was what I was saying. It's so good that to get beyond it, I think the bar is quite high.

Q: You've been working with Jony Ive on something, right?

Altman: We've been discussing ideas, but I don't — like, if I knew...

Altman said later he thought voice interaction "feels like a different way to use a computer." But the conversation turned to Apple in another way. It happened in a larger conversation where Altman said OpenAI has "currently made the decision not to do music, and partly because exactly these questions of where you draw the lines..."

Altman: Even the world in which — if we went and, let's say we paid 10,000 musicians to create a bunch of music, just to make a great training set, where the music model could learn everything about song structure and what makes a good, catchy beat and everything else, and only trained on that — let's say we could still make a great music model, which maybe we could. I was posing that as a thought experiment to musicians, and they were like, "Well, I can't object to that on any principled basis at that point — and yet there's still something I don't like about it." Now, that's not a reason not to do it, um, necessarily, but it is — did you see that ad that Apple put out... of like squishing all of human creativity down into one really thin iPad...? There's something about — I'm obviously hugely positive on AI — but there is something that I think is beautiful about human creativity and human artistic expression. And, you know, for an AI that just does better science, like, "Great. Bring that on." But an AI that is going to do this deeply beautiful human creative expression? I think we should figure out — it's going to happen. It's going to be a tool that will lead us to greater creative heights. But I think we should figure out how to do it in a way that preserves the spirit of what we all care about here.

What about creators whose copyrighted materials are used for training data? Altman had a ready answer — but also some predictions for the future. "On fair use, I think we have a very reasonable position under the current law. But I think AI is so different that for things like art, we'll need to think about them in different ways..."

Altman: I think the conversation has been historically very caught up on training data, but it will increasingly become more about what happens at inference time, as training data becomes less valuable and what the system does accessing information in context, in real-time... what happens at inference time will become more debated, and what the new economic model is there.

Altman gave the example of an AI which was never trained on any Taylor Swift songs — but could still respond to a prompt requesting a song in her style.

Altman: And then the question is, should that model, even if it were never trained on any Taylor Swift song whatsoever, be allowed to do that? And if so, how should Taylor get paid? So I think there's an opt-in, opt-out in that case, first of all — and then there's an economic model.

Altman also wondered if there are lessons in the history and economics of music sampling...

Read more of this story at Slashdot.

Webb Telescope Finds a (Hot) Earth-Sized Planet With an Atmosphere

An anonymous reader shared this report from the Associated Press: A thick atmosphere has been detected around a planet that's twice as big as Earth in a nearby solar system, researchers reported Wednesday. The so-called super Earth — known as 55 Cancri e — is among the few rocky planets outside our solar system with a significant atmosphere, wrapped in a blanket of carbon dioxide and carbon monoxide. The exact amounts are unclear. Earth's atmosphere is a blend of nitrogen, oxygen, argon and other gases. "It's probably the firmest evidence yet that this planet has an atmosphere," said Ian Crossfield, an astronomer at the University of Kansas who studies exoplanets and was not involved with the research. The research was published in the journal Nature. "The boiling temperatures on this planet — which can reach as hot as 4,200 degrees Fahrenheit (2,300 degrees Celsius) — mean that it is unlikely to host life," the article points out. "Instead, scientists say the discovery is a promising sign that other such rocky planets with thick atmospheres could exist that may be more hospitable."

Read more of this story at Slashdot.

Could Atomically Thin Layers Bring A 19x Energy Jump In Battery Capacitors?

Researchers believe they've discovered a new material structure that can improve the energy storage of capacitors. The structure allows for storage while improving the efficiency of ultrafast charging and discharging. The new find needs optimization but has the potential to help power electric vehicles.

An anonymous reader shared this report from Popular Mechanics: In a study published in Science, lead author Sang-Hoon Bae, an assistant professor of mechanical engineering and materials science, demonstrates a novel heterostructure that curbs energy loss, enabling capacitors to store more energy and charge rapidly without sacrificing durability... Within capacitors, ferroelectric materials offer high maximum polarization. That's useful for ultra-fast charging and discharging, but it can limit the effectiveness of energy storage or the "relaxation time" of a conductor. "This precise control over relaxation time holds promise for a wide array of applications and has the potential to accelerate the development of highly efficient energy storage systems," the study authors write.

Bae makes the change — one he unearthed while working on something completely different — by sandwiching 2D and 3D materials in atomically thin layers, using chemical and nonchemical bonds between each layer. He says a thin 3D core inserts between two outer 2D layers to produce a stack that's only 30 nanometers thick, about 1/10th that of an average virus particle... The sandwich structure isn't quite fully conductive or nonconductive. This semiconducting material, then, allows the energy storage, with a density up to 19 times higher than commercially available ferroelectric capacitors, while still achieving 90 percent efficiency — also better than what's currently available.

Thanks to long-time Slashdot reader schwit1 for sharing the article.

Read more of this story at Slashdot.

Photographer Sets World Record for Fastest Drone Flight at 298 MPH

An anonymous reader shared this report from PetaPixel: A photographer and content creator has set the world record for the fastest drone flight after his custom-made aircraft achieved a staggering 298.47 miles per hour (480.2 kilometers per hour). Guinness confirmed the record noting that Luke Maximo Bell and his father Mike achieved the "fastest ground speed by a battery-powered remote-controlled (RC) quadcopter." Luke, who has previously turned his GoPro into a tennis ball, describes it as the most "frustrating and difficult project" he has ever worked on after months of working on prototypes that frequently caught fire. From the very first battery tests for the drone that Luke calls Peregrine 2, there were small fires as it struggled to cope with the massive amount of current which caused it to heat up to over 266 degrees Fahrenheit (130 degrees Celsius). The motor wires also burst into flames during full load testing causing Luke and Mike to use thicker ones so they didn't fail... After 3D-printing the final model and assembling all the parts, Luke took it for a maiden flight which immediately resulted in yet another fire. This setback made Bell almost quit the project but he decided to remake all the parts and try again — which also ended in fire. This second catastrophe prompted Luke and his Dad to "completely redesign the whole drone body." It meant weeks of work as the new prototype was once again tested, 3D-printed, and bolted together.

Read more of this story at Slashdot.

Is Dark Matter's Main Rival Theory Dead?

"One of the biggest mysteries in astrophysics today is that the forces in galaxies do not seem to add up," write two U.K. researchers in the Conversation: Galaxies rotate much faster than predicted by applying Newton's law of gravity to their visible matter, despite those laws working well everywhere in the Solar System. To prevent galaxies from flying apart, some additional gravity is needed. This is why the idea of an invisible substance called dark matter was first proposed. But nobody has ever seen the stuff. And there are no particles in the hugely successful Standard Model of particle physics that could be the dark matter — it must be something quite exotic. This has led to the rival idea that the galactic discrepancies are caused instead by a breakdown of Newton's laws. The most successful such idea is known as Milgromian dynamics or Mond [also known as modified Newtonian dynamics], proposed by Israeli physicist Mordehai Milgrom in 1982. But our recent research shows this theory is in trouble... Due to a quirk of Mond, the gravity from the rest of our galaxy should cause Saturn's orbit to deviate from the Newtonian expectation in a subtle way. This can be tested by timing radio pulses between Earth and Cassini. Since Cassini was orbiting Saturn, this helped to measure the Earth-Saturn distance and allowed us to precisely track Saturn's orbit. But Cassini did not find any anomaly of the kind expected in Mond. Newton still works well for Saturn... Another test is provided by wide binary stars — two stars that orbit a shared centre several thousand AU apart. Mond predicted that such stars should orbit around each other 20% faster than expected with Newton's laws. But one of us, Indranil Banik, recently led a very detailed study that rules out this prediction. The chance of Mond being right given these results is the same as a fair coin landing heads up 190 times in a row. Results from yet another team show that Mond also fails to explain small bodies in the distant outer Solar System... The standard dark matter model of cosmology isn't perfect, however. There are things it struggles to explain, from the universe's expansion rate to giant cosmic structures. So we may not yet have the perfect model. It seems dark matter is here to stay, but its nature may be different to what the Standard Model suggests. Or gravity may indeed be stronger than we think — but on very large scales only. "Ultimately though, Mond, as presently formulated, cannot be considered a viable alternative to dark matter any more," the researchers conclude. "We may not like it, but the dark side still holds sway."

Read more of this story at Slashdot.

Father of SQL Says Yes to NoSQL

An anonymous reader shared this report from the Register: The co-author of SQL, the standardized query language for relational databases, has come out in support of the NoSQL database movement that seeks to escape the tabular confines of the RDBMS.

Speaking to The Register as SQL marks its 50th birthday, Donald Chamberlin, who first proposed the language with IBM colleague Raymond Boyce in a 1974 paper [PDF], explains that NoSQL databases and their query languages could help perform the tasks relational systems were never designed for. "The world doesn't stay the same thing, especially in computer science," he says. "It's a very fast, evolving, industry. New requirements are coming along and technology has to change to meet them, I think that's what's happening. The NoSQL movement is motivated by new kinds of applications, particularly web applications, that need massive scalability and high performance. Relational databases were developed in an earlier generation when scalability and performance weren't quite as important. To get the scalability and performance that you need for modern apps, many systems are relaxing some of the constraints of the relational data model." [...]

A long-time IBMer, Chamberlin is now semi-retired, but finds time to fulfill a role as a technical advisor for NoSQL company Couchbase. In the role, he has become an advocate for a new query language designed to overcome the "impedance mismatch" between data structures in the application language and a database, he says. UC San Diego professor Yannis Papakonstantinou has proposed SQL++ to solve this problem, with a view to addressing the impedance mismatch between heavily object-based JavaScript, the core language for web development, and the assumed relational approach embedded in SQL. Like C++, SQL++ is designed as a compatible extension of an earlier language, SQL, but is touted as better able to handle the JSON file format inherent in JavaScript. Couchbase and AWS have adopted the language, although the cloud giant calls it PartiQL.

At the end of the interview, Chamberlin adds that "I don't think SQL is going to go away. A large part of the world's business data is encoded in SQL, and data is very sticky. Once you've got your database, you're going to leave it there. Also, relational systems do a very good job of what they were designed to do...

"[I]f you're a startup company that wants to sell shoes on the web or something, you're going to need a database, and one of those SQL implementations will do the job for free. I think relational databases and the SQL language will be with us for a long time."
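To make the "impedance mismatch" concrete: application data is typically nested JSON, while classic SQL assumes flat rows. Below is a minimal, hypothetical sketch in Python of what a SQL++-style UNNEST over nested documents produces; the orders documents and the query text are invented for illustration and do not show Couchbase's actual API.

    # Minimal sketch (not Couchbase's API): the "impedance mismatch" in miniature.
    # Application data is nested JSON; classic SQL wants flat rows. SQL++-style
    # UNNEST flattens the nesting inside the query; plain Python mimics the result.

    orders = [  # hypothetical documents as they might live in the application
        {"id": 1, "customer": "Ada",  "items": [{"sku": "A1", "qty": 2},
                                                {"sku": "B7", "qty": 1}]},
        {"id": 2, "customer": "Alan", "items": [{"sku": "A1", "qty": 5}]},
    ]

    # Roughly what a SQL++-style statement such as
    #   SELECT o.customer, i.sku, i.qty FROM orders AS o UNNEST o.items AS i
    # would return: one flat row per nested item.
    rows = [
        {"customer": o["customer"], "sku": i["sku"], "qty": i["qty"]}
        for o in orders
        for i in o["items"]
    ]

    for row in rows:
        print(row)

The point of SQL++ (and AWS's PartiQL) is that this flattening happens in the query language itself, so the application doesn't have to write glue code like the list comprehension above by hand.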

Read more of this story at Slashdot.

AMD Core Performance Boost For Linux Getting Per-CPU Core Controls

An anonymous reader shared this report from Phoronix: For the past several months AMD Linux engineers have been working on AMD Core Performance Boost support for their P-State CPU frequency scaling driver. The ninth iteration of these patches was posted on Monday, and besides the global enabling/disabling support for Core Performance Boost, it's now possible to selectively toggle the feature on a per-CPU core basis... The new interface is under /sys/devices/system/cpu/cpuX/cpufreq/amd_pstate_boost_cpb for each CPU core. Thus users can tune whether particular CPU cores are boosted above the base frequency.
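As a rough sketch of how that knob would be used, assuming a kernel carrying these patches and the sysfs path exactly as quoted above (taken from the summary, not independently verified against the final merged interface), a root user could read and toggle boost per core like this; the helper names below are hypothetical:

    # Sketch only: assumes a patched kernel exposing the per-core boost file at
    # the path quoted in the article. Writing the file requires root.
    from pathlib import Path

    def boost_path(cpu: int) -> Path:
        # Hypothetical helper: per-core Core Performance Boost control file.
        return Path(f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/amd_pstate_boost_cpb")

    def get_boost(cpu: int) -> bool:
        return boost_path(cpu).read_text().strip() == "1"

    def set_boost(cpu: int, enabled: bool) -> None:
        boost_path(cpu).write_text("1" if enabled else "0")

    if __name__ == "__main__":
        # Example: disable boost on core 0 only, leaving the other cores alone.
        print("cpu0 boost before:", get_boost(0))
        set_boost(0, False)
        print("cpu0 boost after:", get_boost(0))

A plain shell write to the same file would have the same effect; the sketch just wraps the reads and writes so individual cores can be scripted.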

Read more of this story at Slashdot.
