Here's the next batch of Xbox Game Pass games for August

Microsoft has confirmed the next batch of titles headed to Xbox Game Pass for the latter half of August: Atlas Fallen, Core Keeper, and Star Trucker.

Then there's that little-known game called Call of Duty: Black Ops 6. You'll be able to participate in the early access open beta when it kicks off for Xbox Game Pass subscribers on 30th August 2024, with pre-downloading available from 28th August.

"Sure, it takes itself way too seriously and the loot chase can get monotonous, but everything outside of the monster-slaying is just an excuse to get right back to the monster-slaying. Or make the monster-slaying cooler with upgrades," we said in our Atlas Fallen review.

Read more

You'll be able to watch The Borderlands film at home very soon, it seems

The Borderlands film adaptation will seemingly be available to watch digitally from the comfort of your home very soon.

While nothing has been officially announced by Lionsgate itself, multiple sources such as The Hollywood Handle, DVD Release Dates, ScreenTime and When to Stream all have the Cate Blanchett-fronted film listed as being made available digitally from as soon as 30th August.

That's less than a month after its cinematic release, with the film only debuting on 9th August.

Read more

Call of Duty might have seen its last 200GB+ install size as Activision announces optimisation plans

Call of Duty has become an absolute hard drive hog in recent years, with 2023's entry managing to consume over 200GB of storage in some cases. That all might be about to change, however, as Activision has announced major changes to the way it'll be handling installs with this year's Black Ops 6, promising "smaller and more customised downloads" as a result.

Activision shared the news in a post on its Call of Duty blog, explaining its optimisation work will begin with a revamp of "the experience formerly known as Call of Duty HQ". This revamp is set to roll out over the course of several updates ahead of Black Ops 6's October launch, and promises to introduce a streamlined interface, direct access to games, more control over downloads, and expanded texture streaming technology to reduce file sizes.

A first update to reorganise game content arrives on 21st August. Then, following Black Ops 6's open beta on 30th August, a new user interface and other "remaining updates" are scheduled for mid-October. After these "larger initial updates", Activision says future Call of Duty downloads will decrease in size and existing files will take up less space on players' devices.

Read more

Doom and Doom 2: are Nightdive's latest remasters the definitive editions?

For many, scaling Mount Everest has stood as the ultimate challenge of one's strength and endurance. An achievement of a lifetime. For long-time Doom players, however, there is an equivalent: NUTS.WAD. Legend has it that NUTS.WAD descended upon Doom players in the year 2001: a map from the future that drops players into a single arena with more than 10,000 enemies and a handful of power-ups. And now - for the first time ever - it's playable on a games console.

I'm half-joking, of course, but the ability to load in any Doom mod is just one great feature found in the latest version of Doom and Doom 2. Helmed by Nightdive in cooperation with id Software and Machine Games, this new version is worth looking at as it is the most feature-rich, best-performing version of Doom on consoles. It's available on PC, PS5, Xbox Series consoles, Switch and even last-gen PS4 and Xbox hardware. The game has been transitioned over to Nightdive's KEX engine and brings with it a vast array of enhancements - 120fps support on consoles, 16-player multiplayer including co-op, and a new soundtrack from the legendary Andrew Hulshult.

But it was the mod support that was my first destination and with it, the chance to see how Nightdive's work would cope with the NUTS.WAD challenge. This pushes beyond the limits of what the Doom engine was intended to handle, and now that we can test it on console, the results are interesting. Before we go on, it's worth stressing that all current-gen machines can handle 4K gaming at 120fps - and yes, that includes Series S. The engine is optimised and fast - all the included content and every map I tested runs like greased lightning. I wanted to raise this caveat because the challenge of NUTS.WAD is so extreme and cruel that I don't want people to get the wrong idea. The fact that you can run NUTS.WAD at all is cause for celebration!

Read more

Treyarch shows off Call of Duty: Black Ops 6's Terminus Island Zombies map in new trailer

Activision sure is dragging out its reveal of Call of Duty: Black Ops 6's Zombies mode. But if you're a sucker for the undead stuff, there's more where last week's trailer and gameplay reveal came from, with developer Treyarch having now offered a tour of Zombies' Terminus map.

Terminus Island is one of two maps that'll be available at launch (the other being the West Virginian town of Liberty Falls), and Treyarch calls it "one of the largest round-based Zombies maps ever created". Black Ops 6's Zombies mode takes place in the early 90s - five years after the events of Black Ops Cold War's Zombies mode - and Terminus Island serves as a prison for some familiar Requiem faces. After their liberation at the start of the story, players can explore the prison itself before moving out to investigate its tropical island surroundings.

There's a secret research facility specialising in "weird science" (in case you were wondering where the zombies might spring up from this time), as well as the ocean, and assorted smaller islands - all of which players will visit as part of Terminus' main quest. It's described as a "living world" full of scripted encounters, ranging from zombies smashing out of vats and prison guards still trying to control the undead threat, to less fortunate souls being chomped on.

Read more

Would the Borderlands movie have been better with Uwe Boll at the helm? Well, he seems to think so

Eli Roth's recent Borderlands film adaptation hasn't exactly had the best debut. Ahead of release, critics far and wide shared disparaging reviews of the adaptation, with phrases such as "disaster" and "lifeless, unfunny, and visually repulsive dud" being bandied about.

And then the film was released and, well, things didn't really get much better. The Cate Blanchett-fronted adaptation generated just $4m on its opening day, a disappointing figure that looks set to see the film become a commercial flop.

In fact, Borderlands has been such a misfire that now even filmmaker Uwe Boll - who previously directed that Alone in the Dark adaptation, which saw him nominated for not one but two 'Worst Director' awards (one of which he won) - is taking a swipe at it.

Read more

Anova will charge customers to use its sous vide app, because everything must be a subscription

Anova will soon start charging customers a monthly or yearly fee to use the “smart” features of its well-regarded sous vide cooking appliances. The subscription costs kick in on August 21 and apply to the proprietary app, which controls wireless functionality. In other words, you won’t be able to remotely control the device without paying the piper.

The subscription price isn't exactly exorbitant, at $2 per month or $10 per year, but it's the principle of the thing. In the old days, we'd buy an object and then use that object. End of story. Now everything's a dang subscription. Yes, I wrote those previous sentences in a cartoonish old man's voice, but the point stands.

Anova says that the subscription fee will only apply to new users. If you already own an Anova cooker and use the app, the company will grandfather you in for free. However, it’s now mandatory to create an account. Before this change, it was optional. If you use an Anova cooker in guest mode, get that account made pronto. 

An update for our app users —> https://t.co/vg6NOEDubE

— Anova (@AnovaCulinary) August 14, 2024

These cookers can be used without the app, but that turns them into bare-bones sous vide machines (not that there's anything wrong with that). The app allows for remote adjustments, access to status updates, the perusal of recipes and more.

“Our community has literally cooked 100s of millions of times with our app. Unfortunately, each connected cook costs us money,” company CEO Stephen Svajian wrote in a blog post. Svajian didn't go into detail as to how using simple Bluetooth features costs the company money, but whatever. 

The bad news doesn’t stop there. Anova is stripping its first-gen products of all smart features. This applies to the Bluetooth and Bluetooth + Wi-Fi models of the original Anova Precision Cooker. Not even a subscription will save these devices, though the change doesn’t go into effect until 2025.

Instagram's experimental profile grid has rectangular images instead of squares

Instagram is testing a new profile grid layout that features rectangular images instead of the squares you're used to. In an Instagram story, Adam Mosseri has revealed that the app is testing a vertical grid for users' profiles. He explained that the original square grid was designed back in the day when the app only allowed users to upload square photos. Those days are long gone, and the vast majority of Instagram uploads are apparently vertical, specifically 4 x 3 images and 9 x 16 videos. He described cropping those uploads down to squares as "pretty brutal."

When you click on Instagram's video tab, you'll already see a rectangular grid, so the experimental layout won't look terribly unfamiliar. In fact, the test profile looks exactly the same, based on a screenshot that a user posted on Threads, except the grid includes photo posts and not just videos. A spokesperson told The Verge that the test has only rolled out to a small number of users and that the Instagram team will listen to feedback before expanding the redesigned grid's availability. 

Based on an old post by reverse engineer Alessandro Paluzzi, the app has been working on the new rectangular grid layout since at least 2022. It looks like the test is making its way to more users — and it seems like not everyone's happy about it. Mosseri posted his Story in response to a comment submitted to his "Ask Me Anything" session, pleading for the app not to kill the old layout.

Head in the clouds, boots on the ground.

Self-hosted infrastructure is the first step toward voluntary apotheosis.

–Unknown

When people think of The Cloud(tm), they think of ubiquitous computing. Whatever you need, whenever you need it, it's there from the convenience of your mobile, from search engines to storage to chat.  However, as the latest Amazon and Cloudflare outages have demonstrated, all it takes is a single glitch to knock out half the Internet as we know it.

This is, as they say, utter bollocks.  Much of the modern world spent a perfectly good day that could have been spent procrastinating, shitposting, and occasionally doing something productive bereft of Slack, Twitter, Autodesk, Roku, and phone service through Vonage.  While thinking about this fragile state of affairs in the shower this morning I realized that, for the somewhat technically inclined and their respective cohorts, there are ways to mitigate the risks of letting other people run stuff you need every day.  Let us consider the humble single board computer: computing devices the size of two decks of cards at most, a €1 coin at the very least.  While this probably won't help you keep earning a paycheque, it would help you worry less about the next time Amazon decides to fall on its face.

Something I never really understood was the phenomenon of people recreating the data center in miniature with stacks of Raspberry Pis shaped like a scaled down telecom rack.  There’s nothing wrong with that – it’s as valid a way of building stuff out as anything else.  However… do you really have to go this route?  Single board computers are small enough that they can be placed anywhere and everywhere in such a manner that they’re practically invisible.  At any time you could be surrounded by a thin fog of computing power, doing things you care about, completely out of sight and out of mind, independent of AWS, Azure, or any other provider’s health.  That fog could be as large or as small as you need, expanded only as needs dictate, cheap to upgrade or replace, and configured to automatically update and upgrade itself to minimize management.  Some visionaries imagine a world in which any random thing you lay eyes upon may have enough inexpensive smarts built in to crunch numbers – why not take a step in that direction?

A relatively powerful wireless router running OpenWRT, for maximum customizability and stability, might be a good place to start.  Speaking only as someone who's used it for a couple of years, OpenWRT stuff is largely "set it and forget it," with only an up-front investment of time measured in an afternoon.  Additionally, using it as your home wireless access point in no way compromises getting stuff done every day.  If nothing else, it might be more efficient than the crappy wireless access point-cum-modem that your ISP makes you use.

Now onto the flesh and bones of your grand design – where are you going to put however many miniature machines you need?  Think back to how you used to hide your contraband.  The point of this exercise isn't to have lots of blinky lights all over the place (nerd cred aside), the point is to have almost ubiquitous computing power that goes without notice unless the power goes out (in which case you're up a creek, no matter what).  Consider the possibility of having one or two SBCs in a hollowed-out book, either hand-made or store-bought ("book safes"), with holes drilled through the normally not-visible page side of the book to run power lines to a discreet power outlet.  Think about using a kitschy ceramic skull on your desk containing a Raspberry Pi Zero W, a miniature USB hub, and a couple of flash drives.  How about a stick PC stuck into the same coffee cup you keep your pens and pencils in?

Maybe a time will come when you need to think bigger.  Let's say that you want to spread your processing power out a bit so it's not all in the same place.  Sure, you could put a machine or two at a friend's house, your parents' place, or what have you… but why not think a little bigger?  Consider a RasPi with a USB cellular modem, a pre-paid SIM card, and SSH over Tor (to commit the odd bit of system administration) hanging out on the back of your desk at the office (remember those?) or stashed behind the counter of a friendly coffee shop.
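
If you're wondering what that odd bit of remote administration looks like in practice, here's a minimal sketch of SSH over Tor in Python, using the paramiko and PySocks libraries.  The onion address, username, and key path are placeholders for your own setup, and it assumes a Tor client is listening on the usual SOCKS port (9050) on the machine you're connecting from.

    import paramiko
    import socks  # PySocks

    # Placeholders: your box's hidden service address, login, and key.
    ONION = "youronionaddressgoeshere.onion"
    USER = "pi"
    KEY = "/home/you/.ssh/id_ed25519"

    # Open a socket through Tor's SOCKS5 proxy instead of connecting
    # directly; name resolution is handed off to the proxy, as .onion
    # addresses require.
    sock = socks.socksocket()
    sock.set_proxy(socks.SOCKS5, "127.0.0.1", 9050)
    sock.connect((ONION, 22))

    # Hand the pre-connected socket to paramiko and run a command.
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(ONION, username=USER, key_filename=KEY, sock=sock)
    _, stdout, _ = client.exec_command("uptime")
    print(stdout.read().decode())
    client.close()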

Which moves us right along to the question, what do you actually run on a personal cluster?  Normally, people build personal clusters to experiment with containerization technologies like Docker (for encapsulating applications in such a way that they “think” they’re all by their lonesome on a server) and Kubernetes or the cut-down k3s (for doing most of the sysadmin work of juggling those containers).  Usually a web panel of some kind is used to manipulate the cluster.  This is quite handy due to the myriad of self-hosted applications which happen to be Dockerized.  The reason for such a software architecture is that the user can specify that a containerized application should be started with a couple of taps on their mobile’s screen and Kubernetes looks at its cluster, figures out which node has enough memory and disk space available to install the application and its dependencies, and does so without further user intervention.  Unfortunately, this comes at the expense of having to do just about everything in a Dockerized or Kubernetes-ized way.  In a containerized environment things don’t like to play nicely with traditionally installed stuff, or at least not without a lot of head scratching, swearing, and tinkering.
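
To give you a taste of what those couple of taps boil down to under the hood, here's a minimal sketch using Docker's Python SDK to stand up one containerized application on one node.  The image, container name, port mapping, and volume are purely illustrative; swap in whatever your fog is actually supposed to be running.

    import docker

    # Talk to the local Docker daemon; assumes Docker and the docker
    # Python SDK are installed on this node.
    client = docker.from_env()

    # Pull and run a containerized app.  Everything named here is
    # illustrative, not a recommendation.
    container = client.containers.run(
        "nextcloud:latest",
        detach=True,
        name="family-cloud",
        ports={"80/tcp": 8080},  # host port 8080 -> container port 80
        volumes={"nextcloud-data": {"bind": "/var/www/html", "mode": "rw"}},
        restart_policy={"Name": "unless-stopped"},  # come back after reboots
    )
    print(container.name, container.status)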

We can think much bigger, though.  Say we're setting up a virtual space for your family.  Or your affinity group.  Or a non-profit organization.  There are some excellent all-in-one systems out there, like Yunohost and Sandstorm, which offer supported applications galore that can be yours for a couple of mouse clicks (Sandstorm is only available for the x86-64 platform right now, though there's nothing that says that you can't add the odd NUC or VPS to your exocortex).

How about easy-to-use storage?  It's always good to have someplace to keep your data as well as back it up (you DO make backups, don't you?).  You could do a lot worse than a handful of 256 GB or 512 GB flash drives plugged into your fog and tastefully scattered around the house.  To access them you can let whatever applications you're running do their thing, or you can stand up a copy of MinIO on each server (also inside of Docker containers) which, as far as anything you care about will be concerned, is just Amazon's S3 with a funny hostname.
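
In case you're curious what "S3 with a funny hostname" means in practice, here's a minimal sketch using the official minio Python client.  The hostname, credentials, and bucket name are placeholders for whatever your fog happens to look like.

    from minio import Minio

    # Placeholders: point this at one of the SBCs running MinIO.
    client = Minio(
        "ceramic-skull.local:9000",
        access_key="YOUR_ACCESS_KEY",
        secret_key="YOUR_SECRET_KEY",
        secure=False,  # plain HTTP on the home network
    )

    # Make sure the bucket exists, then upload a file, S3-style.
    if not client.bucket_exists("backups"):
        client.make_bucket("backups")
    client.fput_object("backups", "notes-2024-08.tar.gz",
                       "/tmp/notes-2024-08.tar.gz")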

Pontification about where and what safely behind us, the question that now arises is, how do we turn all this stuff into a fog?  If you have machines all over the place, and some of them aren't at home (which means that we can't necessarily poke holes in any intervening firewalls), how can the different parts of the cluster talk to each other?  Unsurprisingly, such a solution already exists in the form of Nebula, which Slack invented to do exactly what we need.  There's a little up-front configuration that has to be done, but once the certificates are generated and Nebula is installed on every system you don't have to mess with it anymore unless there are additional services that you want to expose.  It helps to think of it as a cut-down VPN which requires much less fighting and swearing but gives you much more "it just works(tm)" than a lot of things.  Sandstorm on a VPS?  MinIO running on a deck of cards at your friendly local gaming shop?  Nebula can make them look like they're right next to one another, no muss, no fuss.  Sure, you could use a Tor hidden service to accomplish the same thing (and you really should set up one or two, if only so you can log in remotely) but Nebula is a much better solution in this regard.

Setting up this kind of infrastructure might take anywhere from a couple of days to a couple of weeks, depending on your level of skill, ability to travel, availability of equipment, and relative accessibility of where you want to stash your stuff.  Of course, not everybody needs such a thing.  Some people are more than happy to keep using Google applications or Microsoft 365, or what have you.  While some may disagree with or distrust these services (with good reason), ultimately people use what they use because it works for them.  A certain amount of determination is required to de-FAANGify one's life, and not everyone has that need or use case.  Still, it is my hope that a few people out there give it serious consideration.

By day the hacker known as the Doctor does information security for a large software-as-a-service company (whom he does NOT speak for), with a background in penetration testing, reverse engineering, and systems architecture. By night the Doctor doffs his disguise, revealing his true nature as a transhumanist cyborg (at last measurement, comprised of approximately 30% hardware and software augmentations), technomancer (originally trained in the chaos sorta-tradition), cheerful nihilist, lovable eccentric, and open source programmer. His primary organic terminal has been observed presenting at a number of hacker conventions over the years on a variety of topics. Other semi-autonomous agents of his selfhood may or may not have manifested at other fora.  The Doctor is a recognized ambassador of the nation of Magonia, with all of the rights and privileges thereof.

Secret Level Anthology Show Includes an Armored Core Short

On August 20, 2024, FromSoftware announced that the Prime Video anthology show Secret Level will feature an all-new story set in one of the universes of Armored Core. The series, being developed by the same creative team as Love, Death & Robots, will feature 15 shorts from various video games.

Armored Core joins many other famous games appearing in Secret Level installments, including Mega Man, Dungeons & Dragons, Warhammer 40,000 and Pac-Man. The tweet announcing this FromSoftware game's involvement also showed off a mech racing across an icy wasteland and a picture of a human character who resembles actor Keanu Reeves.

You can see the official tweet here:

https://twitter.com/armoredcore/status/1825978617781723177

This isn't the first time the Armored Core property has gone beyond the video games. Numerous model kits have been made of famous Cores from the series. A novel called Armored Core: Brave New World appeared in Japan.

Blur Studio, the production company behind Love, Death & Robots and Secret Level, previously did work on films like James Cameron's Avatar, created trailers for Batman: Arkham City and Batman: Arkham Knight, and redid the cutscenes for the Master Chief Collection version of Halo 2.

Secret Level will be streaming on Amazon Prime Video on December 10, 2024, and the Armored Core episode will be a part of the show.

Borderlands 4 announced with 2025 release

Publisher 2K and developer Gearbox Software have announced Borderlands 4 for multiple platforms, though a release is a ways off. Borderlands 4 is in development for Windows PC (via Steam), Xbox Series X|S, and PlayStation 5 with a release set for 2025. Here’s a brief blurb on the game, plus a teaser trailer: The definitive […]

Source

Here's a full Call Of Duty: Black Ops 6 campaign level on sporadic fast-forward

Activision have just screened an abbreviated video of Call Of Duty: Black Ops 6 campaign level “Most Wanted”, in which you and a buddy infiltrate a US fundraiser to save returning character Adler from Bad Dudes. Good news, people who like Call Of Duty: this looks like Call Of Duty. It’s got a homing knife, an exploding remote-controlled car and a big chap in full face armour with an overcompensatory minigun. Catch the full video below.

Read more

We now know that Borderlands 4 is a Borderlands game with 4 at the end, coming 2025

The trailers of Gamescom 2024 want to be here with you, explained Keighley in his opening monologue. They are not directed at you. They are with you. That's lovely. Thanks videogames. And if - as we surely must - we judge the eagerness of these adverts to spend time with us by how quickly they appear on our screens, then FPS sequel Borderlands 4 surely loves us the most.

Do we love it back? Who knows. But it does exist, and it isn’t using a subtitle. That’s got to count for something, right?

Read more

Activision are finally cutting down Call Of Duty's horrendous install sizes for Black Ops 6's release

For years, our PC storage has wobbled and buckled beneath the tyranny of gigantic Call Of Duty installs. Like 13th century peasants straining to convey huge, teetering loads of freshly quarried LMGs, our SSDs cry out for justice. Perhaps scenting imminent rebellion and a mass audience desertion to low-poly shooters with more civilised file sizes, Activision have relented. Future installations of the much-padded FPS will be "smaller and more customised", though in a last cruel stroke of villainy, they want you to download a large update to prepare the ground.

Read more

Doom modders are annoyed at the "chum-bucket" of wrongly credited mods in the latest Doom remaster

Last week, Bethesda released a remastered edition of Doom and Doom II on Steam, with lots of extra episodes and improvements. One of these new features is a built-in browser for mods, and support for many existing mods that previously required a different version of the game. Basically, lots of good fan-made mods are now playable on the Steam version of ye olde Doom. That's neat! Ah, but there is some demon excrement on the health pack, so to speak. The mod browser lacks moderation and lets people upload the work of others with their own name pinned as the author. That's prompted one level designer to call it "a massive breach of trust and violation of norms the Doom community has done its best to hold to for those 30 years."

Read more

Dungeonette The New Adventure - A new game is coming to the Amiga AGA / CD32 and it still looks fab!

While the last few years have seen a plethora of new Amiga games released, few if any have come from established industry veterans who created games back in the '90s. But today's news looks to be something special for the Commodore Amiga, as long-time Amiga head Adrian Cummings - creator of original titles such as Cyberpunks, Tin Toy, Doodlebug and more - has announced a video update for what looks

METRO SIEGE - A technical preview lets you play some of the levels that are in development for the Amiga (1MB)!

What an incredible amount of Amiga news we've had this month, from the in-development Galaga game to the latest releases of Ninja Carnage and our personal favourite, Shift. Well, here we are with another Amiga news story, as we've just been told through Facebook that BitbeamCannon, Enable Software and JOHN TSAKIRIS have made available 3 playable levels, with the choice of two characters

Call of Duty: Black Ops 6 images leak online, including multiplayer maps and menu screens

Images of Call of Duty: Black Ops 6 have leaked online, including multiplayer maps, menu screens, and perks.

Within hours, the screenshots were hit with copyright claims from Activision, adding credence to claims that they come from an in-development build. However, this is the internet, which means nothing's ever truly gone, and the images are still going up faster than Activision can remove them.

Whilst the map images have since been removed, the leaker claims the maps have partial titles, many of which will be familiar to Call of Duty players.

Read more

Xbox console sales decline continues

The decline in Xbox console sales continued through the last financial quarter, Microsoft has confirmed.

In what is now a familiar pattern - and one not limited to Microsoft - revenue from gaming hardware fell again, this time by 42 percent. That's down further on the 31 percent fall reported back in April this year, for the quarter before.

Microsoft's overall gaming revenue was up by 44 percent, thanks to a significant boost from the addition of Activision Blizzard, which Microsoft bought for $68.7bn before its latest round of layoffs. Without Activision, growth would have sat at around three percent.

Read more

Elden Ring update boosts Spirit Ashes, but missing translation staff yet to be added

A new update for Elden Ring has been released, bringing multiple balance adjustments and boosting Spirit Ashes.

Update 1.13 is now available on all platforms. However, localisation staff removed from the credits are yet to be added back in.

As previously reported by Eurogamer, some Latin American translation staff were removed from the credits with the release of the Shadow of the Erdtree DLC, differing from the base game.

Read more

PSA: This weekend is your last chance to buy from the Xbox 360 online marketplace

This is your friendly reminder that Microsoft is set to close its Xbox 360 digital store on 29th July – that's next Monday – so you have just a few days left to make the most of those last discounts on some of the best Xbox 360 games of the generation.

Microsoft announced a raft of discounts on Xbox 360 digital games back in May. Whilst some games will live on via other platforms and services – including Microsoft's comprehensive backwards compatibility system – there are a handful of games that will disappear from sale forever. So, if you've ever fancied one, now's the time to pick it up.

X user Kalyoshika has shared a list of the games/DLC that "will not survive", as well as "a couple of games that are going from cheap, easy-to-get digital copies" to "impossible-to-get, expensive, piracy only, jump-through-hoops to play".

Read more

Apple's latest iOS 18 beta walks back some changes to the redesigned Photos app

Apple is pumping the brakes on some of its updates to the Photos app in iOS 18. The company made some changes — removing some features and tweaking others — on Monday to address user feedback. The pared-down version can be found in the software’s fifth developer beta, which app makers can install today.

The biggest change is that Apple removed the Carousel from the Photos app. The iOS 18 feature used "on-device intelligence" (which, confusingly, isn't the same as Apple Intelligence) to aggregate what it thought was your best content, placing it in a swipeable row. Previously found to the right of the photo grid, it's now gone altogether, helping Apple clean up one of the features that earned a healthy dose of complaints from beta testers.

In addition, Apple tweaked the All Photos view in today’s update to show more of the photos grid. The company also added Recently Saved content to the Recent Days collection. Finally, Apple made albums easier to find for users with more than one. (The difficulty of locating that section was a frequently echoed complaint among testers.)

Apple pitched the changes to the Photos app as one of the pillars of its 2024 software update. Although the app is streamlined into a single view and designed to be more customizable, it too often ends up as a mishmash of extra features most people won’t need, sometimes getting in the way of finding what you’re looking for.

A Reddit thread from July with over 1,000 upvotes gave voice to some of the most frequent complaints. “Once again taking a rapid-use app and making it into an experience for no reason,” u/thiskillstheredditor commented. “I just want a camera roll and maybe the ability to sort photos by location. It was perfectly fine, if maybe a bit bloated, before. But this is an unmitigated mess.”

Time will tell if today’s updates are enough to clean up the app’s user experience ahead of iOS 18’s fall launch to the public. The changes aren’t yet in the public beta but will likely appear there in the next version or soon after.

Safari beta lets you selectively block distractions like pop-ups

Ahead of the full release of iOS 18, iPadOS 18, macOS Sequoia and more, Apple continues to bring updates to the betas it's made available to early testers. Today, the company has dropped the fifth developer beta for those platforms, and with it come a few changes to Safari and Photos. Specifically, Apple's browser is getting some tools that could make surfing today's cluttered and overwhelming web pages a lot less distracting, with something called Distraction Control.

Is Safari's Distraction Control an ad blocker?

To be clear, this isn't intended to be an ad blocker. It's for parts of a page that distract you, like an overlay asking you to subscribe or even requests to use cookies. When you land on a website, you can press the Page Menu button in the Search field (where the Reader and Viewer buttons are). There, you can tap "Hide Distracting Items" and choose which parts of the page you want to filter out. Those elements will then stay hidden on that domain on repeated visits.

There are a few important caveats, though. The first time you click on Distraction Control, Apple will inform you that it won't permanently remove ads or other areas where content might change or get updated. Since on-page banner ads usually refresh on each visit, this renders Distraction Control useless for those elements. 

You'll also be the one selecting which parts of the site to hide; there's no artificial intelligence automatically detecting which components might be deemed distracting. You'll see a blue outline over certain areas and can tap to select them. According to Apple, nothing will be hidden unless a user proactively selects it. You'll also be able to unhide items afterwards by going back to the hide icon in the search field and choosing "Show hidden items."

If something you've chosen to block, like a headline or an ad, has changed in any way, it will resurface upon your next visit. 

How does Distraction Control handle those pesky GDPR cookie requests?

Theoretically, you would also be able to use Distraction Control to hide the GDPR-stipulated cookie permission dialogs. If you choose to block those, the website would just be told you closed its request without an answer. Based on the legal requirements in different regions, the website would then have to proceed based on that information.

It's not yet clear how Distraction Control will handle paywalls, especially since there are different ways that content is protected. 

The fifth developer beta also brings with it features that were teased at WWDC, like a redesigned Reader and Highlights, which brings up summarized information from a website like a business' hours or phone number. There's also a new Viewer experience that kicks in when Safari detects a video on the page and puts it front and center. It'll also give you system playback controls in this mode, including picture-in-picture.

If you're curious about how the new tools and Distraction Control work, you can run Apple's developer beta. Just know that since you'll be opting in to preview software, there may be bugs or quirks, so make sure to back up your data before you proceed. According to the information accompanying the iOS 18 beta 5 update, it requires 7.11GB of storage, too.

Update, August 5 2024, 1:31PM ET: This story has been updated to clarify that hiding distracting items only applies to that specific domain moving forward, and not all websites across the internet.

AI/ML’s Role In Design And Test Expands

The role of AI and ML in test keeps growing, providing significant time and money savings that often exceed initial expectations. But it doesn’t work in all cases, sometimes even disrupting well-tested process flows with questionable return on investment.

One of the big attractions of AI is its ability to apply analytics to large data sets that are otherwise limited by human capabilities. In the critical design-to-test realm, AI can address problems such as tool incompatibilities between the design set-up, simulation, and ATE test program, which typically slow debugging and development efforts and remain some of the most time-consuming and costly aspects of design-to-test.

“During device bring-up and debug, complex software/hardware interactions can expose the need for domain knowledge from multiple teams or stakeholders, who may not be familiar with each other’s tools,” said Richard Fanning, lead software engineer at Teradyne. “Any time spent doing conversions or debugging differences in these set-ups is time wasted. Our toolset targets this exact problem by allowing all set-ups to use the same set of source files so everyone can be sure they are running the same thing.”

ML/AI can help keep design teams on track, as well. “As we drive down this technology curve, the analytics and the compute infrastructure that we have to bring to bear becomes increasingly more complex and you want to be able to make the right decision with a minimal amount of overkill,” said Ken Butler, senior director of business development in the ACS data analytics platform group at Advantest. “In some cases, we are customizing the test solution on a die-by-die type of basis.”

But despite the hype, not all tools work well in every circumstance. “AI has some great capabilities, but it’s really just a tool,” said Ron Press, senior director of technology enablement at Siemens Digital Industries Software, in a recent presentation at a MEPTEC event. “We still need engineering innovation. So sometimes people write about how AI is going to take away everybody’s job. I don’t see that at all. We have more complex designs and scaling in our designs. We need to get the same work done even faster by using AI as a tool to get us there.”

Speeding design to characterization to first silicon
In the face of ever-shrinking process windows and the lowest allowable defectivity rates, chipmakers continually are improving the design-to-test processes to ensure maximum efficiency during device bring-up and into high volume manufacturing. “Analytics in test operations is not a new thing. This industry has a history of analyzing test data and making product decisions for more than 30 years,” said Advantest’s Butler. “What is different now is that we’re moving to increasingly smaller geometries, advanced packaging technologies and chiplet-based designs. And that’s driving us to change the nature of the type of analytics that we do, both in terms of the software and the hardware infrastructure. But from a production test viewpoint, we’re still kind of in the early days of our journey with AI and test.”

Nonetheless, early adopters are building out the infrastructure needed for in-line compute and AI/ML modeling to support real-time inferencing in test cells. And because no one company has all the expertise needed in-house, partnerships and libraries of applications are being developed with tool-to-tool compatibility in mind.

“Protocol libraries provide out-of-the-box solutions for communicating common protocols. This reduces the development and debug effort for device communication,” said Teradyne’s Fanning. “We have seen situations where a test engineer has been tasked with talking to a new protocol interface, and saved significant time using this feature.”

In fact, data compatibility is a consistent theme, from design all the way through to the latest developments in ATE hardware and software. “Using the same test sequences between characterization and production has become key as the device complexity has increased exponentially,” explained Teradyne’s Fanning. “Partnerships with EDA tool and IP vendors is also key. We have worked extensively with industry leaders to ensure that the libraries and test files they output are formats our system can utilize directly. These tools also have device knowledge that our toolset does not. This is why the remote connect feature is key, because our partners can provide context-specific tools that are powerful during production debug. Being able to use these tools real-time without having to reproduce a setup or use case in a different environment has been a game changer.”

Serial scan test
But if it seems as if all the configuration changes are happening on the test side, it's important to take stock of substantial changes in the approach to multi-core design for test.

Tradeoffs during the iterative process of design for test (DFT) have become so substantial in the case of multi-core products that a new approach has become necessary.

“If we look at the way a design is typically put together today, you have multiple cores that are going to be produced at different times,” said Siemens’ Press. “You need to have an idea of how many I/O pins you need to get your scan channels, the deep serial memory from the tester that’s going to be feeding through your I/O pins to this core. So I have a bunch of variables I need to trade off. I have the number of pins going to the core, the pattern size, and the complexity of the core. Then I’ll try to figure out what’s the best combination of cores to test together in what is called hierarchical DFT. But as these designs get more complex, with upwards of 2,500 cores, that’s a lot of tradeoffs to figure out.”

Press noted that applying AI with the same architecture can provide 20% to 30% higher efficiency, but an improved methodology based on packetized scan test (see figure 1) actually makes more sense.


Fig. 1: Advantages to the serial scan network (SSN) approach. Source: Siemens

“Instead of having tester channels feeding into the scan channels that go to each core, you have a packetized bus and packets of data that feed through all the cores. Then you instruct the cores when their packet information is going to be available. By doing this, you don’t have as many variables you need to trade off,” he said. At the core level, each core can be optimized for any number of scan channels and patterns, and the I/O pin count is no longer a variable in the calculation. “Then, when you put it into this final chip, it delivers from the packets the amount of data you need for that core, and that can work with any size serial bus, in what is called a serial scan network (SSN).”

Some of the results reported by Siemens EDA customers (see figure 2) highlight both supervised and unsupervised machine learning implementation for improvements in diagnosis resolution and failure analysis. DFT productivity was boosted by 5 to 10X using the serial scan network methodology.


Fig. 2: Realized benefits using machine learning and the serial scan network approach. Source: Siemens

What slows down AI implementation in HVM?
In the transition from design to testing of a device, the application of machine learning algorithms can enable a number of advantages, from better performance-based pairing of chiplets for use in an advanced package to test time reduction. For example, only a subset of high-performing devices may require burn-in.

“You can identify scratches on wafers, and then bin out the dies surrounding those scratches automatically within wafer sort,” said Michael Schuldenfrei, fellow at NI/Emerson Test & Measurement. “So AI and ML all sounds like a really great idea, and there are many applications where it makes sense to use AI. The big question is, why isn’t it really happening frequently and at-scale? The answer to that goes into the complexity of building and deploying these solutions.”

Schuldenfrei summarized four key steps in ML’s lifecycle, each with its own challenges. In the first phase, the training, engineering teams use data to understand a particular issue and then build a model that can be used to predict an outcome associated with that issue. Once the model is validated and the team wants to deploy it in the production environment, it needs to be integrated with the existing equipment, such as a tester or manufacturing execution system (MES). Models also mature and evolve over time, requiring frequent validation of the data going into the model and checking to see that the model is functioning as expected. Models also must adapt, requiring redeployment, learning, acting, validating and adapting, in a continuous circle.

“That eats up a lot of time for the data scientists who are charged with deploying all these new AI-based solutions in their organizations. Time is also wasted in the beginning when they are trying to access the right data, organizing it, connecting it all together, making sense of it, and extracting features from it that actually make sense,” said Schuldenfrei.

Further difficulties arise in a distributed semiconductor manufacturing environment, in which many different test houses are situated in various locations around the globe. “By the time you finish implementing the ML solution, your model is stale and your product is probably no longer bleeding edge, so it has lost its actionability by the time the model needs to make a decision that actually impacts either the binning or the processing of that particular device,” said Schuldenfrei. “So actually deploying ML-based solutions in a production environment with high-volume semiconductor test is very far from trivial.”

He cited a 2015 Google paper which argued that the ML code itself is both the smallest and easiest part of the whole exercise, [1] whereas building infrastructure, collecting data, extracting features, verifying data, and managing model deployments are the most challenging parts.

Changes from design through test ripple through the ecosystem. “People who work in EDA put lots of effort into design rule checking (DRC), meaning we’re checking that the work we’ve done and the design structure are safe to move forward because we didn’t mess anything up in the process,” said Siemens’ Press. “That’s really important with AI — what we call verifiability. If we have some type of AI running and giving us a result, we have to make sure that result is safe. This really affects the people doing the design, the DFT group, and the people in test engineering who have to take these patterns and apply them.”

There are a multitude of ML-based applications for improving test operations. Advantest’s Butler highlighted some of the apps customers are pursuing most often, including search time reduction, shift-left testing, test time reduction, and chiplet pairing (see figure 3).

“For minimum voltage, maximum frequency, or trim tests, you tend to set a lower limit and an upper limit for your search, and then you search across that range to find the minimum voltage for this particular device,” he said. “Those limits are set based on process split, and they may be fairly wide. But if you have analytics that you can bring to bear, then the AI- or ML-type techniques can basically tell you where this die lies on the process spectrum. Perhaps it was fed forward from an earlier insertion, and perhaps you combine it with what you’re doing at the current insertion. That kind of inference can help you narrow the search limits and speed up that test. A lot of people are very interested in this application, and some are doing it in production to reduce search time for time-intensive tests.”


Fig. 3: Opportunities for real-time and/or post-test improvements to pair or bin devices, improve yield, throughput, reliability or cost using the ACS platform. Source: Advantest
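Butler's search-time example lends itself to a toy illustration. In the sketch below, the device model, voltages, and the "narrowed" window supplied by upstream analytics are all invented; the point is only that fewer search steps are needed when feed-forward data bounds the window:

# Toy Vmin search: a feed-forward process classification narrows the search
# window, cutting the number of test steps. Device model and numbers invented.

def device_passes(vdd, true_vmin=0.78):
    # Stand-in for running the functional pattern at a given supply voltage.
    return vdd >= true_vmin

def search_vmin(lo_mv, hi_mv, step_mv=5):
    # Simple tester-style sweep: walk downward from the upper limit.
    steps, v = 0, hi_mv
    while v - step_mv >= lo_mv and device_passes((v - step_mv) / 1000.0):
        v -= step_mv
        steps += 1
    return v / 1000.0, steps

# Wide limits set by process split vs. limits narrowed by upstream analytics
# that classified this die as "slow corner" (hypothetical feed-forward data).
wide = search_vmin(600, 1000)
narrow = search_vmin(740, 840)
print("wide window:   Vmin=%.3f in %d steps" % wide)
print("narrow window: Vmin=%.3f in %d steps" % narrow)

Both searches land on the same Vmin, but the narrowed window reaches it in roughly a quarter of the steps, which is exactly the test-time saving being described.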

“The idea behind shift left is perhaps I have a very expensive test insertion downstream or a high package cost,” Butler said. “If my yield is not where I want it to be, then I can use analytics at earlier insertions to predict which devices are likely to fail at the later insertion, and then downgrade or scrap those die to optimize downstream test insertions, raising the yield and lowering overall cost. Test time reduction is very simply the addition or removal of test content, skipping tests to reduce cost. Or you might want to add test content for yield improvement.”
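A minimal shift-left sketch, using synthetic wafer-sort parametrics and a plain logistic regression to flag dies likely to fail a later insertion; the feature names, thresholds, and data are invented:

# Shift-left sketch: use earlier-insertion parametrics to predict which dies
# will fail an expensive later insertion, then downgrade/scrap them early.
# Data, features, and thresholds are synthetic; real flows need far more care.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Wafer-sort features: [leakage (a.u.), ring-oscillator frequency (a.u.)]
X = rng.normal(0, 1, size=(2000, 2))
# Synthetic ground truth: high leakage plus slow silicon tends to fail later.
p_fail = 1 / (1 + np.exp(-(2.0 * X[:, 0] - 1.5 * X[:, 1] - 2.0)))
y = rng.random(2000) < p_fail

clf = LogisticRegression().fit(X[:1500], y[:1500])

# Decide at wafer sort: scrap dies whose predicted fail probability is high.
p = clf.predict_proba(X[1500:])[:, 1]
scrap = p > 0.8     # threshold trades early yield loss vs. downstream test cost
print(f"flagged {scrap.sum()} of 500 dies for downgrade/scrap before packaging")
print(f"actual fail rate among flagged dies: {y[1500:][scrap].mean():.2f}")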

“If I have a multi-tiered device, and it’s not going to pass bin 1 criteria, but maybe it’s bin 2 if I add some additional content, then people may be looking at analytics to try to make those decisions. Finally, two things go together in my mind: chiplet designs and smart pairing. The classic example is a processor die with a stack of high bandwidth memory on top of it. Perhaps I’m interested in high performance in some applications and low power in others. I want to be able to match the content and classify die as they’re coming through the test operation, and then downstream do pick-and-place and put them together in such a way that I maximize the yield for multiple streams of data. Similar kinds of things apply for achieving a low power footprint and carbon footprint.”
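Butler's pairing example can be sketched as a greedy sort-and-match; real flows would optimize across whole populations, and the speed grades below are invented:

# Smart-pairing sketch: match processor dies with HBM stacks so fast logic
# gets fast memory (performance SKU). Greedy sort-and-zip for illustration.

processors = [("P1", 3.9), ("P2", 3.2), ("P3", 4.1), ("P4", 3.5)]  # (id, GHz)
hbm_stacks = [("H1", 6.4), ("H2", 7.2), ("H3", 5.8), ("H4", 6.9)]  # (id, Gb/s)

# Sort both by performance and zip, pairing like with like, which tends to
# maximize the number of assembled parts meeting the top-bin spec.
procs = sorted(processors, key=lambda d: d[1], reverse=True)
mems = sorted(hbm_stacks, key=lambda d: d[1], reverse=True)

for (pid, ghz), (mid, gbps) in zip(procs, mems):
    sku = "performance" if ghz >= 3.8 and gbps >= 6.8 else "mainstream"
    print(f"{pid} ({ghz} GHz) + {mid} ({gbps} Gb/s) -> {sku} SKU")

Pairing the fastest die with the fastest memory yields two performance SKUs here; random pick-and-place could easily have yielded none.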

Generative AI
The question that inevitably comes up when discussing the role of AI in semiconductors is whether large language models like ChatGPT can prove useful to engineers working in fabs. Early work shows some promise.

“For example, you can ask the system to build an outlier detection model for you that looks for parts that are five sigma away from the center line, saying ‘Please create the script for me,’ and the system will create the script. These are the kinds of automated, generative AI-based solutions that we’re already playing with,” said Schuldenfrei. “But from everything I’ve seen so far, there is still quite a lot of work to be done to get these systems to provide outputs with high enough quality. At the moment, the amount of human interaction needed afterward to fix problems with the algorithms or models that generative AI produces is still quite significant.”
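The script being requested in that example is genuinely short, which is why it is a natural generative-AI target. A plausible version of what such a generated script might look like, with an invented CSV layout and column name:

# Roughly the kind of script the generative assistant is being asked for:
# flag parts more than five sigma from the center line of a parametric test.
# The file name and column name are invented for illustration.
import pandas as pd

df = pd.read_csv("parametric_results.csv")        # one row per tested part
center = df["idd_ma"].mean()
sigma = df["idd_ma"].std()

outliers = df[(df["idd_ma"] - center).abs() > 5 * sigma]
print(f"{len(outliers)} parts beyond 5 sigma of the center line")
outliers.to_csv("five_sigma_outliers.csv", index=False)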

A lingering question is how to access the test programs needed as training data when everyone is protecting important test IP. “Most people value their test IP and don’t necessarily want to set up guardrails around the training and utilization processes,” Butler said. “So finding a way to accelerate the overall process of developing test programs while protecting IP is the challenge. It’s clear this kind of technology is going to be brought to bear, just like we already see in the software development process.”

Failure analysis
Failure analysis is typically a costly and time-consuming endeavor for fabs because it requires a trip back in time to gather wafer processing, assembly, and packaging data specific to a particular failed device, known as a returned material authorization (RMA). Physical failure analysis is performed in an FA lab, using a variety of tools to trace the root cause of the failure.

While scan diagnostic data has been used for decades, a newer approach involves pairing a digital twin with scan diagnostics data to find the root cause of failures.

“Within test, we have a digital twin that does root cause deconvolution based on scan failure diagnosis. So instead of having to look at the physical device and spend time trying to figure out the root cause, since we have scan, we have millions and millions of virtual sample points,” said Siemens’ Press. “We can reverse-engineer what we did to create the patterns and figure out where the mis-compare happened within the scan cells deep within the design. Using YieldInsight and unsupervised machine learning trained on a bunch of data, we can very quickly pinpoint the fail locations. This allows us to run thousands, or tens of thousands, of fail diagnoses in a short period of time, giving us the opportunity to identify the systematic yield limiters.”
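In the same spirit, though far simpler than the YieldInsight flow Press describes, an unsupervised pass over synthetic diagnosis callouts can separate a spatially systematic fail location from random defects:

# Unsupervised sketch: cluster per-die diagnosis callouts to surface
# systematic fail locations vs. scattered random defects. Data is synthetic;
# this is not the actual YieldInsight algorithm.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)

# Suspect (x, y) locations from scan diagnosis across many failing dies:
systematic = rng.normal(loc=(120, 340), scale=3, size=(60, 2))  # one hot spot
randoms = rng.uniform(0, 1000, size=(40, 2))                    # scattered
callouts = np.vstack([systematic, randoms])

labels = DBSCAN(eps=15, min_samples=10).fit_predict(callouts)
for lbl in set(labels) - {-1}:                    # -1 marks unclustered noise
    pts = callouts[labels == lbl]
    print(f"systematic candidate: {len(pts)} callouts near "
          f"({pts[:, 0].mean():.0f}, {pts[:, 1].mean():.0f})")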

Yet another approach that is gaining steam is using on-die monitors to access specific performance information in lieu of physical FA. “What is needed is deep data from inside the package to monitor performance and reliability continuously, which is what we provide,” said Alex Burlak, vice president of test and analytics at proteanTecs. “For example, if the suspected failure is from the chiplet interconnect, we can help the analysis using deep data coming from on-chip agents instead of taking the device out of context and into the lab (where you may or may not be able to reproduce the problem). What’s more, the ability to send back data rather than the device can in many cases pinpoint the problem, saving the expensive RMA and failure analysis procedure.”
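As a hypothetical illustration of that triage, and not proteanTecs' actual data model, field telemetry from on-die monitors might be screened like this before deciding whether a physical RMA is even needed:

# Sketch of triaging a suspected chiplet-interconnect failure from on-die
# monitor data instead of physical FA. Agent names and margins are invented.

lane_margins_ps = {        # timing margin reported per interconnect lane
    "lane_00": 42.0, "lane_01": 39.5, "lane_02": 6.1,   # lane_02 looks sick
    "lane_03": 41.2, "lane_04": 38.8, "lane_05": 40.7,
}

MARGIN_FLOOR_PS = 15.0     # below this, the lane is a credible failure cause

suspects = {k: v for k, v in lane_margins_ps.items() if v < MARGIN_FLOOR_PS}
if suspects:
    print("interconnect suspects from field telemetry:", suspects)
    print("-> target these lanes first; a physical RMA may be unnecessary")
else:
    print("interconnect margins healthy; look elsewhere for the root cause")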

Conclusion
The enthusiasm around AI and machine learning is being met by robust infrastructure changes in the ATE community to accommodate the need for real-time inferencing on test data and test optimization for higher yield, higher throughput, and chiplet classification for multi-chiplet packages. For multi-core designs, packetized test, commercialized as the SSN methodology, provides a more flexible approach that optimizes each core for its own scan channel count, pattern set, and bus width needs.

The number of testing applications that can benefit from AI continues to rise, including test time reduction, Vmin/Fmax search reduction, shift left, smart pairing of chiplets, and overall power reduction. New developments like identical source files for all setups across design, characterization, and test help speed the critical debug and development stage for new products.

Reference

  1. D. Sculley, et al., “Hidden Technical Debt in Machine Learning Systems,” NeurIPS 2015. https://proceedings.neurips.cc/paper_files/paper/2015/file/86df7dcfd896fcaf2674f757a2463eba-Paper.pdf

The post AI/ML’s Role In Design And Test Expands appeared first on Semiconductor Engineering.

Valve "White Sands" project spotted on Starfield voice actor portfolio, prompting Half-Life speculation

As Hamlet requested of Horatio, it is time to once again absent myself from felicity awhile, and in this harsh world draw my breath in pain to tell you that the Half-Life 3 speculators are at it again. Over the weekend, the discovery of a mystery Valve project called "White Sands" on a voice actor's portfolio has set tongues and fingers wagging about potential Half-Life news in the offing.

Read more

Valve "White Sands" project spotted on Starfield voice actor portfolio, prompting Half-Life speculation

As Hamlet requested of Horatio, it is time to once again absent myself from felicity awhile, and in this harsh world draw my breath in pain to tell you that the Half-Life 3 speculators are at it again. Over the weekend, the discovery of a mystery Valve project called "White Sands" on a voice actor's portfolio has set tongues and fingers wagging about potential Half-Life news in the offing.

Read more

Five new Steam games you probably missed (August 5, 2024)

On an average day about a dozen new games are released on Steam. And while we think that's a good thing, it can be understandably hard to keep up with. Potentially exciting gems are sure to be lost in the deluge of new things to play unless you sort through every single game that is released on Steam. So that’s exactly what we’ve done. If nothing catches your fancy this week, we've gathered the best PC games you can play right now and a running list of the 2024 games that are launching this year. 

Slot Waste

Steam page
Release: August 1
Developer: Pickpanpuck Productions

Slot Waste is about an inexplicable production line of unknown utility. It's your job as "the spirit of the factory" to aid each component of the factory line; in other words, Slot Waste is made up of ten surreal mini-games. This isn't a horror game per se, but it's definitely unsettling, with creatures of mysterious provenance drawn into the task of powering these bizarre systems and contraptions. I'm reminded of the Tom Waits song 'What's He Building?'. This is what he's building, probably.

Motordoom

Steam page
Release: August 3
Developer: Hobo Cat Games

It's hard to get excited about yet another Vampire Survivors clone, but the genre has a bunch of brilliant ideas to pillage, and Motordoom is one of the most interesting evolutions of the survivor format to date: it's a third-person "freestyle-sports" roguelite shooter. So imagine Rollerdrome, replace its vibrant comic book art style with grimy PS2 textures, and add an ever-growing number of swarming enemies into tight, trick-friendly arenas. Like Vampire Survivors, each map is strewn with blue XP gems, but you'll also accrue points for performing impressive trick and kill combos. Oh, and if all this sounds like too much, you can just toggle auto-shoot and focus on pulling off stunts.

Kitsune Tails

Steam page
Release: August 2
Developer: Kitsune Games

I was sold on Kitsune Tails right away by the gorgeous 16-bit style pixel art, which is a pretty unambiguous salute to Super Mario Bros. 3. This platformer also borrows a lot of ideas from that classic, including outfits that furnish special abilities. The chief distinguishing quality here is that instead of starring Italian plumbers, it stars, in the words of our sibling site GamesRadar, "lesbian fox girls" (kitsunes are mythical foxes from Japanese folklore). I'm especially excited by the prospect of post-game kaizo levels, for the masochists among us who take dextrous platforming way too seriously.

Smack Studio

Steam page
Release: August 1
Developer: ThirdPixel Interactive

After a stint in Early Access, this platform fighter with a huge suite of user-generated content tools has hit 1.0. So it's Smash Bros Maker, kinda, and the creation tools sound pretty impressive: the editor automatically turns your 2D pixel art into 3D animations, with a process that "maps 2D images to bones in a 3D skeleton". You can also edit animations frame by frame and, of course, create special effects. As for the fights themselves, Smack Studio has full online support with rollback netcode, as well as local multiplayer. Hopefully it can build a huge community of brilliant creations, and not just 100 variations on Tails.

Malware

Steam page
Release: August 1
Developer: Odd Games

This game makes me anxious. It simulates an installation wizard hellbent on tricking you into installing malware on your computer. That means you'll need to be super vigilant with every new prompt, double- and triple-checking the meaning behind seemingly ignorable auto-checked options like "use the information assistant" (my skin crawls imagining the janky UI of this impossible-to-delete program). As you become more adept at defying underhanded malware installations, you'll start to get requests for help from other hapless '90s PC users. You'll become an anti-malware hero, in other words. This looks like some amusing fun, and it even supports Steam Workshop. Make your own fake malware!


Xbox console sales decline continues

The decline in Microsoft's Xbox console sales continued through the last financial quarter, the company has confirmed.

In what is now a familiar pattern - and one not limited to Microsoft - revenue from gaming hardware fell again, this time by 42 percent. That's a steeper drop than the 31 percent fall reported back in April this year, for the quarter before.

Microsoft's overall gaming revenue was up 44 percent, thanks to a significant boost from the addition of Activision Blizzard, which Microsoft bought for $68.7bn before its latest round of layoffs. Without Activision, growth would have sat at around three percent.

Read more

Elden Ring update boosts Spirit Ashes, but missing translation staff yet to be added

A new update for Elden Ring has been released, bringing multiple balance adjustments and boosting Spirit Ashes.

Update 1.13 is now available on all platforms. However, localisation staff removed from the credits have still not been added back in.

As previously reported by Eurogamer, some Latin American translation staff were removed from the credits with the release of the Shadow of the Erdtree DLC, differing from the base game.

Read more

PSA: This weekend is your last chance to buy from the Xbox 360 online marketplace

This is your friendly reminder that Microsoft is set to close its Xbox 360 digital store on 29th July – that's next Monday – so you have just a few days left to make the most of those last discounts on some of the best Xbox 360 games of the generation.

Microsoft announced a raft of discounts on Xbox 360 digital games back in May. Whilst some games will live on via other platforms and services – including Microsoft's comprehensive backwards compatibility system – there are a handful of games that will disappear from sale forever. So, if you've ever fancied one, now's the time to pick it up.

X user Kalyoshika has shared a list of the games/DLC that "will not survive", as well as "a couple of games that are going from cheap, easy-to-get digital copies" to "impossible-to-get, expensive, piracy only, jump-through-hoops to play".

Read more

Meta's Threads has 200 million users

The Threads app has passed the 200 million user mark, according to Meta exec Adam Mosseri. The milestone comes one day after Mark Zuckerberg said that the service was “about” to hit 200 million users during the company’s latest earnings call.

While Threads is still relatively tiny compared to Meta’s other apps, it has grown at a much faster clip. Zuckerberg previously announced 175 million users last month as Threads marked its one-year anniversary, and the Meta CEO has repeatedly speculated that it could be the company’s next one-billion-user app.

“We've been building this company for 20 years, and there just are not that many opportunities that come around to grow a billion-person app,” Zuckerberg said. “Obviously, there's a ton of work between now and there.”

Continuing to grow the app’s user base will be key to Meta’s ability to eventually monetize Threads, which currently has no ads or business model. “All these new products, we ship them, and then there's a multi-year time horizon between scaling them and then scaling them into not just consumer experiences but very large businesses,” Zuckerberg said.

While Threads has so far been able to capitalize on the chaos and controversy surrounding X, Meta is still grappling with how to position an app that is widely viewed as an alternative to X. Mosseri and Zuckerberg have said they don’t want the app to promote political content to users who don’t explicitly ask for it. This policy has even raised questions among some Meta employees, The Information recently reported.

Threads’ “for you” algorithm is also seen as slow to keep up with breaking news and current events. Mosseri recently acknowledged the issue. “We’re definitely not fast enough yet, and we’re actively working to get better there,” he wrote in a post on Threads.

This article originally appeared on Engadget at https://www.engadget.com/metas-threads-has-200-million-users-211656147.html?src=rss
